
Moving data from MySQL to Cassandra
I had a relational database that I wanted to migrate to Cassandra. Cassandra's sstableloader provides the option to load existing data from flat files into a Cassandra ring. Hence it can be used as a way to migrate data from relational databases to Cassandra, as most relational databases let us export their data into flat files.
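For example, a MySQL table can be exported to flat files with mysqldump's --tab option (a minimal sketch; the output directory below is an assumption, and it must be writable by the MySQL server):

# Writes Category.sql (the schema) and Category.txt (comma-separated data) into /tmp/export.
$ mysqldump -u root -p --tab=/tmp/export --fields-terminated-by=',' shopping_cart_db Category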

Sqoop gives the option to do this migration effectively. Interestingly, DataStax Enterprise provides everything we want in the big data space as a single package. This includes Cassandra, Hadoop, Hive, Pig, Sqoop, and Mahout, which comes in handy in this case.

Under the resources directory, you may find the configurations specific to cassandra, dse, hadoop, hive, log4j-appender, mahout, pig, solr, sqoop, and tomcat.

For example, from resources/hadoop/bin, you may format the Hadoop name node as usual, using

./hadoop namenode -format

* Download and extract the DataStax Enterprise binary archive (dse-2.1-bin.tar.gz).

* Follow the documentation, which is also available as a PDF.

* Migrating a relational database to Cassandra is documented and has also been blogged about.

* Before starting DataStax, make sure that JAVA_HOME is set. This can also be set directly in conf/hadoop-env.sh.
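For example, you may append a line like the below to conf/hadoop-env.sh (the JDK path is an assumption; point it to your own installation):

# Example path only: use the location of your installed JDK.
export JAVA_HOME=/usr/lib/jvm/java-6-sun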

* Include the connector to the relational database in a location reachable by Sqoop.

I put mysql-connector-java-5.1.12-bin.jar under resources/sqoop.
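For instance, assuming the connector jar has been downloaded into the DSE root directory:

$ cp mysql-connector-java-5.1.12-bin.jar resources/sqoop/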

* Set the environment

$ bin/dse-env.sh

* Start DataStax Enterprise as an Analytics node.

$ sudo bin/dse cassandra -t

where cassandra starts the Cassandra process plus CassandraFS, and the -t option starts the Hadoop JobTracker and TaskTracker processes.

If you start without the -t flag, the below exception will be thrown during the operations discussed later:

No jobtracker found
Unable to run : jobtracker not found

Hence do not miss the -t flag.

* Start the Cassandra CLI to view the Cassandra keyspaces, and you will be able to view the data in Cassandra once you migrate it using Sqoop as given below.

$ bin/cassandra-cli -host localhost -port 9160

Confirm that it is connected to the test cluster created on port 9160, using the below command from the CLI.

[default@unknown] describe cluster;
Cluster Information:
   Snitch: com.datastax.bdp.snitch.DseDelegateSnitch
   Partitioner: org.apache.cassandra.dht.RandomPartitioner
   Schema versions:
        f5a19a50-b616-11e1-0000-45b29245ddff: [127.0.1.1]

If you have missed mentioning the host/port (starting the CLI with just bin/cassandra-cli) or given them wrong, you will get the response "Not connected to a cassandra instance."
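In that case, you can also connect from within the running CLI session, using the CLI's connect command:

[default@unknown] connect localhost/9160;

Now migrate a single table using Sqoop: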

$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --table Category --split-by categoryName --cassandra-keyspace shopping_cart_db --cassandra-column-family Category_cf --cassandra-row-key categoryName --cassandra-thrift-host localhost --cassandra-create-schema

The above command migrates the table "Category" of shopping_cart_db, which has the primary key categoryName, into a Cassandra keyspace named shopping_cart_db, with categoryName as the Cassandra row key. You may use the MySQL-specific --direct option, which is faster. In the command above, everything runs on localhost. For reference, the MySQL schema of the Category table:

+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| categoryName | varchar(50) | NO   | PRI | NULL    |       |
| description  | text        | YES  |     | NULL    |       |
| image        | blob        | YES  |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+

This also creates the respective Java class (Category.java) inside the working directory.
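As a quick sanity check after the import, you may list a few of the migrated rows from the CLI (a minimal sketch; the output is omitted here):

[default@unknown] use shopping_cart_db;
[default@shopping_cart_db] list Category_cf limit 3;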

To import all the tables in the database instead of a single table:

$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct

Here "-m 1" tag ensures a sequential import. If not specified, the below exception will be thrown.

ERROR tool.ImportAllTablesTool: Error during import: No primary key could be found for table Category. Please specify one with --split-by or perform a sequential import with '-m 1'.
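For a table whose composite primary key triggers this error, you could instead keep the import parallel by naming a split column explicitly in a single-table import, mirroring the command used for Category above (a sketch; the choice of orderNumber as the split column and the OrderItem_cf naming are assumptions, and -P prompts for the password, as Sqoop's own warning suggests):

$ bin/dse sqoop import --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root -P --table OrderItem --split-by orderNumber --cassandra-keyspace shopping_cart_db --cassandra-column-family OrderItem_cf --cassandra-row-key orderNumber --cassandra-thrift-host localhost --cassandra-create-schema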

To check whether the keyspace has been created:

[default@unknown] show keyspaces;
................
Keyspace: shopping_cart_db:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:1]
  Column Families:
    ColumnFamily: Category_cf
      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
      Row cache size / save period in seconds / keys to save : 0.0/0/all
      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider
      Key cache size / save period in seconds: 200000.0/14400
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 1.0
      Replicate on write: true
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
.............

[default@unknown] describe shopping_cart_db;
Keyspace: shopping_cart_db:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
    Options: [replication_factor:1]
  Column Families:
    ColumnFamily: Category_cf
      Key Validation Class: org.apache.cassandra.db.marshal.UTF8Type
      Default column value validator: org.apache.cassandra.db.marshal.UTF8Type
      Columns sorted by: org.apache.cassandra.db.marshal.UTF8Type
      Row cache size / save period in seconds / keys to save : 0.0/0/all
      Row Cache Provider: org.apache.cassandra.cache.SerializingCacheProvider
      Key cache size / save period in seconds: 200000.0/14400
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 1.0
      Replicate on write: true
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

You may also use Hive to view the databases created in Cassandra, in an SQL-like manner.

* Start Hive

$ bin/dse hive

hive> show databases;
OK
default
shopping_cart_db
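From Hive, the migrated column families can then be queried like tables (a sketch; it is assumed here that each column family surfaces as a Hive table under its own name, and the output is omitted):

hive> use shopping_cart_db;
hive> select * from category_cf limit 5;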

When the entire database is imported as above, separate Java classes are created for each of the tables, as the complete output below shows.

$ bin/dse sqoop import-all-tables -m 1 --connect jdbc:mysql://127.0.0.1:3306/shopping_cart_db --username root --password root --cassandra-thrift-host localhost --cassandra-create-schema --direct

12/06/15 15:42:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
12/06/15 15:42:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
12/06/15 15:42:11 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Category` AS t LIMIT 1
12/06/15 15:42:11 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Category.jar
12/06/15 15:42:13 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:13 INFO mapreduce.ImportJobBase: Beginning import of Category
12/06/15 15:42:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
12/06/15 15:42:15 INFO mapred.JobClient: Running job: job_201206151241_0007
12/06/15 15:42:16 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:42:25 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:42:25 INFO mapred.JobClient: Job complete: job_201206151241_0007
12/06/15 15:42:25 INFO mapred.JobClient: Counters: 18
12/06/15 15:42:25 INFO mapred.JobClient: Job Counters
12/06/15 15:42:25 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=6480
12/06/15 15:42:25 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:25 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:25 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:42:25 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:25 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:42:25 INFO mapred.JobClient: Bytes Written=2848
12/06/15 15:42:25 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:42:25 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21419
12/06/15 15:42:25 INFO mapred.JobClient: CFS_BYTES_WRITTEN=2848
12/06/15 15:42:25 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:42:25 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:42:25 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:42:25 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:42:25 INFO mapred.JobClient: Map input records=1
12/06/15 15:42:25 INFO mapred.JobClient: Physical memory (bytes) snapshot=119435264
12/06/15 15:42:25 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:42:25 INFO mapred.JobClient: CPU time spent (ms)=630
12/06/15 15:42:25 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:42:25 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2085318656
12/06/15 15:42:25 INFO mapred.JobClient: Map output records=36
12/06/15 15:42:25 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 11.4492 seconds (0 bytes/sec)
12/06/15 15:42:25 INFO mapreduce.ImportJobBase: Retrieved 36 records.
12/06/15 15:42:25 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:25 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Customer` AS t LIMIT 1
12/06/15 15:42:25 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:25 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Customer.jar
12/06/15 15:42:26 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:26 INFO mapreduce.ImportJobBase: Beginning import of Customer
12/06/15 15:42:26 INFO mapred.JobClient: Running job: job_201206151241_0008
12/06/15 15:42:27 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:42:35 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:42:35 INFO mapred.JobClient: Job complete: job_201206151241_0008
12/06/15 15:42:35 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:35 INFO mapred.JobClient: Job Counters
12/06/15 15:42:35 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=6009
12/06/15 15:42:35 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:35 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:35 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:42:35 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:35 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:42:35 INFO mapred.JobClient: Bytes Written=0
12/06/15 15:42:35 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:42:35 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21489
12/06/15 15:42:35 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:42:35 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:42:35 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:42:35 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:42:35 INFO mapred.JobClient: Map input records=1
12/06/15 15:42:35 INFO mapred.JobClient: Physical memory (bytes) snapshot=164855808
12/06/15 15:42:35 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:42:35 INFO mapred.JobClient: CPU time spent (ms)=510
12/06/15 15:42:35 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:42:35 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2082869248
12/06/15 15:42:35 INFO mapred.JobClient: Map output records=0
12/06/15 15:42:35 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.3143 seconds (0 bytes/sec)
12/06/15 15:42:35 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:35 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:35 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderEntry` AS t LIMIT 1
12/06/15 15:42:35 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderEntry.jar
12/06/15 15:42:36 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:36 INFO mapreduce.ImportJobBase: Beginning import of OrderEntry
12/06/15 15:42:36 INFO mapred.JobClient: Running job: job_201206151241_0009
12/06/15 15:42:37 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:42:45 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:42:45 INFO mapred.JobClient: Job complete: job_201206151241_0009
12/06/15 15:42:45 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:45 INFO mapred.JobClient: Job Counters
12/06/15 15:42:45 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=6381
12/06/15 15:42:45 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:45 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:45 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:42:45 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:45 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:42:45 INFO mapred.JobClient: Bytes Written=0
12/06/15 15:42:45 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:42:45 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21569
12/06/15 15:42:45 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:42:45 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:42:45 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:42:45 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:42:45 INFO mapred.JobClient: Map input records=1
12/06/15 15:42:45 INFO mapred.JobClient: Physical memory (bytes) snapshot=137252864
12/06/15 15:42:45 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:42:45 INFO mapred.JobClient: CPU time spent (ms)=520
12/06/15 15:42:45 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:42:45 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2014703616
12/06/15 15:42:45 INFO mapred.JobClient: Map output records=0
12/06/15 15:42:45 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2859 seconds (0 bytes/sec)
12/06/15 15:42:45 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:45 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `OrderItem` AS t LIMIT 1
12/06/15 15:42:45 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/OrderItem.jar
12/06/15 15:42:46 WARN manager.CatalogQueryManager: The table OrderItem contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.
12/06/15 15:42:46 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:46 INFO mapreduce.ImportJobBase: Beginning import of OrderItem
12/06/15 15:42:46 INFO mapred.JobClient: Running job: job_201206151241_0010
12/06/15 15:42:47 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:42:55 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:42:55 INFO mapred.JobClient: Job complete: job_201206151241_0010
12/06/15 15:42:55 INFO mapred.JobClient: Counters: 17
12/06/15 15:42:55 INFO mapred.JobClient: Job Counters
12/06/15 15:42:55 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=5949
12/06/15 15:42:55 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:42:55 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:42:55 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:42:55 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:42:55 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:42:55 INFO mapred.JobClient: Bytes Written=0
12/06/15 15:42:55 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:42:55 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21524
12/06/15 15:42:55 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:42:55 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:42:55 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:42:55 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:42:55 INFO mapred.JobClient: Map input records=1
12/06/15 15:42:55 INFO mapred.JobClient: Physical memory (bytes) snapshot=116674560
12/06/15 15:42:55 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:42:55 INFO mapred.JobClient: CPU time spent (ms)=590
12/06/15 15:42:55 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:42:55 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2014703616
12/06/15 15:42:55 INFO mapred.JobClient: Map output records=0
12/06/15 15:42:55 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2539 seconds (0 bytes/sec)
12/06/15 15:42:55 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:42:55 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:42:55 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Payment` AS t LIMIT 1
12/06/15 15:42:55 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:42:55 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Payment.jar
12/06/15 15:42:56 WARN manager.CatalogQueryManager: The table Payment contains a multi-column primary key. Sqoop will default to the column orderNumber only for this job.
12/06/15 15:42:56 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:42:56 INFO mapreduce.ImportJobBase: Beginning import of Payment
12/06/15 15:42:56 INFO mapred.JobClient: Running job: job_201206151241_0011
12/06/15 15:42:57 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:43:05 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:43:05 INFO mapred.JobClient: Job complete: job_201206151241_0011
12/06/15 15:43:05 INFO mapred.JobClient: Counters: 17
12/06/15 15:43:05 INFO mapred.JobClient: Job Counters
12/06/15 15:43:05 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=5914
12/06/15 15:43:05 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:43:05 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:43:05 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:43:05 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:43:05 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:43:05 INFO mapred.JobClient: Bytes Written=0
12/06/15 15:43:05 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:43:05 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21518
12/06/15 15:43:05 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:43:05 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:43:05 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:43:05 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:43:05 INFO mapred.JobClient: Map input records=1
12/06/15 15:43:05 INFO mapred.JobClient: Physical memory (bytes) snapshot=137998336
12/06/15 15:43:05 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:43:05 INFO mapred.JobClient: CPU time spent (ms)=520
12/06/15 15:43:05 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:43:05 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2082865152
12/06/15 15:43:05 INFO mapred.JobClient: Map output records=0
12/06/15 15:43:05 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2642 seconds (0 bytes/sec)
12/06/15 15:43:05 INFO mapreduce.ImportJobBase: Retrieved 0 records.
12/06/15 15:43:05 INFO tool.CodeGenTool: Beginning code generation
12/06/15 15:43:05 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `Product` AS t LIMIT 1
12/06/15 15:43:06 INFO orm.CompilationManager: HADOOP_HOME is /home/pradeeban/programs/dse-2.1/resources/hadoop/bin/..
Note: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
12/06/15 15:43:06 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-pradeeban/compile/926ddf787c73be06c4e2ad1f8fc522f1/Product.jar
12/06/15 15:43:06 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
12/06/15 15:43:06 INFO mapreduce.ImportJobBase: Beginning import of Product
12/06/15 15:43:07 INFO mapred.JobClient: Running job: job_201206151241_0012
12/06/15 15:43:08 INFO mapred.JobClient: map 0% reduce 0%
12/06/15 15:43:16 INFO mapred.JobClient: map 100% reduce 0%
12/06/15 15:43:16 INFO mapred.JobClient: Job complete: job_201206151241_0012
12/06/15 15:43:16 INFO mapred.JobClient: Counters: 18
12/06/15 15:43:16 INFO mapred.JobClient: Job Counters
12/06/15 15:43:16 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=5961
12/06/15 15:43:16 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
12/06/15 15:43:16 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
12/06/15 15:43:16 INFO mapred.JobClient: Launched map tasks=1
12/06/15 15:43:16 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
12/06/15 15:43:16 INFO mapred.JobClient: File Output Format Counters
12/06/15 15:43:16 INFO mapred.JobClient: Bytes Written=248262
12/06/15 15:43:16 INFO mapred.JobClient: FileSystemCounters
12/06/15 15:43:16 INFO mapred.JobClient: FILE_BYTES_WRITTEN=21527
12/06/15 15:43:16 INFO mapred.JobClient: CFS_BYTES_WRITTEN=248262
12/06/15 15:43:16 INFO mapred.JobClient: CFS_BYTES_READ=87
12/06/15 15:43:16 INFO mapred.JobClient: File Input Format Counters
12/06/15 15:43:16 INFO mapred.JobClient: Bytes Read=0
12/06/15 15:43:16 INFO mapred.JobClient: Map-Reduce Framework
12/06/15 15:43:16 INFO mapred.JobClient: Map input records=1
12/06/15 15:43:16 INFO mapred.JobClient: Physical memory (bytes) snapshot=144871424
12/06/15 15:43:16 INFO mapred.JobClient: Spilled Records=0
12/06/15 15:43:16 INFO mapred.JobClient: CPU time spent (ms)=1030
12/06/15 15:43:16 INFO mapred.JobClient: Total committed heap usage (bytes)=121241600
12/06/15 15:43:16 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2085318656
12/06/15 15:43:16 INFO mapred.JobClient: Map output records=300
12/06/15 15:43:16 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 9.2613 seconds (0 bytes/sec)
12/06/15 15:43:16 INFO mapreduce.ImportJobBase: Retrieved 300 records.

I found DataStax an interesting project to explore further. I have blogged about the issues that I faced on this as a learner, and how easily they can be fixed, in "Issues that you may encounter during the migration to Cassandra using DataStax/Sqoop and the fixes".
