Big Data with Hadoop (11): Sqoop

I. What Is Sqoop

1. Concept

Sqoop is a tool for transferring data between Hadoop and relational databases (RDBMS) such as MySQL or Oracle. With Sqoop you can bulk-import data from a relational database into the Hadoop Distributed File System (HDFS) and its related systems (such as HBase and Hive), and you can also export data from the Hadoop file system and its related systems back into a relational database, as shown in the figure below.

(Figure: data transfer between an RDBMS and Hadoop)

2. Basic Architecture of Sqoop

Sqoop is written in Java. Its architecture is quite simple: it integrates with HDFS, HBase, and Hive, and it uses MapReduce underneath to move the data, which provides parallelism and fault tolerance, as shown in the figure below.

(Figure: basic architecture of Sqoop)

After Sqoop receives a command from a client, its task translator converts the command into a corresponding MapReduce job, and that MapReduce job then moves the data between the RDBMS and the Hadoop system for the import or export.

The input of a Sqoop import is an RDBMS table, and the output is a set of HDFS files containing the table's data. The import runs in parallel: several map tasks can be started at the same time, each reading part of the table row by row and writing that part to one HDFS file (i.e., one map task produces one HDFS file).

A Sqoop export moves data from HDFS or Hive into a relational database. It reads the files on HDFS in parallel and turns each line into a record that is inserted into the target table. Besides import and export, Sqoop can also inspect database structures and table information.

3. Sqoop Development Workflow

In practice, data is usually stored in a relational database first. Once the data volume grows to the point where analysis or statistics would turn the relational database into a bottleneck, the data can be imported from the relational database into HBase, analyzed and aggregated with the Hive data warehouse on top of HBase, and the results written into Hive tables. Finally, Sqoop exports the analysis results back into the relational database, where they serve as supporting data for the business or are displayed on web pages.
The Sqoop development workflow is shown in the figure below.

(Figure: Sqoop development workflow)

II. Using Sqoop

To use Sqoop, you specify the tool to run and the arguments that control it. Sqoop can be started by running the bin/sqoop command from its installation directory.
The general command syntax is:

$ sqoop tool-name [tool-arguments]

Sqoop ships with many built-in tools. The following command lists all available tools:

$ sqoop help
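
Help for an individual tool can be displayed by passing the tool name to the help tool, for example:

$ sqoop help import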

1. Importing Data

The import tool loads data from a relational database into the Hadoop platform so that it can be analyzed there.

1.1 Connecting to a Database

To use the import tool, first connect to the database:

$ sqoop import --connect jdbc:mysql://database.example.com/employees

The command above connects to the database employees on the server database.example.com. Every map task uses this connection string to connect to the database.
Note that the host must not be written as localhost, because the map tasks generally do not run on the same machine. After the connection string, specify the database username and password:

$ sqoop import --connect jdbc:mysql://database.example.com/employees \
--username zhangsan --password 123456

The backslash (\) at the end of a line continues the command on the next line, just as in the Linux shell. Putting the password directly after --password places it on the command line, which is insecure. It is better to store the password in a file, either on the local file system or on HDFS, set the file's permissions to 400, and point the --password-file parameter at it:

$ sqoop import --connect jdbc:mysql://database.example.com/employees \
--username zhangsan --password-file ${user.home}/.password
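
A minimal sketch of preparing such a password file on the local file system (the file name follows the example above; echo -n is used because Sqoop reads the file content verbatim, so a trailing newline would become part of the password):

$ echo -n "123456" > $HOME/.password
$ chmod 400 $HOME/.password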

In addition, the JDBC driver jar of the relational database must be placed in the lib directory under the Sqoop installation directory, or the connection will fail. For MySQL there is no need to specify a driver class; Sqoop chooses one by default. For other databases (for example SQL Server), the driver class must be specified with the --driver parameter, as follows:

$ sqoop import --driver com.microsoft.jdbc.sqlserver.SQLServerDriver \
--connect <connect-string> ...

1.2 Selecting the Data to Import

Use the --table parameter to choose the table or view to import, for example --table employees to import the table employees.
By default all columns are selected and imported in their natural order. You can also use the --columns parameter to pick specific columns and control their order, for example --columns "name,employee_id,jobtitle".
Which rows are imported can be controlled by adding a SQL WHERE clause with the --where parameter. For example, with --where "id > 400" only the rows whose id is greater than 400 are imported.
In addition, the import tool accepts a --split-by parameter that names a column of the source table used to partition the import; by default the primary key is used. Sqoop splits the data on the values of that column and assigns the resulting ranges to different map tasks, and each map task imports the rows of its range into the Hadoop platform. In other words, the unit of work for imports and exports is the map task. A combined example is shown below.
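
As a sketch, the parameters described above can be combined into one import command. The connection details follow the earlier examples; the column list and the split column employee_id are only illustrative:

$ sqoop import --connect jdbc:mysql://database.example.com/employees \
--username zhangsan --password-file ${user.home}/.password \
--table employees \
--columns "name,employee_id,jobtitle" \
--where "id > 400" \
--split-by employee_id \
-m 4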

1.3 Free-Form Query Import

Sqoop can also import the result set of a SQL query given with the --query parameter, which then replaces --table, --columns, and --where. When --query is used, the destination must be specified with --target-dir. If several map tasks are to import the query result in parallel, each map task runs the same query but over a different slice of the data, i.e., with a different WHERE condition. In that case the query must contain the token $CONDITIONS and --split-by must be specified; each map task replaces the token with its actual condition. For example:

$ sqoop import \
--query 'SELECT a.*,b.* FROM a JOIN b on(a.id == b.id) WHERE $CONDITIONS' \
--split-by a.id --target-dir /user/foo/joinresults

If you do not want a parallel import but still want to use --query, set the number of map tasks to 1:

$ sqoop import \
--query 'SELECT a.*, b.* FROM a JOIN b on(a.id == b.id) WHERE $CONDITIONS' \
-m 1 --target-dir /user/foo/joinresults

The -m option above sets the number of map tasks; it can also be written as --num-mappers.

1.4 Importing into HBase

With the --hbase-table parameter, data can be imported into an HBase table instead of a directory on HDFS.
The value of --hbase-table is the name of the HBase table. Each row of the source table is transformed into an HBase Put operation against the target table, and every output column is placed in the same column family, whose name must be given with the --column-family parameter. The row key of the HBase table comes from one of the source columns: by default Sqoop uses the column given by --split-by as the row key column, and if that is not specified it uses the source table's primary key column (if there is one). The row key column can also be set explicitly with the --hbase-row-key parameter.

If the source table has a composite key (a primary key made up of several columns), --hbase-row-key must be specified and its value must be the comma-separated list of the key columns. The HBase row key is then generated by joining the values of those columns with an underscore.
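
For example, if the source table had a composite primary key made up of userId and addedTime (a hypothetical key, purely for illustration), the import could name both columns, and each row key would be the two column values joined by an underscore:

sqoop import \
--connect 'jdbc:mysql://example.com/testdb?characterEncoding=UTF-8' \
--username root --password 123456 \
--table user_info \
--hbase-table user_info \
--column-family baseinfo \
--hbase-row-key userId,addedTime \
-m 1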

If the HBase table or the column family does not exist, Sqoop throws an exception, so they should be created before running the import. Alternatively, if the --hbase-create-table parameter is given, Sqoop creates the missing table and column family using HBase's default settings.
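
For instance, the table and column family used in the example below could be created ahead of time from the HBase shell (a sketch; the names match the import command that follows):

$ echo "create 'user_info', 'baseinfo'" | hbase shell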

Example of importing data into HBase:

sqoop import \
--connect 'jdbc:mysql://example.com/testdb?characterEncoding=UTF-8' \
--username root --password 123456 \
--query "SELECT * FROM user_info WHERE 1=1 AND \$CONDITIONS" \
--hbase-table user_info \
--column-family baseinfo \
--hbase-row-key userId \
--split-by addedTime \
--m 2

1.5 Importing into Hive

The main job of the Sqoop import tool is to upload data into HDFS files, so when importing into Hive, Sqoop first uploads the data to HDFS, then creates the table in Hive by generating and executing a CREATE TABLE statement, and finally moves the HDFS data files into Hive's warehouse directory with a LOAD DATA INPATH statement.
Importing into Hive is very simple: just add the --hive-import parameter to the Sqoop import command. If the Hive table already exists, the --hive-overwrite parameter can be specified to replace it. After the data has been loaded into HDFS, Sqoop generates a Hive script containing a CREATE TABLE statement that defines the columns with Hive types and a LOAD DATA INPATH statement that moves the data files into Hive's warehouse directory. The script is executed by invoking the Hive installation on the machine where Sqoop runs. If the hive command is not on the PATH, the Hive installation directory can be set with the --hive-home parameter or in Sqoop's configuration file sqoop-env.sh; Sqoop then looks for the Hive executable under that directory, e.g. $HIVE_HOME/bin/hive.

$ sqoop import \
--connect jdbc:mysql://example.com:3306/testdb \
--hive-import \
--username hive \
--password hive \
--table t_users \
--hive-table t_users \
--hive-overwrite \
--fields-terminated-by '\t' \
--null-string '\\N' \
--null-non-string '\\N'
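
Once the import finishes, the result can be checked from the Hive command line, for example (assuming the hive command is on the PATH):

$ hive -e "SELECT * FROM t_users LIMIT 10;"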

2. Exporting Data
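
The export tool does the reverse of import: it reads files from HDFS (or a Hive table's warehouse directory) and inserts the rows into an existing relational table. A minimal sketch is shown below; the table name, export directory, and field delimiter are placeholders, and the target table must already exist in MySQL:

$ sqoop export \
--connect jdbc:mysql://database.example.com/employees \
--username zhangsan --password-file ${user.home}/.password \
--table employees_result \
--export-dir /user/hive/warehouse/employees_result \
--input-fields-terminated-by '\t' \
-m 2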

III. Installing and Configuring Sqoop

Sqoop is a data transfer tool built on top of Hadoop, so Hadoop must be installed before Sqoop. Hadoop installation was covered in detail earlier and is not repeated here.

1. Downloading Sqoop

Download a stable Sqoop release from the Apache site http://sqoop.apache.org. This tutorial uses sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz.
Download link: http://archive.apache.org/dist/sqoop/

2. Installing Sqoop

(1) Upload the downloaded Sqoop archive to the /opt/softwares directory on the centos01 server and extract it to /opt/modules/:

$ cd /opt/softwares
$ wget http://archive.apache.org/dist/sqoop/1.4.7/sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz
$ tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz -C /opt/modules/

(2) Rename the extracted directory to sqoop-1.4.7.

$ mv sqoop-1.4.7.bin__hadoop-2.6.0/ sqoop-1.4.7

List the installed files:

[root@centos01 modules]# ls -l
总用量 8
drwxr-xr-x. 15 root   root   4096 7月  17 12:27 bak-zookeeper-3.5.9
drwxr-xr-x. 12 hadoop hadoop  185 7月   7 08:37 hadoop-3.1.3
drwxr-xr-x.  7 hadoop hadoop  245 4月   2 2019 jdk1.8.0_211
drwxr-xr-x.  7 root   root    101 7月  29 23:49 kafka_2.12-2.5.0
drwxr-xr-x.  9 kaiyi  kaiyi  4096 12月 19 2017 sqoop-1.4.7
drwxr-xr-x.  8 root   root    161 7月  17 13:42 zookeeper-3.5.9

(3) For convenience, add the following lines to the environment variable file /etc/profile:

#SQOOP_HOME
export SQOOP_HOME=/opt/modules/sqoop-1.4.7
export PATH=$PATH:$SQOOP_HOME/bin

After adding the lines, run source /etc/profile to reload the environment variables.
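
Optionally, you can verify right away that the sqoop command is on the PATH (a quick sanity check; at this stage sqoop may still print warnings about missing HBase, HCatalog, or Accumulo, which can be ignored here):

$ source /etc/profile
$ sqoop version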

The /etc/profile file now looks like this:

[root@centos01 modules]# cat /etc/profile
# /etc/profile

# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc

# It's NOT a good idea to change this file unless you know what you
# are doing. It's much better to create a custom.sh shell script in
# /etc/profile.d/ to make custom changes to your environment, as this
# will prevent the need for merging in future updates.

pathmunge () {
    case ":${PATH}:" in
        *:"$1":*)
            ;;
        *)
            if [ "$2" = "after" ] ; then
                PATH=$PATH:$1
            else
                PATH=$1:$PATH
            fi
    esac
}

if [ -x /usr/bin/id ]; then
    if [ -z "$EUID" ]; then
        # ksh workaround
        EUID=`/usr/bin/id -u`
        UID=`/usr/bin/id -ru`
    fi
    USER="`/usr/bin/id -un`"
    LOGNAME=$USER
    MAIL="/var/spool/mail/$USER"
fi

# Path manipulation
if [ "$EUID" = "0" ]; then
    pathmunge /usr/sbin
    pathmunge /usr/local/sbin
else
    pathmunge /usr/local/sbin after
    pathmunge /usr/sbin after
fi

HOSTNAME=`/usr/bin/hostname 2>/dev/null`
HISTSIZE=1000
if [ "$HISTCONTROL" = "ignorespace" ] ; then
    export HISTCONTROL=ignoreboth
else
    export HISTCONTROL=ignoredups
fi

export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE HISTCONTROL

# By default, we want umask to get set. This sets it for login shell
# Current threshold for system reserved uid/gids is 200
# You could check uidgid reservation validity in
# /usr/share/doc/setup-*/uidgid file
if [ $UID -gt 199 ] && [ "`/usr/bin/id -gn`" = "`/usr/bin/id -un`" ]; then
    umask 002
else
    umask 022
fi

for i in /etc/profile.d/*.sh /etc/profile.d/sh.local ; do
    if [ -r "$i" ]; then
        if [ "${-#*i}" != "$-" ]; then 
            . "$i"
        else
            . "$i" >/dev/null
        fi
    fi
done

unset i
unset -f pathmunge
#JAVA_HOME
export JAVA_HOME=/opt/modules/jdk1.8.0_211
export PATH=$PATH:$JAVA_HOME/bin

#HADOOP_HOME
export HADOOP_HOME=/opt/modules/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

#SQOOP_HOME
export SQOOP_HOME=/opt/modules/sqoop-1.4.7
export PATH=$PATH:$SQOOP_HOME/bin

(4) Go to the conf folder under the Sqoop installation directory, copy sqoop-env-template.sh to sqoop-env.sh, and edit sqoop-env.sh to add the following lines, which point to the Hadoop installation directory:

cd /opt/modules/sqoop-1.4.7/conf
cp sqoop-env-template.sh sqoop-env.sh

Edit the configuration:

export HADOOP_COMMON_HOME=/opt/modules/hadoop-3.1.3
export HADOOP_MAPRED_HOME=/opt/modules/hadoop-3.1.3

The resulting sqoop-env.sh:

[root@centos01 conf]# cat sqoop-env.sh
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# included in all the hadoop scripts with source command
# should not be executable directly
# also should not be passed any arguments, since we need original $*

# Set Hadoop-specific environment variables here.

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/opt/modules/hadoop-3.1.3

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/opt/modules/hadoop-3.1.3

#set the path to where bin/hbase is available
#export HBASE_HOME=

#Set the path to where bin/hive is available
#export HIVE_HOME=

#Set the path for where zookeper config dir is
#export ZOOCFGDIR=
[root@centos01 conf]# 

(5) Copy the MySQL driver jar mysql-connector-java-5.1.20-bin.jar into the lib folder under the Sqoop installation directory.
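
For example, assuming the driver jar has been downloaded to /opt/softwares (a hypothetical location), it can be copied like this:

$ cp /opt/softwares/mysql-connector-java-5.1.20-bin.jar /opt/modules/sqoop-1.4.7/lib/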

3. Verifying the Installation

Run a command like the following to list the databases on a MySQL server:

$ sqoop list-databases --connect jdbc:mysql://192.168.169:3306/test --username root --password 123456
$ sqoop list-databases --connect jdbc:mysql://98.142.143.144:3307/ry-config --username root --password 123445

The parameters have the following meanings:

  • --connect: the JDBC connection URL of the database.
  • --username: the database username.
  • --password: the database password.

An example run:
[root@centos01 ~]# sqoop list-databases --connect jdbc:mysql://98.142.143.144:3307/ry-config --username root --password 123456
Warning: /opt/modules/sqoop-1.4.7/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /opt/modules/sqoop-1.4.7/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /opt/modules/sqoop-1.4.7/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/modules/sqoop-1.4.7/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2021-08-28 21:03:21,747 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
2021-08-28 21:03:21,866 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2021-08-28 21:03:22,384 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
coin-exchange
mms_db
mysql
performance_schema
ry-config
sys
test
vms
[root@centos01 ~]# 

IV. Importing a MySQL Table into HDFS

(Figure: the MySQL source table stock_prices, whose data will be imported into HDFS)

1. Starting Hadoop

# Switch to the hadoop user
$ su hadoop 

# Go to the Hadoop sbin directory and start the cluster
[hadoop@centos01 sbin]$ cd /opt/modules/hadoop-3.1.3/sbin
[hadoop@centos01 sbin]$ start-all.sh

Check the running daemons:

[hadoop@centos02 root]$ jps
8147 Jps
7924 DataNode
8037 NodeManager

2. Running the Import

sqoop import \
--connect 'jdbc:mysql://98.142.143.144:3306/demodb?characterEncoding=UTF-8' \
--username quantive\
--password Quant888 \
--table stock_prices \
--columns id,ticker,as_of_date,open_price,high_price,low_price,close_price,created_at,updated_at \
--target-dir /sqoop/mysql/stocks

Running the import produced an error:

..

Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2021-09-11 19:15:48,472 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
2021-09-11 19:15:48,683 ERROR tool.BaseSqoopTool: Error parsing arguments for import:
2021-09-11 19:15:48,683 ERROR tool.BaseSqoopTool: Unrecognized argument: quant
2021-09-11 19:15:48,683 ERROR tool.BaseSqoopTool: Unrecognized argument: --password
2021-09-11 19:15:48,683 ERROR tool.BaseSqoopTool: Unrecognized argument: Quant888--table
2021-09-11 19:15:48,684 ERROR tool.BaseSqoopTool: Unrecognized argument: stock_prices--columns
2021-09-11 19:15:48,684 ERROR tool.BaseSqoopTool: Unrecognized argument: id,ticker,as_of_date,open_price,high_price,low_price,close_price,created_at,updated_at--target-dir
2021-09-11 19:15:48,684 ERROR tool.BaseSqoopTool: Unrecognized argument: /sqoop/mysql/stocks

Try --help for usage instructions.

Solution:
Reference blog post: sqoop出现Error parsing arguments for import (see the related articles at the end).

The cause was a formatting problem: some of the backslashes (\) were not preceded by a space, so adjacent arguments were glued together. The corrected import command:

sqoop import \
--connect 'jdbc:mysql://98.142.13.15:3307/demodb?characterEncoding=UTF-8' \
--username root \
--password Quan888 \
--table stock_prices \
--columns id,ticker,as_of_date,open_price,high_price,low_price,close_price,created_at,updated_at \
--target-dir /sqoop/mysql/stocks

Rerunning the import fixed that problem, but a new one appeared:

...

2021-09-11 23:08:25,327 INFO db.DBInputFormat: Using read commited transaction isolation
2021-09-11 23:08:25,560 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`id`), MAX(`id`) FROM `stock_prices`
2021-09-11 23:08:25,759 INFO db.IntegerSplitter: Split size: 34; Num splits: 4 from: 1 to: 138
2021-09-11 23:08:26,202 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-09-11 23:08:26,262 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-09-11 23:08:26,300 INFO mapreduce.JobSubmitter: number of splits:4
2021-09-11 23:08:26,564 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
2021-09-11 23:08:26,625 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1631372716361_0001
2021-09-11 23:08:26,625 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-09-11 23:08:26,993 INFO conf.Configuration: resource-types.xml not found
2021-09-11 23:08:26,993 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-09-11 23:08:27,493 INFO impl.YarnClientImpl: Submitted application application_1631372716361_0001
2021-09-11 23:08:27,544 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1631372716361_0001/
2021-09-11 23:08:27,544 INFO mapreduce.Job: Running job: job_1631372716361_0001
2021-09-11 23:08:33,645 INFO mapreduce.Job: Job job_1631372716361_0001 running in uber mode : false
2021-09-11 23:08:33,646 INFO mapreduce.Job:  map 0% reduce 0%
2021-09-11 23:08:33,668 INFO mapreduce.Job: Job job_1631372716361_0001 failed with state FAILED due to: Application application_1631372716361_0001 failed 2 times due to AM Container for appattempt_1631372716361_0001_000002 exited with  exitCode: 1
Failing this attempt.Diagnostics: [2021-09-11 23:08:32.769]Exception from container-launch.
Container id: container_1631372716361_0001_02_000001
Exit code: 1

[2021-09-11 23:08:32.773]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
错误: 找不到或无法加载主类 org.apache.hadoop.mapreduce.v2.app.MRAppMaster

[2021-09-11 23:08:32.784]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
Last 4096 bytes of stderr :
错误: 找不到或无法加载主类 org.apache.hadoop.mapreduce.v2.app.MRAppMaster

For more detailed output, check the application tracking page: http://centos01:8088/cluster/app/application_1631372716361_0001 Then click on links to logs of each attempt.
. Failing the application.

2021-09-11 23:08:33,737 INFO mapreduce.ImportJobBase: Retrieved 0 records.
2021-09-11 23:08:33,737 ERROR tool.ImportTool: Import failed: Import job failed!

This error occurs when YARN runs the MapReduce job and cannot find the application master class:

错误: 找不到或无法加载主类 org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Solution:

Run hadoop classpath on the command line:

[hadoop@centos01 sbin]$ hadoop classpath
/opt/modules/hadoop-3.1.3/etc/hadoop:/opt/modules/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/common/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/*
[hadoop@centos01 sbin]$ 

Add the output above as the value of the yarn.application.classpath property in yarn-site.xml, for example:

<property>
        <name>yarn.application.classpath</name>
                <value>/opt/modules/hadoop-3.1.3/etc/hadoop:/opt/modules/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/common/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/*</value>
</property>

The modified yarn-site.xml:

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
 <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
 </property>

 <!-- In etc/hadoop/yarn-site.xml, set the virtual-memory check property to false -->
 <property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
 </property>

  <!-- Add the Hadoop classpath -->
  <property>
    <name>yarn.application.classpath</name>
    <value>
    /opt/modules/hadoop-3.1.3/etc/hadoop:/opt/modules/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/common/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/lib/*:/opt/modules/hadoop-3.1.3/share/hadoop/yarn/*
    </value>
  </property>
</configuration>

Restart YARN and rerun the MapReduce job.

yarn --daemon stop resourcemanager
yarn --daemon stop nodemanager

yarn --daemon start resourcemanager
yarn --daemon start nodemanager

Then rerun the import command. Because the target directory was already created on HDFS by the failed attempt, delete it before running the import again:

# List the files under the HDFS root
#$ hadoop dfs -ls /
# Recursively list all files and directories under /
#$ hadoop dfs -ls -R /
[hadoop@centos01 hadoop]$ hdfs dfs -rmr /sqoop
rmr: DEPRECATED: Please use '-rm -r' instead.
Deleted /sqoop

[hadoop@centos01 hadoop]$ hdfs dfs -ls /
Found 3 items
-rw-r--r--   3 dell   supergroup       2586 2021-07-15 07:17 /TODO-20210430.txt
drwxr-xr-x   - hadoop supergroup          0 2021-07-10 08:58 /input
drwx------   - hadoop supergroup          0 2021-07-10 08:38 /tmp

Rerun the import command:

sqoop import \
--connect 'jdbc:mysql://98.12.14.14:3307/demodb?characterEncoding=UTF-8' \
--username root \
--password Quant888 \
--table stock_prices \
--columns id,ticker,as_of_date,open_price,high_price,low_price,close_price,created_at,updated_at \
--target-dir /sqoop/mysql/stocks

While running, the job failed again with an error like this -_-! :

INFO mapreduce.Job: Job job_1631375340678_0001 failed with state FAILED due to: Task failed task_163

Modify the mapred-site.xml file.

Original configuration:

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Configuration with the new property added:

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>centos01:9001</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

Then restart Hadoop and run the import command again.

[hadoop@centos01 sbin]$ cd /opt/modules/hadoop-3.1.3/sbin
[hadoop@centos01 sbin]$ stop-all.sh
[hadoop@centos01 sbin]$ start-all.sh

[hadoop@centos02 root]$ jps
9000 Jps
8778 DataNode
8890 NodeManager

Run the import once more:

sqoop import \
--connect 'jdbc:mysql://98.12.13.15:3307/demodb?characterEncoding=UTF-8' \
--username root \
--password Quant88 \
--table stock_prices \
--columns id,ticker,as_of_date,open_price,high_price,low_price,close_price,created_at,updated_at \
--target-dir /sqoop/mysql/stocks

This time it finally succeeded ^_^

...
2021-09-12 00:39:12,776 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1631378285014_0001
2021-09-12 00:39:12,776 INFO mapreduce.JobSubmitter: Executing with tokens: []
2021-09-12 00:39:13,058 INFO conf.Configuration: resource-types.xml not found
2021-09-12 00:39:13,059 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2021-09-12 00:39:13,544 INFO impl.YarnClientImpl: Submitted application application_1631378285014_0001
2021-09-12 00:39:13,597 INFO mapreduce.Job: The url to track the job: http://centos01:8088/proxy/application_1631378285014_0001/
2021-09-12 00:39:13,597 INFO mapreduce.Job: Running job: job_1631378285014_0001
2021-09-12 00:39:26,924 INFO mapreduce.Job: Job job_1631378285014_0001 running in uber mode : false
2021-09-12 00:39:26,926 INFO mapreduce.Job:  map 0% reduce 0%
2021-09-12 00:39:47,254 INFO mapreduce.Job:  map 25% reduce 0%
2021-09-12 00:39:51,316 INFO mapreduce.Job:  map 75% reduce 0%
2021-09-12 00:39:52,325 INFO mapreduce.Job:  map 100% reduce 0%
2021-09-12 00:39:52,343 INFO mapreduce.Job: Job job_1631378285014_0001 completed successfully
2021-09-12 00:39:52,418 INFO mapreduce.Job: Counters: 33
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=906740
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=403
        HDFS: Number of bytes written=10128
        HDFS: Number of read operations=24
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=8
    Job Counters 
        Killed map tasks=1
        Launched map tasks=4
        Other local map tasks=4
        Total time spent by all maps in occupied slots (ms)=81087
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=81087
        Total vcore-milliseconds taken by all map tasks=81087
        Total megabyte-milliseconds taken by all map tasks=83033088
    Map-Reduce Framework
        Map input records=77
        Map output records=77
        Input split bytes=403
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=537
        CPU time spent (ms)=3260
        Physical memory (bytes) snapshot=456445952
        Virtual memory (bytes) snapshot=10958757888
        Total committed heap usage (bytes)=121896960
        Peak Map Physical memory (bytes)=115388416
        Peak Map Virtual memory (bytes)=2742849536
    File Input Format Counters 
        Bytes Read=0
    File Output Format Counters 
        Bytes Written=10128
2021-09-12 00:39:52,429 INFO mapreduce.ImportJobBase: Transferred 9.8906 KB in 48.1741 seconds (210.2375 bytes/sec)
2021-09-12 00:39:52,433 INFO mapreduce.ImportJobBase: Retrieved 77 records.

Check the import results:

[hadoop@centos01 sbin]$ hdfs dfs -ls /sqoop/mysql/stocks
Found 5 items
-rw-r--r--   2 hadoop supergroup          0 2021-09-12 00:39 /sqoop/mysql/stocks/_SUCCESS
-rw-r--r--   2 hadoop supergroup       4584 2021-09-12 00:39 /sqoop/mysql/stocks/part-m-00000
-rw-r--r--   2 hadoop supergroup       4468 2021-09-12 00:39 /sqoop/mysql/stocks/part-m-00001
-rw-r--r--   2 hadoop supergroup          0 2021-09-12 00:39 /sqoop/mysql/stocks/part-m-00002
-rw-r--r--   2 hadoop supergroup       1076 2021-09-12 00:39 /sqoop/mysql/stocks/part-m-00003

View the content of one of the files, here part-m-00003:

[hadoop@centos01 sbin]$ hdfs dfs -cat /sqoop/mysql/stocks/part-m-00003
2021-09-12 00:45:31,462 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
128,BABA,2021-09-07,174.14999389648438,176.83999633789062,173.3699951171875,175.16000366210938,2021-09-07 14:02:35.0,2021-09-11 05:05:05.0
129,JD,2021-09-07,81.08999633789062,84.08000183105469,80.94000244140625,83.23999786376953,2021-09-07 14:02:35.0,2021-09-11 05:05:05.0
131,BABA,2021-09-08,174.25,174.3699951171875,169.25999450683594,170.7100067138672,2021-09-08 14:31:27.0,2021-09-11 05:05:05.0
132,JD,2021-09-08,82.29000091552734,82.79000091552734,80.55999755859375,81.7300033569336,2021-09-08 14:31:27.0,2021-09-11 05:05:05.0
134,BABA,2021-09-09,167.3000030517578,169.0399932861328,165.5800018310547,167.32000732421875,2021-09-10 05:05:05.0,2021-09-11 05:05:05.0
135,JD,2021-09-09,78.75499725341797,81.16999816894531,77.91999816894531,80.26000213623047,2021-09-10 05:05:05.0,2021-09-11 05:05:05.0
137,BABA,2021-09-10,171.1999969482422,171.63999938964844,167.61000061035156,168.10000610351562,2021-09-11 05:05:05.0,2021-09-11 05:05:05.0
138,JD,2021-09-10,82.19999694824219,83.08999633789062,80.48999786376953,80.63999938964844,2021-09-11 05:05:05.0,2021-09-11 05:05:05.0

The data from the MySQL table has been imported successfully.
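
As a further check, the number of lines in the output files can be counted and compared with the "Retrieved 77 records" line in the Sqoop log; the two figures should match as long as no field contains embedded newlines:

[hadoop@centos01 sbin]$ hdfs dfs -cat /sqoop/mysql/stocks/part-m-* | wc -l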


Related articles:
sqoop出现Error parsing arguments for import
错误: 找不到或无法加载主类 org.apache.hadoop.mapreduce.v2.app.MRAppMaster
Hadoop错误:failed with state FAILED due to: Application

Those who act often succeed; those who keep walking often arrive.