[hadoop@master conf]$ vi flume-env.sh
export JAVA_HOME=/usr/loocal/src/jdk
#export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop

Note the typo in JAVA_HOME here (/usr/loocal/ instead of /usr/local/): it resurfaces later as a "No such file or directory" error when the agent is started with --conf, and must be corrected to /usr/local/src/jdk.
Verify the installation with the flume-ng version command; if it reports Flume version 1.6.0, the installation succeeded.
[hadoop@master conf]$ flume-ng version
Error: Could not find or load main class org.apache.flume.tools.GetJavaProperty
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
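The GetJavaProperty error is harmless for the version check itself (the version still prints below it) and is commonly attributed to Flume tripping over HBase's classpath probe. A frequently cited workaround, sketched below under the assumption that HBase is installed at /usr/local/src/hbase, is to comment out the HBASE_CLASSPATH export in hbase-env.sh; the clean flume-ng version run later in this section shows the error gone.

# Hedged workaround sketch: comment out HBASE_CLASSPATH so flume-ng no
# longer picks up HBase's Hadoop classpath (install path is an assumption).
sed -i 's|^export HBASE_CLASSPATH=|#&|' /usr/local/src/hbase/conf/hbase-env.sh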
[hadoop@master flume]$ flume-ng agent --conf-file xxx.conf --name a1
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including HBASE libraries found via (/usr/local/src/hbase/bin/hbase) for HBASE access
Info: Excluding /usr/local/src/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including Hive libraries found via (/usr/local/src/hive) for Hive access
...
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.accepted == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.append.received == 0
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.accepted == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.events.received == 17
23/04/21 16:02:35 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SOURCE, name: r1. src.open-connection.count == 0
23/04/21 16:02:35 INFO source.SpoolDirectorySource: SpoolDir source r1 stopped. Metrics: SOURCE:r1{src.events.accepted=17, src.open-connection.count=0, src.append.received=0, src.append-batch.received=1, src.append-batch.accepted=1, src.append.accepted=0, src.events.received=17}
Press Ctrl+C to stop the Flume transfer.
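Ctrl+C works for a foreground agent; for anything longer-lived, a background run is more convenient. A minimal sketch (the log and PID file locations are assumptions):

# Run the agent in the background instead of holding the terminal open,
# then stop it with kill rather than Ctrl+C.
nohup flume-ng agent --conf /usr/local/src/flume/conf \
      --conf-file /usr/local/src/flume/example.conf --name a1 \
      > /tmp/flume-a1.log 2>&1 &
echo $! > /tmp/flume-a1.pid         # remember the agent's PID
kill "$(cat /tmp/flume-a1.pid)"     # graceful stop, equivalent to Ctrl+C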
# All Hadoop nodes must be started first
[hadoop@master flume]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out
[hadoop@master flume]$ ss -antl
State  Recv-Q Send-Q Local Address:Port          Peer Address:Port
LISTEN 0      128    192.168.88.101:9000         *:*
LISTEN 0      128    *:50090                     *:*
LISTEN 0      128    *:50070                     *:*
LISTEN 0      128    *:22                        *:*
LISTEN 0      128    ::ffff:192.168.88.101:8030  :::*
LISTEN 0      128    ::ffff:192.168.88.101:8031  :::*
LISTEN 0      128    ::ffff:192.168.88.101:8032  :::*
LISTEN 0      128    ::ffff:192.168.88.101:8033  :::*
LISTEN 0      80     :::3306                     :::*
LISTEN 0      128    :::22                       :::*
LISTEN 0      128    ::ffff:192.168.88.101:8088  :::*
[root@master ~]# su - hadoop
Last login: Fri Apr 21 15:26:05 CST 2023 on pts/0
Last failed login: Fri Apr 21 16:02:08 CST 2023 from slave1 on ssh:notty
There were 4 failed login attempts since the last successful login.
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.example.com.out
192.168.88.201: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.example.com.out
192.168.88.200: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.example.com.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.example.com.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.example.com.out
192.168.88.200: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.example.com.out
192.168.88.201: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.example.com.out
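Besides checking listening ports with ss -antl, a quick way to confirm the daemons are up is jps on each node (hostnames as configured in this cluster; assumes jps is on the PATH for non-interactive shells):

# On master: expect NameNode, SecondaryNameNode and ResourceManager.
jps
# On the workers: expect DataNode and NodeManager.
ssh slave1 jps
ssh slave2 jps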
Step 4: Stop Hadoop
[hadoop@master hadoop]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
192.168.88.201: stopping datanode
192.168.88.200: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
192.168.88.200: stopping nodemanager
192.168.88.201: stopping nodemanager
no proxyserver to stop
[hadoop@master hadoop]$ cd /usr/local/src/hbase/
[hadoop@master hbase]$ hbase version
HBase 1.2.1
Source code repository git://asf-dev/home/busbey/projects/hbase revision=8d8a7107dc4ccbf36a92f64675dc60392f85c015
Compiled by busbey on Wed Mar 30 11:19:21 CDT 2016
From source with checksum f4bb4a14bb4e0b72b46f729dae98a772
The output reports HBase 1.2.1, confirming the installation and its version. Note that hbase version only exercises the client; it does not by itself prove the HBase daemons are running.
If HBase is not running, start it with start-hbase.sh:

[hadoop@master hbase]$ start-hbase.sh
master: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-master.example.com.out
slave1: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave1.example.com.out
slave2: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-slave2.example.com.out
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.example.com.out
slave2: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave2.example.com.out
slave1: starting regionserver, logging to /usr/local/src/hbase/logs/hbase-hadoop-regionserver-slave1.example.com.out
Step 2: Check the HBase version information
Run the hbase shell command to enter the HBase interactive shell.
[hadoop@master hbase]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

hbase(main):001:0>
Type version to query the HBase version:
hbase(main):001:0> version
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016
To see more detail about the HBase status, such as service ports and request times for the master, slave1, and slave2 hosts, consult the status command's help with help 'status':
hbase(main):004:0> help 'status'
Show cluster status. Can be 'summary', 'simple', 'detailed', or 'replication'. The
default is 'summary'. Examples:

  hbase> status
  hbase> status 'simple'
  hbase> status 'summary'
  hbase> status 'detailed'
  hbase> status 'replication'
  hbase> status 'replication', 'source'
  hbase> status 'replication', 'sink'

hbase(main):005:0> quit
[hadoop@master hbase]$
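For scripted checks, the same commands can be run non-interactively by piping them into the shell, e.g. status 'detailed' to get the per-host port and request-time details mentioned above (a minimal sketch):

# Run a single HBase shell command without entering the interactive prompt:
echo "status 'detailed'" | hbase shell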
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase.................
slave1: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
slave2: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
master: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
When no errors are reported and the $ prompt returns, the HBase service has stopped.
4. Check the Hive status via commands
Step 1: Start Hive
Change to the /usr/local/src/hive directory, type hive, and press Enter.
[hadoop@master hbase]$ cd /usr/local/src/hive/
[hadoop@master hive]$ hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/hive-jdbc-2.0.0-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hive/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in jar:file:/usr/local/src/hive/lib/hive-common-2.0.0.jar!/hive-log4j2.properties
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
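The insert below targets a table named stu. Its creation step is not shown in this section, so the following schema is an assumption inferred from the id/name values used; adjust it to match how your lab actually created the table.

# Assumed (hypothetical) schema for stu; the original CREATE TABLE is not shown:
hive -e 'create table if not exists stu (id int, name string);'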
hive> insert into stu values (001,"liuyaling");
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = hadoop_20230423222915_a95e9891-fdf5-4739-a63e-fcadecc85e28
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1682258121749_0001, Tracking URL = http://master:8088/proxy/application_1682258121749_0001/
Kill Command = /usr/local/src/hadoop/bin/hadoop job -kill job_1682258121749_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2023-04-23 22:31:36,420 Stage-1 map = 0%, reduce = 0%
2023-04-23 22:31:43,892 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 2.72 sec
MapReduce Total cumulative CPU time: 2 seconds 720 msec
Ended Job = job_1682258121749_0001
Stage-4 is selected by condition resolver.
Stage-3 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.
Moving data to: hdfs://master:9000/user/hive/warehouse/stu/.hive-staging_hive_2023-04-23_22-30-32_985_529079703757687911-1/-ext-10000
Loading data to table default.stu
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1 Cumulative CPU: 2.72 sec HDFS Read: 4135 HDFS Write: 79 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 720 msec
OK
Time taken: 72.401 seconds
Following the same procedure, insert two more rows, with id values 1002 and 1003 and name values yanhaoxiang and tnt, as in the sketch below.
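Both rows can also be inserted in one non-interactive call with hive -e; the same statements work at the hive> prompt.

# Insert the two additional rows from the text in a single batch:
hive -e 'insert into stu values (1002,"yanhaoxiang");
         insert into stu values (1003,"tnt");'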
(5) View the table information after inserting the data
hive> show tables;
OK
stu
test
values__tmp__table__1
values__tmp__table__2
Time taken: 0.026 seconds, Fetched: 4 row(s)
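The values__tmp__table__N entries are temporary tables Hive 2 creates to service INSERT ... VALUES statements; they disappear when the session ends. To verify that the rows actually landed in stu:

# Confirm the three rows inserted above:
hive -e 'select * from stu;'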
[hadoop@master ~]$ cd /usr/local/src/sqoop/
[hadoop@master sqoop]$ ./bin/sqoop-version
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:40:59 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
Sqoop 1.4.7
git commit id 2328971411f57f0cb683dfb79d19d4d19d185dd8
Compiled by maugli on Thu Dec 21 15:59:58 STD 2017
[hadoop@master sqoop]$ bin/sqoop list-databases --connect jdbc:mysql://master:3306/ --username root --password Password@123!
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:16 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
23/04/23 22:42:16 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
23/04/23 22:42:16 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
Sun Apr 23 22:42:16 CST 2023 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
information_schema
hive
mysql
performance_schema
sample
sys
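The output flags two avoidable warnings: the plaintext password and the unverified SSL connection. A safer variant of the same command (same server and account as above):

# -P prompts for the password instead of leaving it in the shell history;
# useSSL=false explicitly disables SSL, silencing the MySQL warning.
bin/sqoop list-databases \
    --connect 'jdbc:mysql://master:3306/?useSSL=false' \
    --username root -P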
[hadoop@master sqoop]$ sqoop help
Warning: /usr/local/src/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/local/src/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
23/04/23 22:42:37 INFO sqoop.Sqoop: Running Sqoop version: 1.4.7
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
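As the last line of the output notes, per-command help is available; for example:

# Show the full option list for the import tool:
sqoop help import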
Step 1: Verify that Flume installed successfully by running flume-ng version to check the Flume version.
[hadoop@master sqoop]$ cd /usr/local/src/flume/
[hadoop@master flume]$ flume-ng version
Flume 1.6.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 2561a23240a71ba20bf288c7c2cda88f443c2080
Compiled by hshreedharan on Mon May 11 11:15:44 PDT 2015
From source with checksum b29e416802ce9ece3269d34233baf43f
[hadoop@master flume]$
Step 2: Add example.conf under /usr/local/src/flume
[hadoop@master flume]$ vim /usr/local/src/flume/example.conf
# Name the agent's source, sink, and channel
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# Source: watch a spooling directory for new files
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/usr/local/src/flume/
a1.sources.r1.fileHeader=true
# Sink: write events to HDFS, rolling files by size or every 900 seconds
# (note the property is hdfs.rollSize, case-sensitive)
a1.sinks.k1.type=hdfs
a1.sinks.k1.hdfs.path=hdfs://master:9000/flume
a1.sinks.k1.hdfs.rollSize=1048760
a1.sinks.k1.hdfs.rollCount=0
a1.sinks.k1.hdfs.rollInterval=900
a1.sinks.k1.hdfs.useLocalTimeStamp=true
# Channel: file-backed buffer between source and sink
a1.channels.c1.type=file
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Step 3: Start Flume agent a1 with logging to the console
[hadoop@master flume]$ flume-ng agent --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
Warning: No configuration directory set! Use --conf <dir> to override.
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including HBASE libraries found via (/usr/local/src/hbase/bin/hbase) for HBASE access
Info: Excluding /usr/local/src/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar from classpath

Passing --conf makes the launcher source flume-env.sh, which exposes the JAVA_HOME typo noted at the top of this section:

[hadoop@master flume]$ /usr/local/src/flume/bin/flume-ng agent --conf ./conf --conf-file ./example.conf --name a1 -Dflume.root.logger=INFO,console
Info: Sourcing environment configuration script /usr/local/src/flume/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/local/src/hadoop/bin/hadoop) for HDFS access
...
.../lib/native:/usr/local/src/hadoop/lib/native org.apache.flume.node.Application --conf-file ./example.conf --name a1
/usr/local/src/flume/bin/flume-ng: line 241: /usr/loocal/src/jdk/bin/java: No such file or directory

After correcting JAVA_HOME in flume-env.sh to /usr/local/src/jdk, the agent starts, and interrupting it prints the shutdown metrics:

...
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.connection.failed.count == 0
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.event.drain.attempt == 1918
23/04/23 22:52:56 INFO instrumentation.MonitoredCounterGroup: Shutdown Metric for type: SINK, name: k1. sink.event.drain.sucess == 1918
[hadoop@master flume]$ ^C
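Once the agent has run against the corrected configuration, the spooled files should appear under the HDFS sink path set in example.conf. A quick check (assumes HDFS is up):

# List what the HDFS sink wrote (path from a1.sinks.k1.hdfs.path above):
hdfs dfs -ls /flume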