PancrasL's Blog

Deploying Spark

2020-12-28

Deploying Spark in a single-machine environment

  • Download the Spark package from the official website

# Official site address
https://www.apache.org/dyn/closer.lua/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz
# Download link (Tsinghua mirror)
https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz
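Either link can be fetched directly from the command line, e.g. with wget:

$ wget https://mirrors.tuna.tsinghua.edu.cn/apache/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz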
  • Extract the archive

$ tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz
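Optionally, set SPARK_HOME and put its bin directory on the PATH so the Spark tools can be run from any directory; the path below assumes the archive was extracted into your home directory:

$ export SPARK_HOME=$HOME/spark-2.4.7-bin-hadoop2.7
$ export PATH=$SPARK_HOME/bin:$PATH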
  • Install Java

$ sudo apt install openjdk-8-jdk -y
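To verify that the JDK is installed and on the PATH:

$ java -version  # should report an OpenJDK 1.8.x build, e.g. 1.8.0_275 as in the shell banner below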
  • Start the Spark shell
$ cd spark-2.4.7-bin-hadoop2.7
$ bin/spark-shell
20/12/23 18:19:08 WARN Utils: Your hostname, Mi-Lv resolves to a loopback address: 127.0.1.1; using 192.168.52.1 instead (on interface eth1)
20/12/23 18:19:08 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
20/12/23 18:19:08 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://192.168.52.1:4040
Spark context available as 'sc' (master = local[*], app id = local-1608718759318).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.7
      /_/

Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_275)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
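The two Utils warnings at startup are harmless in a single-machine setup. As the log itself suggests, you can set SPARK_LOCAL_IP before launching if the driver should bind to a specific address (the IP below is just the one reported in the log above; substitute your own):

$ SPARK_LOCAL_IP=192.168.52.1 bin/spark-shell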
  • Word count program
scala> sc.textFile("/home/pancras/input.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).collect
res1: Array[(String, Int)] = Array((hello,2), (morning,1), (world,2), (good,1))
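The same pipeline can be extended a step further in the shell. Here is a minimal sketch, assuming the same input file, that also sorts words by descending count and writes the result out; sortBy and saveAsTextFile are standard RDD operations, and the output path is a placeholder:

scala> // same word count, but with the most frequent words first
scala> val counts = sc.textFile("/home/pancras/input.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).sortBy(_._2, ascending = false)
scala> counts.saveAsTextFile("/home/pancras/wordcount-output")  // hypothetical output directory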