I've been writing some Hadoop mapred programs lately. Working in a command-line + vim environment, compiling and packaging by hand every time is tedious, so I put together a small shell script that does the compile + package step in one go.
Notes:
Directory layout: myproject/jc.sh
myproject/source/
myproject/classes/
1. Edit the xxx.java source files in the source directory.
2. Run jc.sh from the myproject directory.
3. jc.sh compiles the corresponding .class files into classes/ and builds the jar package under myproject.
jc.sh is as follows:
#!/bin/bash
# jc.sh -- compile one Hadoop source file into classes/ and package classes/ into a jar
# usage: ./jc.sh source.java package.jar

HH=/root/hadoop/hadoop-0.20.203.0   # Hadoop installation directory
SRC=$1                              # source file name, looked up under source/
OBJ_JAR=$2                          # name of the jar to produce

if [ $# -ne 2 ]; then
    echo "usage: jc.sh source.java package.jar"
    exit 1
elif [ "${SRC##*.}" != "java" ]; then
    echo "Notice: the first argument must be a .java source file!"
    exit 1
elif [ "${OBJ_JAR##*.}" != "jar" ]; then
    echo "Notice: the second argument must be a .jar file name!"
    exit 1
else
    # compile against the Hadoop core jar and its helper libraries,
    # writing the .class files into classes/
    javac -classpath $HH/hadoop-core-0.20.203.0.jar:$HH/lib/commons-cli-1.2.jar:$HH/commons-lang3-3.1.jar \
        -d classes/ source/"$SRC"
    # package everything under classes/ into the requested jar
    jar cvf "$OBJ_JAR" -C classes/ .
fi
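The extension checks in the script rely on bash parameter expansion: ${SRC##*.} removes the longest prefix ending in a dot, leaving only the file extension. A quick check at an interactive shell (the file name here is just an example):

f=encode.java
echo "${f##*.}"   # longest "*."-prefix stripped from the front -> prints: java
echo "${f%.*}"    # shortest ".*"-suffix stripped from the back -> prints: encode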
root@cloud2:~/hadoop/hadoop-0.20.203.0/slp_workspace# ls source
encode.java textInput.java
root@cloud2:~/hadoop/hadoop-0.20.203.0/slp_workspace# ./jc.sh
usage: jc.sh source.java package.jar
root@cloud2:~/hadoop/hadoop-0.20.203.0/slp_workspace# ./jc.sh encode.java temp.jar
added manifest
adding: WordCount.class(in = 1830) (out= 988)(deflated 46%)
adding: WordCount$TokenizerMapper.class(in = 1736) (out= 752)(deflated 56%)
adding: slp/(in = 0) (out= 0)(stored 0%)
adding: slp/input/(in = 0) (out= 0)(stored 0%)
adding: slp/input/TextInput.class(in = 1813) (out= 945)(deflated 47%)
adding: slp/input/TextInput$TokenizerMapper.class(in = 1425) (out= 540)(deflated 62%)
adding: slp/input/TextInput$InputMapper.class(in = 1417) (out= 534)(deflated 62%)
adding: slp/input/textInput$InputMapper.class(in = 1417) (out= 534)(deflated 62%)
adding: slp/input/textInput.class(in = 2053) (out= 1039)(deflated 49%)
adding: slp/input/textInput$MergeReducer.class(in = 1730) (out= 699)(deflated 59%)
adding: slp/encode/(in = 0) (out= 0)(stored 0%)
adding: slp/encode/encode.class(in = 3729) (out= 2034)(deflated 45%)
adding: slp/encode/encode$MergeReducer.class(in = 1723) (out= 696)(deflated 59%)
adding: slp/encode/encode$InputMapper.class(in = 1553) (out= 636)(deflated 59%)
adding: WordCount$IntSumReducer.class(in = 1735) (out= 736)(deflated 57%)
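Once the jar is built, it can be submitted with bin/hadoop jar. A minimal sketch, assuming slp.encode.encode is the job's main class and that /user/root/input and /user/root/output are placeholder HDFS paths:

# the main class must be named on the command line, since jc.sh does not
# write a Main-Class entry into the jar's manifest
/root/hadoop/hadoop-0.20.203.0/bin/hadoop jar temp.jar slp.encode.encode /user/root/input /user/root/output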