
Deploying Kylin on Alibaba Cloud EMR


This took me about a day in total.

1 Download Kylin

wget http://mirrors.tuna.tsinghua.edu.cn/apache/kylin/apache-kylin-2.6.3/apache-kylin-2.6.3-bin-hbase1x.tar.gz

tar zxvf apache-kylin-2.6.3-bin-hbase1x.tar.gz

cd apache-kylin-2.6.3-bin-hbase1x

The rest of this guide assumes everything is extracted under /home/hadoop.

 

2 Set environment variables

export KYLIN_HOME=`pwd`

export HIVE_CONF=/etc/ecm/hive-conf (the Hive configuration directory on your EMR cluster)

export HADOOP_HOME=/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0

export HIVE_HOME=/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin

export SPARK_HOME=/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8

 

3 Prepare the required files:

3.1 Copy your EMR Hadoop package at /opt/apps/ecm/service/hadoop/2.8.5-1.1.0/package/hadoop-2.8.5-1.1.0 to /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0 (the exact source path may differ on your cluster):

cp -r /opt/apps/ecm/service/hadoop/2.8.5-1.1.0/package/hadoop-2.8.5-1.1.0 /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0

3.2 Download Hive 1.2.1

wget https://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz

and extract it into /home/hadoop/apache-kylin-2.6.3-bin-hbase1x (which produces apache-hive-1.2.1-bin):

tar zxvf apache-hive-1.2.1-bin.tar.gz -C /home/hadoop/apache-kylin-2.6.3-bin-hbase1x

3.3 Copy your EMR Spark package at /opt/apps/ecm/service/spark/2.3.2-1.2.0/package/spark-2.3.2-1.2.0-bin-hadoop2.8 to /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8 (again, the exact source path may differ):

cp -r /opt/apps/ecm/service/spark/2.3.2-1.2.0/package/spark-2.3.2-1.2.0-bin-hadoop2.8 /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8

3.4 Create the hadoop-conf directory (mkdir $KYLIN_HOME/hadoop-conf) and run:

ln -s /etc/ecm/hadoop-conf/core-site.xml $KYLIN_HOME/hadoop-conf/core-site.xml

ln -s /etc/ecm/hadoop-conf/hdfs-site.xml $KYLIN_HOME/hadoop-conf/hdfs-site.xml

ln -s /etc/ecm/hadoop-conf/yarn-site.xml $KYLIN_HOME/hadoop-conf/yarn-site.xml

ln -s /etc/ecm/hbase-conf/hbase-site.xml $KYLIN_HOME/hadoop-conf/hbase-site.xml

ln -s /etc/ecm/hive-conf/hive-site.xml $KYLIN_HOME/hadoop-conf/hive-site.xml
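The whole sub-step can be sanity-checked in isolation. This is a self-contained dry run in a temp directory; the `$work` paths are stand-ins for the real /etc/ecm config directories and $KYLIN_HOME:

```shell
# Sandboxed dry run of step 3.4: create hadoop-conf, then symlink a config in.
set -e
work=$(mktemp -d)
mkdir -p "$work/etc/hadoop-conf" "$work/kylin/hadoop-conf"
echo '<configuration/>' > "$work/etc/hadoop-conf/core-site.xml"
ln -s "$work/etc/hadoop-conf/core-site.xml" "$work/kylin/hadoop-conf/core-site.xml"
cat "$work/kylin/hadoop-conf/core-site.xml"   # reads through the symlink
```

On the real cluster the sources already exist, so only the mkdir and the five ln -s commands above are needed.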

Then edit conf/kylin.properties and change one line:

kylin.env.hadoop-conf-dir=/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-conf

 

4 Replace Tomcat

The bundled Tomcat is broken. Copy out webapps/kylin.war as a backup, delete the tomcat directory, and download Tomcat 8:

wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz

Extract it, rename the directory to tomcat, and copy kylin.war back into webapps.

Change the web port in conf/server.xml to 7070.
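The port edit can be scripted. This sketch runs a sed against a minimal stand-in server.xml (the stock Tomcat HTTP connector listens on 8080); point the same sed at tomcat/conf/server.xml in the real install:

```shell
# Demo of flipping the HTTP connector port 8080 -> 7070 in server.xml.
cat > server.xml <<'EOF'
<Server><Service><Connector port="8080" protocol="HTTP/1.1"/></Service></Server>
EOF
sed -i 's/port="8080"/port="7070"/' server.xml
grep -o 'port="7070"' server.xml
```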

Then run:

rm ./tomcat/webapps/kylin/WEB-INF/lib/slf4j-api-1.7.21.jar

cp ./hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar ./tomcat/webapps/kylin/WEB-INF/lib/

 

5 Hack Hive

The commands and the Java changes are listed below.

 

cd /tmp/

mkdir hive-jdbc-1.2.1-standalone

cd hive-jdbc-1.2.1-standalone/

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1-standalone.jar .

unzip hive-jdbc-1.2.1-standalone.jar

rm *.jar


cd /tmp/

mkdir hive-metastore-1.2.1.spark2

cd hive-metastore-1.2.1.spark2/

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8/jars/hive-metastore-1.2.1.spark2.jar .

unzip hive-metastore-1.2.1.spark2.jar 

rm *.jar


cd /tmp/

mkdir hive-metastore-1.2.1

cd hive-metastore-1.2.1

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar .

unzip hive-metastore-1.2.1.jar 

rm *.jar


cd /tmp/

mkdir hive-exec-1.2.1

cd hive-exec-1.2.1/

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar .

unzip hive-exec-1.2.1.jar 

rm *.jar


cd /tmp/

mkdir hive-common-1.2.1

cd hive-common-1.2.1/

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar .

unzip hive-common-1.2.1.jar 

rm *.jar

 

cd /tmp/

mkdir hive-exec-1.2.1.spark2

cd hive-exec-1.2.1.spark2/

cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8/jars/hive-exec-1.2.1.spark2.jar .

unzip hive-exec-1.2.1.spark2.jar 

rm *.jar


cd /tmp/

wget https://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-src.tar.gz

tar zxvf apache-hive-1.2.1-src.tar.gz 

 

cd /tmp/apache-hive-1.2.1-src

cd metastore/src/java/

cp org/apache/hadoop/hive/metastore/MetaStoreUtils.java /tmp/

cp org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java /tmp/

cp org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java /tmp/

rm -r org

mkdir -p org/apache/hadoop/hive/metastore/utils

cp /tmp/MetaStoreUtils.java org/apache/hadoop/hive/metastore/utils/

cp /tmp/HiveMetaStoreClient.java org/apache/hadoop/hive/metastore/

cp /tmp/RetryingMetaStoreClient.java org/apache/hadoop/hive/metastore/

vi org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java

vi org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java

vi org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java

javac -cp .:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libthrift-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-lang-2.6.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/hadoop-common-2.8.5.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-cli-1.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libfb303-0.9.2.jar org/apache/hadoop/hive/metastore/HiveMetaStoreClient.java

javac -cp .:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libthrift-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-lang-2.6.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/hadoop-common-2.8.5.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-cli-1.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/ org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java

javac -cp .:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libthrift-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-lang-2.6.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/hadoop-common-2.8.5.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-cli-1.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libfb303-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/jsr305-3.0.0.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1-standalone.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.5.jar org/apache/hadoop/hive/metastore/utils/MetaStoreUtils.java


cd /tmp/apache-hive-1.2.1-src

cd ql/src/java/

cp org/apache/hadoop/hive/ql/io/AcidUtils.java /tmp/

rm -r org/apache/hadoop/hive/ql/*

mkdir -p org/apache/hadoop/hive/ql/io

cp /tmp/AcidUtils.java org/apache/hadoop/hive/ql/io/

vi org/apache/hadoop/hive/ql/io/AcidUtils.java

javac -cp .:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libthrift-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-lang-2.6.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/guava-14.0.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0/share/hadoop/common/hadoop-common-2.8.5.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-cli-1.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-shims-common-1.2.1.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/libfb303-0.9.2.jar:/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar org/apache/hadoop/hive/ql/io/AcidUtils.java


cd /tmp/apache-hive-1.2.1-src

cd common/src/java/

cp org/apache/hive/common/util/ShutdownHookManager.java /tmp/

rm -r org/

cp /tmp/ShutdownHookManager.java org/apache/hive/common/util/

vi org/apache/hive/common/util/ShutdownHookManager.java

javac -cp /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/commons-logging-1.1.3.jar org/apache/hive/common/util/ShutdownHookManager.java

 

cd /tmp/hive-common-1.2.1/

cp /tmp/apache-hive-1.2.1-src/common/src/java/org/apache/hive/common/util/ShutdownHookManager* org/apache/hive/common/util/

zip -r /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-common-1.2.1.jar *

cd ../hive-exec-1.2.1/

cp /tmp/apache-hive-1.2.1-src/common/src/java/org/apache/hive/common/util/ShutdownHookManager* org/apache/hive/common/util/

cp /tmp/apache-hive-1.2.1-src/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils* org/apache/hadoop/hive/ql/io/

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar *

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8/jars/hive-exec-1.2.1.jar *

cd ../hive-exec-1.2.1.spark2/

cp /tmp/apache-hive-1.2.1-src/common/src/java/org/apache/hive/common/util/ShutdownHookManager* org/apache/hive/common/util/

cp /tmp/apache-hive-1.2.1-src/ql/src/java/org/apache/hadoop/hive/ql/io/AcidUtils* org/apache/hadoop/hive/ql/io/

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8/jars/hive-exec-1.2.1.spark2.jar *

cd /tmp/hive-jdbc-1.2.1-standalone/

cp /tmp/apache-hive-1.2.1-src/common/src/java/org/apache/hive/common/util/ShutdownHookManager* org/apache/hive/common/util/

cp /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient* org/apache/hadoop/hive/metastore/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/utils org/apache/hadoop/hive/metastore/

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-jdbc-1.2.1-standalone.jar *

cd /tmp/hive-metastore-1.2.1

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/utils org/apache/hadoop/hive/metastore/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.* org/apache/hadoop/hive/metastore/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient* org/apache/hadoop/hive/metastore/

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/apache-hive-1.2.1-bin/lib/hive-metastore-1.2.1.jar *

cd ../hive-metastore-1.2.1.spark2/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/utils org/apache/hadoop/hive/metastore/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.* org/apache/hadoop/hive/metastore/

cp -r /tmp/apache-hive-1.2.1-src/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStoreClient* org/apache/hadoop/hive/metastore/

zip -r -q /home/hadoop/apache-kylin-2.6.3-bin-hbase1x/spark-2.3.2-1.2.0-bin-hadoop2.8/jars/hive-metastore-1.2.1.spark2.jar *

 

Java source changes:

5.1 AcidUtils.java

Add:

  public static class AcidOperationalProperties {
    private int description = 0x00;
    public static final int SPLIT_UPDATE_BIT = 0x01;
    public static final String SPLIT_UPDATE_STRING = "split_update";
    public static final int HASH_BASED_MERGE_BIT = 0x02;
    public static final String HASH_BASED_MERGE_STRING = "hash_merge";
    public static final int INSERT_ONLY_BIT = 0x04;
    public static final String INSERT_ONLY_STRING = "insert_only";
    public static final String DEFAULT_VALUE_STRING = "default";
    public static final String INSERTONLY_VALUE_STRING = "insert_only";

    private AcidOperationalProperties() {
    }


    /**
     * Returns an acidOperationalProperties object that represents default ACID behavior for tables
     * that do not explicitly specify/override the default behavior.
     * @return the acidOperationalProperties object.
     */
    public static AcidOperationalProperties getDefault() {
      AcidOperationalProperties obj = new AcidOperationalProperties();
      obj.setSplitUpdate(true);
      obj.setHashBasedMerge(false);
      obj.setInsertOnly(false);
      return obj;
    }

    /**
     * Returns an acidOperationalProperties object for tables that use the ACID framework but only
     * support the INSERT operation and do not require ORC or bucketing.
     * @return the acidOperationalProperties object
     */
    public static AcidOperationalProperties getInsertOnly() {
      AcidOperationalProperties obj = new AcidOperationalProperties();
      obj.setInsertOnly(true);
      return obj;
    }

    /**
     * Returns an acidOperationalProperties object that is represented by an encoded string.
     * @param propertiesStr an encoded string representing the acidOperationalProperties.
     * @return the acidOperationalProperties object.
     */
    public static AcidOperationalProperties parseString(String propertiesStr) {
      if (propertiesStr == null) {
        return AcidOperationalProperties.getDefault();
      }
      if (propertiesStr.equalsIgnoreCase(DEFAULT_VALUE_STRING)) {
        return AcidOperationalProperties.getDefault();
      }
      if (propertiesStr.equalsIgnoreCase(INSERTONLY_VALUE_STRING)) {
        return AcidOperationalProperties.getInsertOnly();
      }
      AcidOperationalProperties obj = new AcidOperationalProperties();
      String[] options = propertiesStr.split("\\|");
      for (String option : options) {
        if (option.trim().length() == 0) continue; // ignore empty strings
        switch (option) {
          case SPLIT_UPDATE_STRING:
            obj.setSplitUpdate(true);
            break;
          case HASH_BASED_MERGE_STRING:
            obj.setHashBasedMerge(true);
            break;
          default:
            throw new IllegalArgumentException(
                "Unexpected value " + option + " for ACID operational properties!");
        }
      }
      return obj;
    }

    /**
     * Returns an acidOperationalProperties object that is represented by an encoded 32-bit integer.
     * @param properties an encoded 32-bit integer representing the acidOperationalProperties.
     * @return the acidOperationalProperties object.
     */
    public static AcidOperationalProperties parseInt(int properties) {
      AcidOperationalProperties obj = new AcidOperationalProperties();
      if ((properties & SPLIT_UPDATE_BIT)  > 0) {
        obj.setSplitUpdate(true);
      }
      if ((properties & HASH_BASED_MERGE_BIT)  > 0) {
        obj.setHashBasedMerge(true);
      }
      if ((properties & INSERT_ONLY_BIT) > 0) {
        obj.setInsertOnly(true);
      }
      return obj;
    }

    /**
     * Sets the split update property for ACID operations based on the boolean argument.
     * When split update is turned on, an update ACID event is interpreted as a combination of
     * delete event followed by an update event.
     * @param isSplitUpdate a boolean property that turns on split update when true.
     * @return the acidOperationalProperties object.
     */
    public AcidOperationalProperties setSplitUpdate(boolean isSplitUpdate) {
      description = (isSplitUpdate
              ? (description | SPLIT_UPDATE_BIT) : (description & ~SPLIT_UPDATE_BIT));
      return this;
    }

    /**
     * Sets the hash-based merge property for ACID operations that combines delta files using
     * GRACE hash join based approach, when turned on. (Currently unimplemented!)
     * @param isHashBasedMerge a boolean property that turns on hash-based merge when true.
     * @return the acidOperationalProperties object.
     */
    public AcidOperationalProperties setHashBasedMerge(boolean isHashBasedMerge) {
      description = (isHashBasedMerge
              ? (description | HASH_BASED_MERGE_BIT) : (description & ~HASH_BASED_MERGE_BIT));
      return this;
    }

    public AcidOperationalProperties setInsertOnly(boolean isInsertOnly) {
      description = (isInsertOnly
              ? (description | INSERT_ONLY_BIT) : (description & ~INSERT_ONLY_BIT));
      return this;
    }

    public boolean isSplitUpdate() {
      return (description & SPLIT_UPDATE_BIT) > 0;
    }

    public boolean isHashBasedMerge() {
      return (description & HASH_BASED_MERGE_BIT) > 0;
    }

    public boolean isInsertOnly() {
      return (description & INSERT_ONLY_BIT) > 0;
    }

    public int toInt() {
      return description;
    }

    @Override
    public String toString() {
      StringBuilder str = new StringBuilder();
      if (isSplitUpdate()) {
        str.append("|" + SPLIT_UPDATE_STRING);
      }
      if (isHashBasedMerge()) {
        str.append("|" + HASH_BASED_MERGE_STRING);
      }
      if (isInsertOnly()) {
        str.append("|" + INSERT_ONLY_STRING);
      }
      return str.toString();
    }
  }

  public static AcidOperationalProperties getAcidOperationalProperties(
          java.util.Map parameters) {
      return AcidOperationalProperties.getDefault();
  }

  public static void setAcidOperationalProperties(java.util.Map parameters,
      boolean isTxnTable, AcidOperationalProperties properties) {
  }

  public static boolean isTablePropertyTransactional(java.util.Map m) {
    return false;
  }

 

5.2 HiveMetaStoreClient.java

Add:

  public HiveMetaStoreClient(org.apache.hadoop.conf.Configuration conf, HiveMetaHookLoader hookLoader, Boolean b) throws MetaException {
    this((HiveConf) conf, hookLoader);
  }


5.3 RetryingMetaStoreClient.java

Add:

  public static IMetaStoreClient getProxy(org.apache.hadoop.conf.Configuration hiveConf, Class[] constructorArgTypes,
      Object[] constructorArgs, String mscClassName) throws MetaException {
    return getProxy((HiveConf) hiveConf, constructorArgTypes, constructorArgs, mscClassName);
  }

 

5.4 ShutdownHookManager.java

Add:

  public static void addShutdownHook(Runnable shutdownHook) {
    addShutdownHook(shutdownHook, 1);
  } 

5.5 MetaStoreUtils.java 

Change the package declaration to

package org.apache.hadoop.hive.metastore.utils;

(i.e. append .utils and move the file into a matching utils/ directory; the rest of the original file stays unchanged), add

import org.apache.hadoop.hive.metastore.*;

and then add this method:

  public static String getColumnNameDelimiter(List<FieldSchema> fieldSchemas) {
    // we first take a look if any fieldSchemas contain COMMA
    for (int i = 0; i < fieldSchemas.size(); i++) {
      if (fieldSchemas.get(i).getName().contains(",")) {
        return String.valueOf('\0');
      }
    }
    return String.valueOf(',');
  }

 

6 Start Kylin

$KYLIN_HOME/bin/kylin.sh start

Visit http://your-host:7070/kylin/

Log in as admin/KYLIN. A 404 means something went wrong; check the logs under $KYLIN_HOME/logs.

Fixes for some common errors:

6.1 Error: .keystore not found

Just mkdir conf/.keystore

6.2 Error: contrib/capacity-scheduler/*.jar not found

Just mkdir -p hadoop-2.8.5-1.1.0/contrib/capacity-scheduler/

then, inside that directory, touch dummy and zip -r dummy.jar dummy

6.3 Error: More than one fragment with the name org_apache_tomcat_websocket

Delete tomcat/webapps/kylin/WEB-INF/lib/jul-to-slf4j-1.7.5.jar and jcl-over-slf4j-1.7.21.jar

6.4 Error: HiveHook (or similar) class not found

cp /usr/lib/hive-current/lib/meta-hive-hook-1.0.1.jar apache-hive-1.2.1-bin/lib/

6.5 Error: loader constraint violation

rm tomcat/webapps/kylin/WEB-INF/lib/slf4j-api-1.7.21.jar

cp ./hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar ./tomcat/webapps/kylin/WEB-INF/lib/

rm ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/slf4j-api-1.7.16.jar 

cp ./hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-api-1.7.10.jar ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/

rm ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/jul-to-slf4j-1.7.16.jar

rm ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/jcl-over-slf4j-1.7.16.jar

rm ./apache-hive-1.2.1-bin/hcatalog/share/webhcat/svr/lib/jul-to-slf4j-1.7.5.jar

rm ./hadoop-2.8.5-1.1.0/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/jul-to-slf4j-1.7.10.jar

rm ./hadoop-2.8.5-1.1.0/share/hadoop/kms/tomcat/webapps/kms/WEB-INF/lib/slf4j-log4j12-1.7.10.jar

rm ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/slf4j-log4j12-1.7.16.jar

cp ./hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar ./spark-2.3.2-1.2.0-bin-hadoop2.8/jars/

cp hadoop-2.8.5-1.1.0/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar tomcat/webapps/kylin/WEB-INF/lib/

6.6 Error: derbyLocale* jars not found.

Harmless; just ignore it.

 

7 Load the sample data

${KYLIN_HOME}/bin/sample.sh

If it does not succeed, log in again and reset the environment variables:

export KYLIN_HOME=`pwd`

export HIVE_CONF=/etc/ecm/hive-conf (the Hive configuration directory on your EMR cluster)

export HADOOP_HOME=/home/hadoop/apache-kylin-2.6.3-bin-hbase1x/hadoop-2.8.5-1.1.0

(i.e. drop the last two lines from step 2)

In the web UI, go to /System/, click Reload Metadata, and refresh the page.

The Model page should now show kylin_sales_cube with status DISABLED.

 

8 Build the sample cube

On the /Model/ page, choose Build under Actions, pick the date range 2012-01-01 to 2012-01-02, confirm, then check progress on the Monitor page.

The build succeeds after about 10 minutes. Then open the Insight page, pick learn_kylin from the -- Choose Project -- dropdown at the top, and run select count(*) from KYLIN_SALES; it should return a correct result.

If it fails, check the YARN error logs, or reach me on DingTalk: 13699124376.

