
I was trying to build the DataStax Spark Cassandra connector, but the assembly fails. I followed this link:

http://www.planetcassandra.org/blog/kindling-an-introduction-to-spark-with-cassandra/ 

The linked article asks you to download the connector from git and build it with sbt. But when I try to run the command ./sbt/sbt assembly, it fails with the errors below:

Launching sbt from sbt/sbt-launch-0.13.8.jar 
[info] Loading project definition from /home/naresh/Desktop/spark-cassandra-connector/project 
Using releases: https://oss.sonatype.org/service/local/staging/deploy/maven2 for releases 
Using snapshots: https://oss.sonatype.org/content/repositories/snapshots for snapshots 

    Scala: 2.10.5 [To build against Scala 2.11 use '-Dscala-2.11=true'] 
    Scala Binary: 2.10 
    Java: target=1.7 user=1.7.0_79 

[info] Set current project to root (in build file:/home/naresh/Desktop/spark-cassandra-connector/) 
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist 
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist 
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist 
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist 
    [warn] Credentials file /home/hduser/.ivy2/.credentials does not exist 
    [info] Compiling 140 Scala sources and 1 Java source to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/target/scala-2.10/classes... 
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:48: not found: value processTableIdentifier 
    [error]  val id = processTableIdentifier(tableIdentifier).reverse.lift 
    [error]   ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraCatalog.scala:134: value toSeq is not a member of org.apache.spark.sql.catalyst.TableIdentifier 
    [error]  cachedDataSourceTables.refresh(tableIdent.toSeq) 
    [error]            ^
    [error] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector/src/main/scala/org/apache/spark/sql/cassandra/CassandraSQLContext.scala:94: not found: value BroadcastNestedLoopJoin 
    [error]  BroadcastNestedLoopJoin 
    [error]  ^
    [error] three errors found 
    [info] Compiling 11 Scala sources to /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/target/scala-2.10/classes... 
    [warn] /home/naresh/Desktop/spark-cassandra-connector/spark-cassandra-connector-embedded/src/main/scala/com/datastax/spark/connector/embedded/SparkTemplate.scala:69: value actorSystem in class SparkEnv is deprecated: Actor system is no longer supported as of 1.4.0 
    [warn] def actorSystem: ActorSystem = SparkEnv.get.actorSystem 
    [warn]            ^
    [warn] one warning found 
    [error] (spark-cassandra-connector/compile:compileIncremental) Compilation failed 
    [error] Total time: 27 s, completed 4 Nov, 2015 12:34:33 PM 
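
For context, the build flow the linked article describes is just a git clone followed by the bundled sbt launcher. The sketch below is not from the question itself: the repository URL is the well-known DataStax one, the release-branch checkout is only a hedged suggestion in case the errors above come from building master against a mismatched Spark version, and the -Dscala-2.11=true flag is taken from the log output above.

    # clone the connector sources and enter the checkout
    git clone https://github.com/datastax/spark-cassandra-connector.git
    cd spark-cassandra-connector

    # (optional, assumption) build from a release branch that matches your Spark
    # version instead of master; the branch name here is only an example
    git checkout b1.4

    # run the bundled sbt launcher; add -Dscala-2.11=true to build against Scala 2.11
    ./sbt/sbt assembly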

Answer


This works for me: run mvn -DskipTests clean package

  • You can find the Spark build command in the README.md file in the Spark directory.
  • Before running that command, you need to configure Maven to use more memory than usual by setting MAVEN_OPTS: export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m" (see the sketch after this list).
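
Putting the two steps of this answer together, a minimal sketch of the Maven build (run from your Spark source directory; the cd path is an assumption) looks like this:

    # give Maven more memory than the default, as the answer advises
    export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"

    # build Spark without running the test suite
    cd spark    # path to your Spark checkout (assumed)
    mvn -DskipTests clean package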

I was building it in Scala using sbt, not via Maven – Naresh
