PySpark3: 'tzinfo' attribute error when parsing yyyyMMddhhmmss into TimestampType()

I have this CSV file (test.csv) containing the following:


COLUMN_STRING;COLUMN_INT;COLUMN_TIMESTAMP 
String_Value_1;123456;20131226224757 
String_Value_2;234567;20141227234858 
String_Value_3;345678;20151228214555 

I am trying to import the third column, a yyyyMMddhhmmss timestamp, into TimestampType() using the following code:


from pyspark.sql.types import * 
data = sc.textFile('test.csv')\ 
       .map(lambda s: s.split(";"))\ 
       .filter(lambda v: v[0] != 'COLUMN_STRING') \ 
       .map(lambda v: (v[0], int(v[1]), v[2])) 

schema = StructType([StructField('COLUMN_STRING',StringType(),False), 
      StructField('COLUMN_INT',IntegerType(),False), 
      StructField('COLUMN_TIMESTAMP',TimestampType(),False)]) 

df = sqlContext.createDataFrame(data, schema) 
df.take(2) 

When I run it in PySpark3 on a Spark 1.6.3 cluster (HDP3.5), I get an error saying "'str' object has no attribute 'tzinfo'"...

This is a simplified example; I need to find out how to import yyyyMMddhhmmss timestamps like these into TimestampType() without changing the source data (my actual data is massive, so modifying the source is NOT an option).

The error message that occurred is included below.

Any help is appreciated. Thanks, MT


Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/dataframe.py", line 306, in take 
    self._jdf, num) 
    File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__ 
    answer, self.gateway_client, self.target_id, self.name) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/utils.py", line 45, in deco 
    return f(*a, **kw) 
    File "/usr/hdp/current/spark-client/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value 
    format(target_id, ".", name), value) 
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.sql.execution.EvaluatePython.takeAndServe. 
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, 10.0.0.13): org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in toInternal 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in <genexpr> 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal 
    return self.dataType.toInternal(obj) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal 
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo 
AttributeError: 'str' object has no attribute 'tzinfo' 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    at java.lang.Thread.run(Thread.java:745) 

Driver stacktrace: 
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1433) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1421) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1420) 
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) 
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47) 
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1420) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801) 
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:801) 
    at scala.Option.foreach(Option.scala:236) 
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:801) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601) 
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590) 
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) 
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:622) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1831) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1844) 
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:1857) 
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply$mcI$sp(python.scala:126) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply(python.scala:124) 
    at org.apache.spark.sql.execution.EvaluatePython$$anonfun$takeAndServe$1.apply(python.scala:124) 
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) 
    at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2087) 
    at org.apache.spark.sql.execution.EvaluatePython$.takeAndServe(python.scala:124) 
    at org.apache.spark.sql.execution.EvaluatePython.takeAndServe(python.scala) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381) 
    at py4j.Gateway.invoke(Gateway.java:259) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:209) 
    at java.lang.Thread.run(Thread.java:745) 
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last): 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 111, in main 
    process() 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/worker.py", line 106, in process 
    serializer.dump_stream(func(split_index, iterator), outfile) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/serializers.py", line 263, in dump_stream 
    vs = list(itertools.islice(iterator, batch)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in toInternal 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/pyspark/sql/types.py", line 541, in <genexpr> 
    return tuple(f.toInternal(v) for f, v in zip(self.fields, obj)) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 435, in toInternal 
    return self.dataType.toInternal(obj) 
    File "/usr/hdp/current/spark-client/python/lib/pyspark.zip/pyspark/sql/types.py", line 190, in toInternal 
    seconds = (calendar.timegm(dt.utctimetuple()) if dt.tzinfo 
AttributeError: 'str' object has no attribute 'tzinfo' 

    at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166) 
    at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207) 
    at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125) 
    at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) 
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:313) 
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:277) 
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66) 
    at org.apache.spark.scheduler.Task.run(Task.scala:89) 
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
    ... 1 more 

Answer


The schema has to reflect the types you actually have, so if you want to go through an RDD you have to parse the value first:

# requires: import datetime
.map(lambda v: (
    v[0], int(v[1]), datetime.datetime.strptime(v[2], "%Y%m%d%H%M%S")))
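
For reference, here is a minimal end-to-end sketch of that approach against the test.csv above (assuming the same Spark 1.6 shell with sc and sqlContext already available; only the timestamp parsing differs from the code in the question):

import datetime
from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, TimestampType)

data = sc.textFile('test.csv')\
         .map(lambda s: s.split(";"))\
         .filter(lambda v: v[0] != 'COLUMN_STRING')\
         .map(lambda v: (v[0],
                         int(v[1]),
                         # strptime turns '20131226224757' into a naive
                         # datetime.datetime, which TimestampType() accepts
                         datetime.datetime.strptime(v[2], "%Y%m%d%H%M%S")))

schema = StructType([StructField('COLUMN_STRING', StringType(), False),
                     StructField('COLUMN_INT', IntegerType(), False),
                     StructField('COLUMN_TIMESTAMP', TimestampType(), False)])

df = sqlContext.createDataFrame(data, schema)
df.take(2)

Once the column holds a Python datetime rather than a str, TimestampType().toInternal() no longer tries to read .tzinfo from a string, so the error goes away.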
