Commit 001354c

feat: Enable Arrow-based columnar data transfers when converting to pandas in the Spark source retrieval job
Signed-off-by: tanlocnguyen <tanlocnguyen296@gmail.com>
1 parent e6fc736 commit 001354c

File tree

1 file changed

+5
-0
lines changed
  • sdk/python/feast/infra/offline_stores/contrib/spark_offline_store


sdk/python/feast/infra/offline_stores/contrib/spark_offline_store/spark.py

Lines changed: 5 additions & 0 deletions

```diff
@@ -338,6 +338,11 @@ def to_spark_df(self) -> pyspark.sql.DataFrame:

     def _to_df_internal(self, timeout: Optional[int] = None) -> pd.DataFrame:
         """Return dataset as Pandas DataFrame synchronously"""
+        spark_session = get_spark_session_or_start_new_with_repoconfig(
+            self._config.offline_store
+        )
+        spark_session.conf.set("spark.sql.execution.arrow.fallback.enabled", "true")
+        spark_session.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
         return self.to_spark_df().toPandas()

     def _to_arrow_internal(self, timeout: Optional[int] = None) -> pyarrow.Table:
```
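The core of the change is setting two Spark SQL confs before calling `toPandas()`, so the Spark-to-pandas transfer goes through Arrow's columnar path instead of row-by-row serialization. A minimal standalone sketch of that pattern (the `enable_arrow` helper is a hypothetical name, not part of the commit; note that on Spark 3.x the fallback key is spelled `spark.sql.execution.arrow.pyspark.fallback.enabled`, whereas the commit uses the older Spark 2.x spelling):

```python
# Confs that route DataFrame.toPandas() through Arrow (Spark 3.x key names).
ARROW_CONFS = {
    # Use Arrow for the Spark <-> pandas columnar transfer.
    "spark.sql.execution.arrow.pyspark.enabled": "true",
    # Fall back to the non-Arrow path if a data type is unsupported.
    "spark.sql.execution.arrow.pyspark.fallback.enabled": "true",
}


def enable_arrow(spark_session):
    """Apply the Arrow-related confs to an existing SparkSession-like object."""
    for key, value in ARROW_CONFS.items():
        spark_session.conf.set(key, value)
    return spark_session
```

With these confs applied, a subsequent `df.toPandas()` on that session uses Arrow batches; setting the fallback conf keeps the conversion working for types Arrow cannot handle.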
