Class JavaPairDStream<K,V>
Object
org.apache.spark.streaming.api.java.JavaPairDStream<K,V>
- All Implemented Interfaces:
Serializable, JavaDStreamLike<scala.Tuple2<K,V>, JavaPairDStream<K,V>, JavaPairRDD<K,V>>
- Direct Known Subclasses:
JavaPairInputDStream
A Java-friendly interface to a DStream of key-value pairs, which provides extra methods like reduceByKey and join.
-
Constructor Summary
Constructors -
Method Summary
cache() - Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
classTag()
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other) - Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other, int numPartitions) - Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other, Partitioner partitioner) - Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream.
<C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner) - Combine elements of each key in DStream's RDDs using custom functions.
<C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner, boolean mapSideCombine) - Combine elements of each key in DStream's RDDs using custom functions.
compute(...) - Method that generates an RDD for the given Duration.
dstream()
filter(...) - Return a new DStream containing only the elements that satisfy a predicate.
<U> JavaPairDStream<K,U> flatMapValues(FlatMapFunction<V,U> f) - Return a new DStream by applying a flatmap function to the value of each key-value pair in 'this' DStream without changing the key.
static <K,V> JavaPairDStream<K,V> fromJavaDStream(JavaDStream<scala.Tuple2<K,V>> dstream)
static <K,V> JavaPairDStream<K,V> fromPairDStream(DStream<scala.Tuple2<K,V>> dstream, scala.reflect.ClassTag<K> evidence$1, scala.reflect.ClassTag<V> evidence$2)
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other) - Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other, int numPartitions) - Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,Optional<W>>> fullOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner) - Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream.
groupByKey() - Return a new DStream by applying groupByKey to each RDD.
groupByKey(int numPartitions) - Return a new DStream by applying groupByKey to each RDD.
groupByKey(Partitioner partitioner) - Return a new DStream by applying groupByKey on each RDD of this DStream.
groupByKeyAndWindow(Duration windowDuration) - Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration windowDuration, Duration slideDuration) - Return a new DStream by applying groupByKey over a sliding window.
groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, int numPartitions) - Return a new DStream by applying groupByKey over a sliding window on this DStream.
groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, Partitioner partitioner) - Return a new DStream by applying groupByKey over a sliding window on this DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other) - Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other, int numPartitions) - Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other, Partitioner partitioner) - Return a new DStream by applying 'join' between RDDs of this DStream and other DStream.
kManifest() - scala.reflect.ClassTag<K>
<W> JavaPairDStream<K,scala.Tuple2<V,Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other) - Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other, int numPartitions) - Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<V,Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner) - Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream.
<U> JavaPairDStream<K,U> mapValues(...) - Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
<StateType,MappedType> JavaMapWithStateDStream<K,V,StateType,MappedType> mapWithState(StateSpec<K,V,StateType,MappedType> spec) - Return a JavaMapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key.
persist() - Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER).
persist(StorageLevel storageLevel) - Persist the RDDs of this DStream with the given storage level.
reduceByKey(Function2<V,V,V> func) - Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V,V,V> func, int numPartitions) - Return a new DStream by applying reduceByKey to each RDD.
reduceByKey(Function2<V,V,V> func, Partitioner partitioner) - Return a new DStream by applying reduceByKey to each RDD.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration) - Create a new DStream by applying reduceByKey over a sliding window on this DStream.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration) - Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions) - Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner) - Return a new DStream by applying reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration) - Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions, Function<scala.Tuple2<K,V>,Boolean> filterFunc) - Return a new DStream by applying incremental reduceByKey over a sliding window.
reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner, Function<scala.Tuple2<K,V>,Boolean> filterFunc) - Return a new DStream by applying incremental reduceByKey over a sliding window.
repartition(int numPartitions) - Return a new DStream with an increased or decreased level of parallelism.
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other) - Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other, int numPartitions) - Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
<W> JavaPairDStream<K,scala.Tuple2<Optional<V>,W>> rightOuterJoin(JavaPairDStream<K,W> other, Partitioner partitioner) - Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream.
void saveAsHadoopFiles(String prefix, String suffix) - Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass) - Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.mapred.JobConf conf) - Save each RDD in this DStream as a Hadoop file.
void saveAsNewAPIHadoopFiles(String prefix, String suffix) - Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass) - Save each RDD in this DStream as a Hadoop file.
<F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.conf.Configuration conf) - Save each RDD in this DStream as a Hadoop file.
static <K> JavaPairDStream<K,Long> scalaToJavaLong(JavaPairDStream<K,Object> dstream, scala.reflect.ClassTag<K> evidence$3)
toJavaDStream() - Convert to a JavaDStream of scala.Tuple2<K,V>.
union(JavaPairDStream<K,V> that) - Return a new DStream by unifying data of another DStream with this DStream.
<S> JavaPairDStream<K,S> updateStateByKey(...) - Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
<S> JavaPairDStream<K,S> updateStateByKey(..., int numPartitions) - Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key.
<S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>,Optional<S>,Optional<S>> updateFunc, Partitioner partitioner) - Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
<S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>,Optional<S>,Optional<S>> updateFunc, Partitioner partitioner, JavaPairRDD<K,S> initialRDD) - Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key.
vManifest() - scala.reflect.ClassTag<V>
window(Duration windowDuration) - Return a new DStream which is computed based on windowed batches of this DStream.
window(Duration windowDuration, Duration slideDuration) - Return a new DStream which is computed based on windowed batches of this DStream.
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.streaming.api.java.JavaDStreamLike
checkpoint, context, count, countByValue, countByValue, countByValueAndWindow, countByValueAndWindow, countByWindow, flatMap, flatMapToPair, foreachRDD, foreachRDD, glom, map, mapPartitions, mapPartitionsToPair, mapToPair, print, print, reduce, reduceByWindow, reduceByWindow, scalaIntToJavaLong, slice, transform, transform, transformToPair, transformToPair, transformWith, transformWith, transformWithToPair, transformWithToPair
-
Constructor Details
-
JavaPairDStream
-
-
Method Details
-
fromPairDStream
public static <K,V> JavaPairDStream<K,V> fromPairDStream(DStream<scala.Tuple2<K, V>> dstream, scala.reflect.ClassTag<K> evidence$1, scala.reflect.ClassTag<V> evidence$2) -
fromJavaDStream
-
scalaToJavaLong
public static <K> JavaPairDStream<K,Long> scalaToJavaLong(JavaPairDStream<K, Object> dstream, scala.reflect.ClassTag<K> evidence$3) -
dstream
-
kManifest
-
vManifest
-
wrapRDD
-
filter
Return a new DStream containing only the elements that satisfy a predicate. -
cache
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER) -
persist
Persist RDDs of this DStream with the default storage level (MEMORY_ONLY_SER) -
persist
Persist the RDDs of this DStream with the given storage level -
repartition
Return a new DStream with an increased or decreased level of parallelism. Each RDD in the returned DStream has exactly numPartitions partitions.
Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)
-
compute
Method that generates an RDD for the given Duration -
window
Return a new DStream which is computed based on windowed batches of this DStream. The new DStream generates RDDs with the same interval as this DStream.
Parameters:
windowDuration - width of the window; must be a multiple of this DStream's interval.
Returns:
-
window
Return a new DStream which is computed based on windowed batches of this DStream.
Parameters:
windowDuration - duration (i.e., width) of the window; must be a multiple of this DStream's interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's interval
Returns:
(undocumented)
-
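The relationship between windowDuration, slideDuration, and the batching interval can be sketched in plain Java. This is illustrative only, not Spark code; the class and method names are hypothetical, and it simply lists which batch indices a window ending at batch t covers:

```java
import java.util.ArrayList;
import java.util.List;

public class WindowSketch {
    // Hypothetical helper: batch indices covered by a window ending at batch t.
    // windowMs must be a multiple of batchMs, as the documentation requires.
    static List<Integer> batchesInWindow(int t, long batchMs, long windowMs) {
        int n = (int) (windowMs / batchMs); // number of batches per window
        List<Integer> ids = new ArrayList<>();
        for (int i = t - n + 1; i <= t; i++) {
            if (i >= 0) ids.add(i); // early windows are partially filled
        }
        return ids;
    }
}
```

With a 1-second batch interval and a 3-second window, the window ending at batch 5 covers batches 3, 4, and 5; slideDuration then controls how often such a window is emitted.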
union
Return a new DStream by unifying data of another DStream with this DStream.- Parameters:
that- Another DStream having the same interval (i.e., slideDuration) as this DStream.- Returns:
- (undocumented)
-
groupByKey
Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Returns:
(undocumented)
-
groupByKey
Return a new DStream by applying groupByKey to each RDD. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
numPartitions - (undocumented)
Returns:
(undocumented)
-
groupByKey
Return a new DStream by applying groupByKey on each RDD of this DStream. Therefore, the values for each key in this DStream's RDDs are grouped into a single sequence to generate the RDDs of the new DStream. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
partitioner - (undocumented)
Returns:
(undocumented)
-
reduceByKey
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the associative and commutative reduce function. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
func - (undocumented)
Returns:
(undocumented)
-
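The per-batch effect of reduceByKey can be sketched without Spark: within one RDD, all values sharing a key are folded together with the associative, commutative function. This is a plain-Java illustration of the semantics, not Spark's implementation:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BinaryOperator;

public class ReduceByKeySketch {
    // Merge all values sharing a key with the supplied reduce function,
    // mirroring what reduceByKey does to each RDD in the stream.
    static <K, V> Map<K, V> reduceByKey(List<Map.Entry<K, V>> batch, BinaryOperator<V> func) {
        Map<K, V> out = new HashMap<>();
        for (Map.Entry<K, V> e : batch) {
            out.merge(e.getKey(), e.getValue(), func);
        }
        return out;
    }
}
```

Because the function must be associative and commutative, Spark is free to combine values in any order, including partially on the map side.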
reduceByKey
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
func - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)
-
reduceByKey
Return a new DStream by applying reduceByKey to each RDD. The values for each key are merged using the supplied reduce function. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
func - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
-
combineByKey
public <C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner)
Combine elements of each key in DStream's RDDs using custom functions. This is similar to combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions for more information.
Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
-
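The roles of the three functions can be sketched in plain Java: createCombiner seeds a combiner C from the first value of a key within a partition, mergeValue folds further values into it, and mergeCombiners merges combiners across partitions. This is an illustration of the contract, not Spark code; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

public class CombineByKeySketch {
    // Fold one partition's records into per-key combiners.
    static <K, V, C> Map<K, C> combinePartition(List<Map.Entry<K, V>> part,
            Function<V, C> createCombiner, BiFunction<C, V, C> mergeValue) {
        Map<K, C> out = new HashMap<>();
        for (Map.Entry<K, V> e : part) {
            C c = out.get(e.getKey());
            out.put(e.getKey(), c == null ? createCombiner.apply(e.getValue())
                                          : mergeValue.apply(c, e.getValue()));
        }
        return out;
    }

    // Merge two partitions' combiner maps with mergeCombiners.
    static <K, C> Map<K, C> mergePartitions(Map<K, C> a, Map<K, C> b,
            BiFunction<C, C, C> mergeCombiners) {
        Map<K, C> out = new HashMap<>(a);
        b.forEach((k, c) -> out.merge(k, c, mergeCombiners));
        return out;
    }
}
```

The mapSideCombine flag on the second overload controls whether the per-partition step runs before the shuffle.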
combineByKey
public <C> JavaPairDStream<K,C> combineByKey(Function<V,C> createCombiner, Function2<C,V,C> mergeValue, Function2<C,C,C> mergeCombiners, Partitioner partitioner, boolean mapSideCombine)
Combine elements of each key in DStream's RDDs using custom functions. This is similar to combineByKey for RDDs. Please refer to combineByKey in org.apache.spark.rdd.PairRDDFunctions for more information.
Parameters:
createCombiner - (undocumented)
mergeValue - (undocumented)
mergeCombiners - (undocumented)
partitioner - (undocumented)
mapSideCombine - (undocumented)
Returns:
(undocumented)
-
groupByKeyAndWindow
Return a new DStream by applying groupByKey over a sliding window. This is similar to DStream.groupByKey() but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
(undocumented)
-
groupByKeyAndWindow
public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration)
Return a new DStream by applying groupByKey over a sliding window. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)
-
groupByKeyAndWindow
public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, int numPartitions)
Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)
-
groupByKeyAndWindow
public JavaPairDStream<K,Iterable<V>> groupByKeyAndWindow(Duration windowDuration, Duration slideDuration, Partitioner partitioner)
Return a new DStream by applying groupByKey over a sliding window on this DStream. Similar to DStream.groupByKey(), but applies it over a sliding window.
Parameters:
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration)
Create a new DStream by applying reduceByKey over a sliding window on this DStream. Similar to DStream.reduceByKey(), but applies it over a sliding window. The new DStream generates RDDs with the same interval as this DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
reduceFunc - associative and commutative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration)
Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
reduceFunc - associative and commutative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions)
Return a new DStream by applying reduceByKey over a sliding window. This is similar to DStream.reduceByKey() but applies it over a sliding window. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
reduceFunc - associative and commutative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner)
Return a new DStream by applying reduceByKey over a sliding window. Similar to DStream.reduceByKey(), but applies it over a sliding window.
Parameters:
reduceFunc - associative and commutative reduce function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)
This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
reduceFunc - associative and commutative reduce function
invReduceFunc - inverse function; such that for all y, invertible x: invReduceFunc(reduceFunc(x, y), x) = y
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
Returns:
(undocumented)
-
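The incremental update described above can be sketched in plain Java for the common counting case: instead of re-reducing the whole window, the new window's counts are derived from the old window's counts by adding the entering batch and subtracting the leaving batch. This is an illustration of the idea only, not Spark internals; the class name and the zero-removal step are hypothetical:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IncrementalWindowSketch {
    // newWindow = oldWindow, plus reduceFunc over entering values,
    // plus invReduceFunc over leaving values (here: add / subtract counts).
    static Map<String, Integer> slide(Map<String, Integer> oldWindow,
            List<Map.Entry<String, Integer>> entering,
            List<Map.Entry<String, Integer>> leaving) {
        Map<String, Integer> w = new HashMap<>(oldWindow);
        for (Map.Entry<String, Integer> e : entering) {
            w.merge(e.getKey(), e.getValue(), Integer::sum);   // reduceFunc
        }
        for (Map.Entry<String, Integer> e : leaving) {
            w.merge(e.getKey(), -e.getValue(), Integer::sum);  // invReduceFunc
        }
        w.values().removeIf(v -> v == 0); // drop keys whose count fell back to zero
        return w;
    }
}
```

This is why the method needs an invertible reduce function: subtraction must exactly undo the contribution of the values that left the window. The filterFunc parameter of the longer overloads plays the role of the zero-removal step here.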
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, int numPartitions, Function<scala.Tuple2<K,V>,Boolean> filterFunc)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)
This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions". Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
reduceFunc - associative and commutative reduce function
invReduceFunc - inverse function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
numPartitions - number of partitions of each RDD in the new DStream.
filterFunc - function to filter expired key-value pairs; only pairs that satisfy the function are retained; set this to null if you do not want to filter
Returns:
(undocumented)
-
reduceByKeyAndWindow
public JavaPairDStream<K,V> reduceByKeyAndWindow(Function2<V,V,V> reduceFunc, Function2<V,V,V> invReduceFunc, Duration windowDuration, Duration slideDuration, Partitioner partitioner, Function<scala.Tuple2<K,V>,Boolean> filterFunc)
Return a new DStream by applying incremental reduceByKey over a sliding window. The reduced value over a new window is calculated using the old window's reduced value:
1. reduce the new values that entered the window (e.g., adding new counts)
2. "inverse reduce" the old values that left the window (e.g., subtracting old counts)
This is more efficient than reduceByKeyAndWindow without an "inverse reduce" function. However, it is applicable only to "invertible reduce functions".
Parameters:
reduceFunc - associative and commutative reduce function
invReduceFunc - inverse function
windowDuration - width of the window; must be a multiple of this DStream's batching interval
slideDuration - sliding interval of the window (i.e., the interval after which the new DStream will generate RDDs); must be a multiple of this DStream's batching interval
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
filterFunc - function to filter expired key-value pairs; only pairs that satisfy the function are retained; set this to null if you do not want to filter
Returns:
(undocumented)
-
mapWithState
public <StateType,MappedType> JavaMapWithStateDStream<K,V,StateType,MappedType> mapWithState(StateSpec<K,V,StateType,MappedType> spec)
Return a JavaMapWithStateDStream by applying a function to every key-value element of this stream, while maintaining some state data for each unique key. The mapping function and other specification (e.g. partitioners, timeouts, initial state data, etc.) of this transformation can be specified using the StateSpec class. The state data is accessible as a parameter of type State in the mapping function.

Example of using mapWithState:

// A mapping function that maintains an integer state and returns a String
Function3<String, Optional<Integer>, State<Integer>, String> mappingFunc =
    new Function3<String, Optional<Integer>, State<Integer>, String>() {
        @Override
        public String call(String key, Optional<Integer> value, State<Integer> state) {
            // Use state.exists(), state.get(), state.update() and state.remove()
            // to manage state, and return the necessary string
        }
    };

JavaMapWithStateDStream<String, Integer, Integer, String> mapWithStateDStream =
    keyValueDStream.mapWithState(StateSpec.function(mappingFunc));

Parameters:
spec - Specification of this transformation
Returns:
(undocumented)
-
updateStateByKey
public <S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>> updateFunc)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
Returns:
(undocumented)
-
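The updateStateByKey contract can be sketched in plain Java: for every key present in the batch or in the old state, the update function receives the batch's new values and the previous state, and returning an empty Optional removes the key. This is an illustration of the semantics, not Spark code; the class and method names are hypothetical:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.function.BiFunction;

public class UpdateStateSketch {
    // Apply updateFunc(newValues, oldState) for every key seen in the batch
    // or carried in the old state; Optional.empty() drops the key.
    static <K, V, S> Map<K, S> update(Map<K, S> oldState, Map<K, List<V>> batch,
            BiFunction<List<V>, Optional<S>, Optional<S>> updateFunc) {
        Map<K, S> newState = new HashMap<>();
        Set<K> keys = new HashSet<>(oldState.keySet());
        keys.addAll(batch.keySet());
        for (K k : keys) {
            List<V> vs = batch.getOrDefault(k, List.of());
            updateFunc.apply(vs, Optional.ofNullable(oldState.get(k)))
                      .ifPresent(s -> newState.put(k, s));
        }
        return newState;
    }
}
```

Note that the function is also invoked for keys with no new values in the batch, which is what lets state time out or be cleaned up.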
updateStateByKey
public <S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>> updateFunc, int numPartitions)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of each key. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
numPartitions - Number of partitions of each RDD in the new DStream.
Returns:
(undocumented)
-
updateStateByKey
public <S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>> updateFunc, Partitioner partitioner)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
Returns:
(undocumented)
-
updateStateByKey
public <S> JavaPairDStream<K,S> updateStateByKey(Function2<List<V>, Optional<S>, Optional<S>> updateFunc, Partitioner partitioner, JavaPairRDD<K,S> initialRDD)
Return a new "state" DStream where the state for each key is updated by applying the given function on the previous state of the key and the new values of the key. org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
updateFunc - State update function. If this function returns None, then the corresponding state key-value pair will be eliminated.
partitioner - Partitioner for controlling the partitioning of each RDD in the new DStream.
initialRDD - initial state value of each key.
Returns:
(undocumented)
-
mapValues
Return a new DStream by applying a map function to the value of each key-value pair in 'this' DStream without changing the key.
Parameters:
f - (undocumented)
Returns:
(undocumented)
-
flatMapValues
Return a new DStream by applying a flatmap function to the value of each key-value pair in 'this' DStream without changing the key.
Parameters:
f - (undocumented)
Returns:
(undocumented)
-
cogroup
public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
other - (undocumented)
Returns:
(undocumented)
-
cogroup
public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other, int numPartitions)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)
-
cogroup
public <W> JavaPairDStream<K,scala.Tuple2<Iterable<V>,Iterable<W>>> cogroup(JavaPairDStream<K,W> other, Partitioner partitioner)
Return a new DStream by applying 'cogroup' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
-
join
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
other - (undocumented)
Returns:
(undocumented)
-
join
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)
-
join
public <W> JavaPairDStream<K,scala.Tuple2<V,W>> join(JavaPairDStream<K,W> other, Partitioner partitioner)
Return a new DStream by applying 'join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
Parameters:
other - (undocumented)
partitioner - (undocumented)
Returns:
(undocumented)
-
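The per-batch effect of 'join' can be sketched without Spark: for each key present on both sides, every pairing of a left value with a right value is emitted; keys missing from either side produce nothing (the outer-join variants below instead wrap the possibly-missing side in Optional). This plain-Java illustration uses hypothetical names:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinSketch {
    // Inner join of two per-batch key -> values maps: emit the cross
    // product of values for every key present on both sides.
    static <K, V, W> Map<K, List<Map.Entry<V, W>>> join(Map<K, List<V>> left,
                                                        Map<K, List<W>> right) {
        Map<K, List<Map.Entry<V, W>>> out = new HashMap<>();
        left.forEach((k, vs) -> {
            List<W> ws = right.get(k);
            if (ws == null) return; // key absent on the right: no output
            List<Map.Entry<V, W>> pairs = new ArrayList<>();
            for (V v : vs) for (W w : ws) pairs.add(Map.entry(v, w));
            out.put(k, pairs);
        });
        return out;
    }
}
```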
leftOuterJoin
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
Parameters:
other - (undocumented)
Returns:
(undocumented)
-
leftOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<V,Optional<W>>> leftOuterJoin(JavaPairDStream<K,W> other, int numPartitions)
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
Parameters:
other - (undocumented)
numPartitions - (undocumented)
Returns:
(undocumented)
-
leftOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<V, Optional<W>>> leftOuterJoin(JavaPairDStream<K, W> other, Partitioner partitioner)
Return a new DStream by applying 'left outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- Parameters:
other - (undocumented)
partitioner - (undocumented)
- Returns:
- (undocumented)
-
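The Optional in the return type captures the outer-join semantics: every left value survives, and the right side may be absent. A minimal sketch with plain collections and one value per key (an illustration, not the Spark implementation; rightOuterJoin mirrors this with the Optional on the left side):

```java
import java.util.*;

// Sketch of leftOuterJoin per-key semantics on one batch: every left
// value is kept, paired with an Optional of the matching right value,
// which is empty when the key is absent on the right.
public class LeftOuterJoinSketch {
    public static <K, V, W> Map<K, Map.Entry<V, Optional<W>>> leftOuterJoin(
            Map<K, V> left, Map<K, W> right) {
        Map<K, Map.Entry<V, Optional<W>>> out = new HashMap<>();
        for (Map.Entry<K, V> e : left.entrySet())
            out.put(e.getKey(), Map.entry(
                e.getValue(),
                Optional.ofNullable(right.get(e.getKey()))));
        return out;
    }
}
```

Keys that appear only on the right side are dropped, which is what distinguishes a left outer join from a full outer join.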
rightOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, W>> rightOuterJoin(JavaPairDStream<K, W> other)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
- Parameters:
other - (undocumented)
- Returns:
- (undocumented)
-
rightOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, W>> rightOuterJoin(JavaPairDStream<K, W> other, int numPartitions)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- Parameters:
other - (undocumented)
numPartitions - (undocumented)
- Returns:
- (undocumented)
-
rightOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, W>> rightOuterJoin(JavaPairDStream<K, W> other, Partitioner partitioner)
Return a new DStream by applying 'right outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- Parameters:
other - (undocumented)
partitioner - (undocumented)
- Returns:
- (undocumented)
-
fullOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, Optional<W>>> fullOuterJoin(JavaPairDStream<K, W> other)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with Spark's default number of partitions.
- Parameters:
other - (undocumented)
- Returns:
- (undocumented)
-
fullOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, Optional<W>>> fullOuterJoin(JavaPairDStream<K, W> other, int numPartitions)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. Hash partitioning is used to generate the RDDs with numPartitions partitions.
- Parameters:
other - (undocumented)
numPartitions - (undocumented)
- Returns:
- (undocumented)
-
fullOuterJoin
public <W> JavaPairDStream<K,scala.Tuple2<Optional<V>, Optional<W>>> fullOuterJoin(JavaPairDStream<K, W> other, Partitioner partitioner)
Return a new DStream by applying 'full outer join' between RDDs of this DStream and other DStream. The supplied org.apache.spark.Partitioner is used to control the partitioning of each RDD.
- Parameters:
other - (undocumented)
partitioner - (undocumented)
- Returns:
- (undocumented)
-
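In the full outer join, both sides of the resulting Tuple2 are Optional: every key seen on either side appears in the output, with an empty Optional where that key was missing. A sketch with plain collections and one value per key (illustrative only, not the Spark implementation):

```java
import java.util.*;

// Sketch of fullOuterJoin per-key semantics on one batch: the union of
// keys from both sides, each side's value wrapped in an Optional that is
// empty where the key was absent on that side.
public class FullOuterJoinSketch {
    public static <K, V, W> Map<K, Map.Entry<Optional<V>, Optional<W>>> fullOuterJoin(
            Map<K, V> left, Map<K, W> right) {
        Map<K, Map.Entry<Optional<V>, Optional<W>>> out = new HashMap<>();
        Set<K> keys = new HashSet<>(left.keySet());
        keys.addAll(right.keySet());
        for (K k : keys)
            out.put(k, Map.entry(
                Optional.ofNullable(left.get(k)),
                Optional.ofNullable(right.get(k))));
        return out;
    }
}
```

At least one of the two Optionals is always present; a key with no value on either side would simply not be in either input.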
saveAsHadoopFiles
public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix, String suffix)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
-
saveAsHadoopFiles
public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
-
saveAsHadoopFiles
public <F extends org.apache.hadoop.mapred.OutputFormat<?,?>> void saveAsHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.mapred.JobConf conf)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
conf - (undocumented)
-
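The documented "prefix-TIME_IN_MS.suffix" naming scheme can be illustrated directly. Spark derives TIME_IN_MS from each batch's time internally; the empty-suffix handling below is an assumption for illustration, not the exact Spark implementation.

```java
// Sketch of the documented per-batch file naming: "prefix-TIME_IN_MS.suffix".
// Assumption: when the suffix is empty, no trailing dot is appended.
public class BatchFileNaming {
    public static String batchPath(String prefix, long timeInMs, String suffix) {
        String base = prefix + "-" + timeInMs;
        return (suffix == null || suffix.isEmpty()) ? base : base + "." + suffix;
    }
}
```

Because the batch time is embedded in the path, each batch interval writes to a distinct output directory rather than overwriting earlier batches.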
saveAsNewAPIHadoopFiles
public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix, String suffix)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
-
saveAsNewAPIHadoopFiles
public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
-
saveAsNewAPIHadoopFiles
public <F extends org.apache.hadoop.mapreduce.OutputFormat<?,?>> void saveAsNewAPIHadoopFiles(String prefix, String suffix, Class<?> keyClass, Class<?> valueClass, Class<F> outputFormatClass, org.apache.hadoop.conf.Configuration conf)
Save each RDD in this DStream as a Hadoop file. The file name at each batch interval is generated based on prefix and suffix: "prefix-TIME_IN_MS.suffix".
- Parameters:
prefix - (undocumented)
suffix - (undocumented)
keyClass - (undocumented)
valueClass - (undocumented)
outputFormatClass - (undocumented)
conf - (undocumented)
-
toJavaDStream
public JavaDStream<scala.Tuple2<K,V>> toJavaDStream()
Convert to a JavaDStream.
-
classTag
-