public class FlumeUtils
extends Object
| Constructor and Description |
|---|
| FlumeUtils() |

| Modifier and Type | Method and Description |
|---|---|
| static JavaReceiverInputDStream<SparkFlumeEvent> | createPollingStream(JavaStreamingContext jssc, java.net.InetSocketAddress[] addresses, StorageLevel storageLevel)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createPollingStream(JavaStreamingContext jssc, java.net.InetSocketAddress[] addresses, StorageLevel storageLevel, int maxBatchSize, int parallelism)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createPollingStream(JavaStreamingContext jssc, String hostname, int port)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createPollingStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static ReceiverInputDStream<SparkFlumeEvent> | createPollingStream(StreamingContext ssc, scala.collection.Seq<java.net.InetSocketAddress> addresses, StorageLevel storageLevel)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static ReceiverInputDStream<SparkFlumeEvent> | createPollingStream(StreamingContext ssc, scala.collection.Seq<java.net.InetSocketAddress> addresses, StorageLevel storageLevel, int maxBatchSize, int parallelism)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static ReceiverInputDStream<SparkFlumeEvent> | createPollingStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel)<br>Creates an input stream that is to be used with the Spark Sink deployed on a Flume agent. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createStream(JavaStreamingContext jssc, String hostname, int port)<br>Creates an input stream from a Flume source. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel)<br>Creates an input stream from a Flume source. |
| static JavaReceiverInputDStream<SparkFlumeEvent> | createStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel, boolean enableDecompression)<br>Creates an input stream from a Flume source. |
| static ReceiverInputDStream<SparkFlumeEvent> | createStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel)<br>Creates an input stream from a Flume source. |
| static ReceiverInputDStream<SparkFlumeEvent> | createStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel, boolean enableDecompression)<br>Creates an input stream from a Flume source. |
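The push-based createStream variants above can be wired into a streaming job as follows. This is a minimal sketch, not a definitive implementation: the application name, batch interval, host, and port are placeholder assumptions, and a Flume agent must separately be configured with an Avro sink pointing at the chosen address.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

public class FlumePushExample {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("FlumePushExample");
    // 2-second batch interval is an arbitrary choice for this sketch.
    JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(2000));

    // The receiver binds to this host:port; the Flume agent's Avro sink
    // must be configured to send to the same address.
    JavaReceiverInputDStream<SparkFlumeEvent> events =
        FlumeUtils.createStream(jssc, "localhost", 41414,
            StorageLevel.MEMORY_AND_DISK_SER_2());

    // Print the number of events received in each batch.
    events.count().print();

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Note that with the push model the receiver must be up and listening before the Flume agent starts sending, so the three-argument and four-argument createStream overloads differ only in whether the storage level is spelled out explicitly.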
public static ReceiverInputDStream<SparkFlumeEvent> createStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel)

Parameters:
- ssc - StreamingContext object
- hostname - Hostname of the slave machine to which the flume data will be sent
- port - Port of the slave machine to which the flume data will be sent
- storageLevel - Storage level to use for storing the received objects

public static ReceiverInputDStream<SparkFlumeEvent> createStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel, boolean enableDecompression)

Parameters:
- ssc - StreamingContext object
- hostname - Hostname of the slave machine to which the flume data will be sent
- port - Port of the slave machine to which the flume data will be sent
- storageLevel - Storage level to use for storing the received objects
- enableDecompression - Whether the Netty server should decompress the input stream

public static JavaReceiverInputDStream<SparkFlumeEvent> createStream(JavaStreamingContext jssc, String hostname, int port)

Parameters:
- hostname - Hostname of the slave machine to which the flume data will be sent
- port - Port of the slave machine to which the flume data will be sent

public static JavaReceiverInputDStream<SparkFlumeEvent> createStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel)

Parameters:
- hostname - Hostname of the slave machine to which the flume data will be sent
- port - Port of the slave machine to which the flume data will be sent
- storageLevel - Storage level to use for storing the received objects

public static JavaReceiverInputDStream<SparkFlumeEvent> createStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel, boolean enableDecompression)

Parameters:
- hostname - Hostname of the slave machine to which the flume data will be sent
- port - Port of the slave machine to which the flume data will be sent
- storageLevel - Storage level to use for storing the received objects
- enableDecompression - Whether the Netty server should decompress the input stream

public static ReceiverInputDStream<SparkFlumeEvent> createPollingStream(StreamingContext ssc, String hostname, int port, StorageLevel storageLevel)

Parameters:
- hostname - Address of the host on which the Spark Sink is running
- port - Port of the host at which the Spark Sink is listening
- storageLevel - Storage level to use for storing the received objects

public static ReceiverInputDStream<SparkFlumeEvent> createPollingStream(StreamingContext ssc, scala.collection.Seq<java.net.InetSocketAddress> addresses, StorageLevel storageLevel)

Parameters:
- addresses - List of InetSocketAddresses representing the hosts to connect to.
- storageLevel - Storage level to use for storing the received objects

public static ReceiverInputDStream<SparkFlumeEvent> createPollingStream(StreamingContext ssc, scala.collection.Seq<java.net.InetSocketAddress> addresses, StorageLevel storageLevel, int maxBatchSize, int parallelism)

Parameters:
- addresses - List of InetSocketAddresses representing the hosts to connect to.
- storageLevel - Storage level to use for storing the received objects
- maxBatchSize - Maximum number of events to be pulled from the Spark sink in a single RPC call
- parallelism - Number of concurrent requests this stream should send to the sink. Note that a higher number of concurrent requests will result in this stream using more threads

public static JavaReceiverInputDStream<SparkFlumeEvent> createPollingStream(JavaStreamingContext jssc, String hostname, int port)

Parameters:
- hostname - Hostname of the host on which the Spark Sink is running
- port - Port of the host at which the Spark Sink is listening

public static JavaReceiverInputDStream<SparkFlumeEvent> createPollingStream(JavaStreamingContext jssc, String hostname, int port, StorageLevel storageLevel)

Parameters:
- hostname - Hostname of the host on which the Spark Sink is running
- port - Port of the host at which the Spark Sink is listening
- storageLevel - Storage level to use for storing the received objects

public static JavaReceiverInputDStream<SparkFlumeEvent> createPollingStream(JavaStreamingContext jssc, java.net.InetSocketAddress[] addresses, StorageLevel storageLevel)

Parameters:
- addresses - List of InetSocketAddresses on which the Spark Sink is running.
- storageLevel - Storage level to use for storing the received objects

public static JavaReceiverInputDStream<SparkFlumeEvent> createPollingStream(JavaStreamingContext jssc, java.net.InetSocketAddress[] addresses, StorageLevel storageLevel, int maxBatchSize, int parallelism)

Parameters:
- addresses - List of InetSocketAddresses on which the Spark Sink is running
- storageLevel - Storage level to use for storing the received objects
- maxBatchSize - The maximum number of events to be pulled from the Spark sink in a single RPC call
- parallelism - Number of concurrent requests this stream should send to the sink. Note that a higher number of concurrent requests will result in this stream using more threads
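The pull-based createPollingStream variants can be sketched in the same way. The host names, ports, batch size, and parallelism below are placeholder assumptions; each address must point at a Spark Sink deployed on a Flume agent, as the method descriptions above require.

```java
import java.net.InetSocketAddress;

import org.apache.spark.SparkConf;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

public class FlumePollingExample {
  public static void main(String[] args) throws Exception {
    SparkConf conf = new SparkConf().setAppName("FlumePollingExample");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, new Duration(2000));

    // Addresses of the Spark Sinks running on the Flume agents;
    // both host:port pairs here are illustrative placeholders.
    InetSocketAddress[] sinks = {
        new InetSocketAddress("agent1.example.com", 9999),
        new InetSocketAddress("agent2.example.com", 9999)
    };

    // Pull at most 500 events per RPC call, using 3 concurrent requests;
    // higher parallelism means this stream uses more threads.
    JavaReceiverInputDStream<SparkFlumeEvent> events =
        FlumeUtils.createPollingStream(jssc, sinks,
            StorageLevel.MEMORY_AND_DISK_SER_2(), 500, 3);

    // Print the number of events received in each batch.
    events.count().print();

    jssc.start();
    jssc.awaitTermination();
  }
}
```

Compared with createStream, the polling model is more robust: events stay buffered in the Spark Sink's channel until the stream pulls and reliably stores them, so a receiver restart does not drop data.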