public class DefaultSource extends java.lang.Object implements org.apache.spark.sql.execution.datasources.FileFormat, DataSourceRegister
The `libsvm` package implements the Spark SQL data source API for loading LIBSVM data as a DataFrame.
The loaded DataFrame has two columns: `label`, containing labels stored as doubles, and
`features`, containing feature vectors stored as Vectors.
To use the LIBSVM data source, set "libsvm" as the format in `DataFrameReader` and
optionally specify options, for example:
```scala
// Scala
val df = spark.read.format("libsvm")
  .option("numFeatures", "780")
  .load("data/mllib/sample_libsvm_data.txt")
```

```java
// Java
Dataset<Row> df = spark.read().format("libsvm")
  .option("numFeatures", "780")
  .load("data/mllib/sample_libsvm_data.txt");
```
LIBSVM data source supports the following options:

- `numFeatures`: number of features. If unspecified or nonpositive, the number of features is determined automatically at the cost of one additional pass over the data. Specifying it is also useful when the dataset is already split into multiple files and you want to load them separately, because some features may not be present in certain files, which would otherwise lead to inconsistent feature dimensions.
- `vectorType`: feature vector type, `"sparse"` (default) or `"dense"`.
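As a sketch of how these options combine when loading pre-split data (the file paths below are placeholders, not files shipped with Spark):

```scala
// Hypothetical example: load two pre-split LIBSVM files with a fixed
// feature dimension so their schemas agree, using dense vectors.
val train = spark.read.format("libsvm")
  .option("numFeatures", "780")   // pin the dimension across both files
  .option("vectorType", "dense")  // store features as dense Vectors
  .load("data/part-train.txt")    // placeholder path

val test = spark.read.format("libsvm")
  .option("numFeatures", "780")   // must match the training split
  .option("vectorType", "dense")
  .load("data/part-test.txt")     // placeholder path
```

Without the shared `numFeatures` setting, each file's dimension would be inferred independently and could disagree if a feature index never occurs in one of the splits.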
| Constructor and Description |
|---|
| `DefaultSource()` |
| Modifier and Type | Method and Description |
|---|---|
| `scala.Function1<org.apache.spark.sql.execution.datasources.PartitionedFile,scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>>` | `buildReader(SparkSession sparkSession, StructType dataSchema, StructType partitionSchema, StructType requiredSchema, scala.collection.Seq<Filter> filters, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, org.apache.hadoop.conf.Configuration hadoopConf)` |
| `scala.Option<StructType>` | `inferSchema(SparkSession sparkSession, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, scala.collection.Seq<org.apache.hadoop.fs.FileStatus> files)` |
| `scala.collection.immutable.Map<java.lang.String,java.lang.String>` | `prepareRead(SparkSession sparkSession, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, scala.collection.Seq<org.apache.hadoop.fs.FileStatus> files)` |
| `org.apache.spark.sql.execution.datasources.OutputWriterFactory` | `prepareWrite(SparkSession sparkSession, org.apache.hadoop.mapreduce.Job job, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, StructType dataSchema)` |
| `java.lang.String` | `shortName()` The string that represents the format that this data source provider uses. |
| `java.lang.String` | `toString()` |
public java.lang.String shortName()

The string that represents the format that this data source provider uses, for example:

```scala
override def shortName(): String = "parquet"
```

Specified by:
shortName in interface DataSourceRegister

public java.lang.String toString()

Overrides:
toString in class java.lang.Object

public scala.Option<StructType> inferSchema(SparkSession sparkSession, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, scala.collection.Seq<org.apache.hadoop.fs.FileStatus> files)

Specified by:
inferSchema in interface org.apache.spark.sql.execution.datasources.FileFormat

public scala.collection.immutable.Map<java.lang.String,java.lang.String> prepareRead(SparkSession sparkSession, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, scala.collection.Seq<org.apache.hadoop.fs.FileStatus> files)

Specified by:
prepareRead in interface org.apache.spark.sql.execution.datasources.FileFormat

public org.apache.spark.sql.execution.datasources.OutputWriterFactory prepareWrite(SparkSession sparkSession, org.apache.hadoop.mapreduce.Job job, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, StructType dataSchema)

Specified by:
prepareWrite in interface org.apache.spark.sql.execution.datasources.FileFormat

public scala.Function1<org.apache.spark.sql.execution.datasources.PartitionedFile,scala.collection.Iterator<org.apache.spark.sql.catalyst.InternalRow>> buildReader(SparkSession sparkSession, StructType dataSchema, StructType partitionSchema, StructType requiredSchema, scala.collection.Seq<Filter> filters, scala.collection.immutable.Map<java.lang.String,java.lang.String> options, org.apache.hadoop.conf.Configuration hadoopConf)

Specified by:
buildReader in interface org.apache.spark.sql.execution.datasources.FileFormat