This class provides a complete interface to CSV files and data. It offers tools to enable you to read and write to and from Strings or IO objects, as needed.
The most generic interface of the library is:
  csv = CSV.new(string_or_io, **options)

  # Reading: IO object should be open for read
  csv.read  # => array of rows
  # or
  csv.each do |row|
    # ...
  end
  # or
  row = csv.shift

  # Writing: IO object should be open for write
  csv << row
There are several specialized class methods for one-statement reading or writing, described in the Specialized Methods section.
If a String is passed into ::new, it is internally wrapped into a StringIO object.

options can be used for specifying the particular CSV flavor (column separators, row separators, value quoting and so on), and for data conversion; see the Data Conversion section for a description of the latter.
  # From a file: all at once
  arr_of_rows = CSV.read("path/to/file.csv", **options)

  # iterator-style:
  CSV.foreach("path/to/file.csv", **options) do |row|
    # ...
  end

  # From a string
  arr_of_rows = CSV.parse("CSV,data,String", **options)
  # or
  CSV.parse("CSV,data,String", **options) do |row|
    # ...
  end
  # To a file
  CSV.open("path/to/file.csv", "wb") do |csv|
    csv << ["row", "of", "CSV", "data"]
    csv << ["another", "row"]
    # ...
  end

  # To a String
  csv_string = CSV.generate do |csv|
    csv << ["row", "of", "CSV", "data"]
    csv << ["another", "row"]
    # ...
  end
  # Core extensions for converting one line
  csv_string = ["CSV", "data"].to_csv   # to CSV
  csv_array  = "CSV,String".parse_csv   # from CSV

  # CSV() method
  CSV           { |csv_out| csv_out << %w{my data here} }  # to $stdout
  CSV(csv = "") { |csv_str| csv_str << %w{my data here} }  # to a String
  CSV($stderr)  { |csv_err| csv_err << %w{my data here} }  # to $stderr
  CSV($stdin)   { |csv_in|  csv_in.each { |row| p row } }  # from $stdin
CSV with headers

CSV allows you to specify column names of a CSV file, whether they are in the data or provided separately. If headers are specified, reading methods return an instance of CSV::Table, consisting of CSV::Row objects.
  # Headers are part of data
  data = CSV.parse(<<~ROWS, headers: true)
    Name,Department,Salary
    Bob,Engineering,1000
    Jane,Sales,2000
    John,Management,5000
  ROWS

  data.class      #=> CSV::Table
  data.first      #=> #<CSV::Row "Name":"Bob" "Department":"Engineering" "Salary":"1000">
  data.first.to_h #=> {"Name"=>"Bob", "Department"=>"Engineering", "Salary"=>"1000"}

  # Headers provided by developer
  data = CSV.parse('Bob,Engineering,1000', headers: %i[name department salary])
  data.first      #=> #<CSV::Row name:"Bob" department:"Engineering" salary:"1000">
Data Conversion

CSV allows you to provide a set of data converters, i.e. transformations to try on input data. A converter can be a symbol from the CSV::Converters constant's keys, or a lambda.
  # Without any converters:
  CSV.parse('Bob,2018-03-01,100')
  #=> [["Bob", "2018-03-01", "100"]]

  # With built-in converters:
  CSV.parse('Bob,2018-03-01,100', converters: %i[numeric date])
  #=> [["Bob", #<Date: 2018-03-01>, 100]]

  # With custom converters:
  CSV.parse('Bob,2018-03-01,100', converters: [->(v) { Time.parse(v) rescue v }])
  #=> [["Bob", 2018-03-01 00:00:00 +0200, "100"]]
CSV and Character Encodings (M17n or Multilingualization)

This new CSV parser is m17n savvy. The parser works in the Encoding of the IO or String object being read from or written to. Your data is never transcoded (unless you ask Ruby to transcode it for you) and will literally be parsed in the Encoding it is in. Thus CSV will return Arrays or Rows of Strings in the Encoding of your data. This is accomplished by transcoding the parser itself into your Encoding.
Some transcoding must take place, of course, to accomplish this multiencoding support. For example, :col_sep, :row_sep, and :quote_char must be transcoded to match your data. Hopefully this makes the entire process feel transparent, since CSV's defaults should just magically work for your data. However, you can set these values manually in the target Encoding to avoid the translation.
It's also important to note that while all of CSV's core parser is now Encoding agnostic, some features are not. For example, the built-in converters will try to transcode data to UTF-8 before making conversions. Again, you can provide custom converters that are aware of your Encodings to avoid this translation. It's just too hard for me to support native conversions in all of Ruby's Encodings.
Anyway, the practical side of this is simple: make sure IO and String objects passed into CSV have the proper Encoding set and everything should just work. CSV methods that allow you to open IO objects (CSV::foreach(), CSV::open(), CSV::read(), and CSV::readlines()) do allow you to specify the Encoding.
One minor exception comes when generating CSV into a String with an Encoding that is not ASCII compatible. There's no existing data for CSV to use to prepare itself and thus you will probably need to manually specify the desired Encoding for most of those cases. It will try to guess using the fields in a row of output though, when using CSV::generate_line() or Array#to_csv().
I try to point out any other Encoding issues in the documentation of methods as they come up.
This has been tested to the best of my ability with all non-“dummy” Encodings Ruby ships with. However, it is brave new code and may have some bugs. Please feel free to report any issues you find with it.
The encoding used by all converters.
This Hash holds the built-in converters of CSV that can be accessed by name. You can select Converters with CSV.convert() or through the options Hash passed to CSV::new().
:integer
Converts any field Integer() accepts.
:float
Converts any field Float() accepts.
:numeric
A combination of :integer and :float.
:date
Converts any field Date::parse() accepts.
:date_time
Converts any field DateTime::parse() accepts.
:all
All built-in converters. A combination of :date_time and :numeric.
All built-in converters transcode field data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the field will remain unchanged.
This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.
To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.
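For example, a custom converter and a combo entry could be registered like this (a minimal sketch; the :upcase and :upcase_numeric names are illustrative, not part of the library):

  CSV::Converters[:upcase] = ->(field) { field.respond_to?(:upcase) ? field.upcase : field }
  CSV::Converters[:upcase_numeric] = [:upcase, :numeric]   # a combo entry

  CSV.parse("abc,123", converters: :upcase_numeric)
  # => [["ABC", 123]]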
The options used when no overrides are given by calling code. They are:
:col_sep            ","
:row_sep            :auto
:quote_char         '"'
:field_size_limit   nil
:converters         nil
:unconverted_fields nil
:headers            false
:return_headers     false
:header_converters  nil
:skip_blanks        false
:force_quotes       false
:skip_lines         nil
:liberal_parsing    false
:quote_empty        true
A Regexp used to find and convert some common Date formats.
A Regexp used to find and convert some common DateTime formats.
A FieldInfo Struct contains details about a field's position in the data source it was read from. CSV will pass this Struct to some blocks that make decisions based on field structure. See CSV.convert_fields() for an example.
index
The zero-based index of the field in its row.
line
The line of the data source this row is from.
header
The header for the column, when available.
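A converter that takes two arguments receives this Struct as its second argument. A minimal sketch (the header name "Salary" and the data are illustrative):

  to_int_salary = lambda do |field, info|
    info.header == "Salary" ? Integer(field) : field
  end

  table = CSV.parse("Name,Salary\nBob,1000", headers: true,
                    converters: [to_int_salary])
  table.first["Salary"]  # => 1000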
This Hash holds the built-in header converters of CSV that can be accessed by name. You can select HeaderConverters with CSV.header_convert() or through the options Hash passed to CSV::new().
:downcase
Calls downcase() on the header String.
:symbol
Leading/trailing spaces are dropped, the string is downcased, remaining spaces are replaced with underscores, non-word characters are dropped, and finally to_sym() is called.
All built-in header converters transcode header data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the header will remain unchanged.
This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.
To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.
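For example, a custom header converter and a combo entry could be added like this (a sketch; the :trim and :tidy names are illustrative, not built in):

  CSV::HeaderConverters[:trim] = ->(header) { header.strip }
  CSV::HeaderConverters[:tidy] = [:trim, :symbol]   # a combo entry

  CSV.parse(" First Name ,Age\nBob,35", headers: true,
            header_converters: :tidy).headers
  # => [:first_name, :age]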
The version of the installed library.
The Encoding CSV is parsing or writing in. This will be the Encoding you receive parsed data in and/or the Encoding data will be written in.
This method is a convenience for building Unix-like filters for CSV data. Each row is yielded to the provided block which can alter it as needed. After the block returns, the row is appended to output, altered or not.
The input and output arguments can be anything CSV::new() accepts (generally String or IO objects). If not given, they default to ARGF and $stdout.
The options parameter is also filtered down to CSV::new() after some clever key parsing. Any key beginning with :in_ or :input_ will have that leading identifier stripped and will only be used in the options Hash for the input object. Keys starting with :out_ or :output_ affect only output. All other keys are assigned to both objects.
The :output_row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/).

  # File csv.rb, line 469
  def filter(input=nil, output=nil, **options)
    # parse options for input, output, or both
    in_options, out_options = Hash.new, {row_sep: $INPUT_RECORD_SEPARATOR}
    options.each do |key, value|
      case key.to_s
      when /\Ain(?:put)?_(.+)\Z/
        in_options[$1.to_sym] = value
      when /\Aout(?:put)?_(.+)\Z/
        out_options[$1.to_sym] = value
      else
        in_options[key]  = value
        out_options[key] = value
      end
    end

    # build input and output wrappers
    input  = new(input  || ARGF,    **in_options)
    output = new(output || $stdout, **out_options)

    # read, yield, write
    input.each do |row|
      yield row
      output << row
    end
  end
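A brief usage sketch (the input String, output String, and :out_col_sep choice are illustrative):

  out = ""
  CSV.filter("name,score\nbob,5\n", out, out_col_sep: "\t") do |row|
    row.map! { |field| field.upcase }
  end
  out  # => "NAME\tSCORE\nBOB\t5\n"

Here :out_col_sep is stripped down to :col_sep and applied only to the output wrapper.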
This method is intended as the primary interface for reading CSV files. You pass a path and any options you wish to set for the read. Each row of the file will be passed to the provided block in turn.
The options parameter can be anything CSV::new() understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.

  # File csv.rb, line 508
  def foreach(path, mode="r", **options, &block)
    return to_enum(__method__, path, mode, **options) unless block_given?
    open(path, mode, **options) do |csv|
      csv.each(&block)
    end
  end
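A brief usage sketch (the path and header name are illustrative):

  CSV.foreach("people.csv", headers: true) do |row|
    puts row["Name"]
  end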
This method wraps a String you provide, or an empty default String, in a CSV object which is passed to the provided block. You can use the block to append CSV rows to the String and when the block exits, the final String will be returned.
Note that a passed String is modified by this method. Call dup() before passing if you need a new String.
The options parameter can be anything CSV::new() understands. This method understands an additional :encoding parameter when not passed a String, to set the base Encoding for the output. CSV needs this hint if you plan to output non-ASCII compatible data.

  # File csv.rb, line 533
  def generate(str=nil, **options)
    # add a default empty String, if none was given
    if str
      str = StringIO.new(str)
      str.seek(0, IO::SEEK_END)
    else
      encoding = options[:encoding]
      str = +""
      str.force_encoding(encoding) if encoding
    end
    csv = new(str, **options) # wrap
    yield csv                 # yield for appending
    csv.string                # return final String
  end
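A brief usage sketch:

  csv_string = CSV.generate do |csv|
    csv << ["name", "age"]
    csv << ["Bob", 35]
  end
  csv_string  # => "name,age\nBob,35\n"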
This method is a shortcut for converting a single row (Array) into a CSV String.

The options parameter can be anything CSV::new() understands. This method understands an additional :encoding parameter to set the base Encoding for the output. This method will try to guess your Encoding from the first non-nil field in row, if possible, but you may need to use this parameter as a backup plan.

The :row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/) when calling this method.

  # File csv.rb, line 561
  def generate_line(row, **options)
    options = {row_sep: $INPUT_RECORD_SEPARATOR}.merge(options)
    str = +""
    if options[:encoding]
      str.force_encoding(options[:encoding])
    elsif field = row.find {|f| f.is_a?(String)}
      str.force_encoding(field.encoding)
    end
    (new(str, **options) << row).string
  end
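A brief usage sketch:

  CSV.generate_line(["Bob", 35, nil])        # => "Bob,35,\n"
  CSV.generate_line(%w[a b], col_sep: ";")   # => "a;b\n"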
This method will return a CSV instance, just like CSV::new(), but the instance will be cached and returned for all future calls to this method for the same data object (tested by Object#object_id()) with the same options.
If a block is given, the instance is passed to the block and the return value becomes the return value of the block.
  # File csv.rb, line 429
  def instance(data = $stdout, **options)
    # create a _signature_ for this method call, data object and options
    sig = [data.object_id] +
          options.values_at(*DEFAULT_OPTIONS.keys.sort_by { |sym| sym.to_s })

    # fetch or create the instance for this signature
    @@instances ||= Hash.new
    instance = (@@instances[sig] ||= new(data, **options))

    if block_given?
      yield instance # run block, if given, returning result
    else
      instance       # or return the instance
    end
  end
This constructor will wrap either a String or IO object passed in data for reading and/or writing. In addition to the CSV instance methods, several IO methods are delegated. (See CSV::open() for a complete list.) If you pass a String for data, you can later retrieve it (after writing to it, for example) with CSV.string().

Note that a wrapped String will be positioned at the beginning (for reading). If you want it at the end (for writing), use CSV::generate(). If you want any other positioning, pass a preset StringIO object instead.

You may set any reading and/or writing preferences in the options Hash. Available options are:
:col_sep
The String placed between each field. This String will be transcoded into the data's Encoding before parsing.
:row_sep
The String appended to the end of each row. This can be set to the special :auto setting, which requests that CSV automatically discover this from the data. Auto-discovery reads ahead in the data looking for the next "\r\n", "\n", or "\r" sequence. A sequence will be selected even if it occurs in a quoted field, assuming that you would have the same line endings there. If none of those sequences is found, if data is ARGF, STDIN, STDOUT, or STDERR, or if the stream is only available for output, the default $INPUT_RECORD_SEPARATOR ($/) is used. Obviously, discovery takes a little time. Set this manually if speed is important. Also note that IO objects should be opened in binary mode on Windows if this feature will be used, as the line-ending translation can cause problems with resetting the document position to where it was before the read ahead. This String will be transcoded into the data's Encoding before parsing.
:quote_char
The character used to quote fields. This has to be a single-character String. This is useful for applications that incorrectly use ' as the quote character instead of the correct ". CSV will always consider a double sequence of this character to be an escaped quote. This String will be transcoded into the data's Encoding before parsing.
:field_size_limit
This is a maximum size CSV will read ahead looking for the closing quote for a field. (In truth, it reads to the first line ending beyond this size.) If a quote cannot be found within the limit CSV will raise a MalformedCSVError, assuming the data is faulty. You can use this limit to prevent what are effectively DoS attacks on the parser. However, this limit can cause a legitimate parse to fail and thus is set to nil, or off, by default.
:converters
An Array of names from the Converters Hash and/or lambdas that handle custom conversion. A single converter doesn't have to be in an Array. All built-in converters try to transcode fields to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the field unchanged.
:unconverted_fields
If set to true, an unconverted_fields() method will be added to all returned rows (Array or CSV::Row) that will return the fields as they were before conversion. Note that :headers supplied by Array or String were not fields of the document and thus will have an empty Array attached.
:headers
If set to :first_row or true, the initial row of the CSV file will be treated as a row of headers. If set to an Array, the contents will be used as the headers. If set to a String, the String is run through a call of CSV::parse_line() with the same :col_sep, :row_sep, and :quote_char as this instance to produce an Array of headers. This setting causes CSV#shift() to return rows as CSV::Row objects instead of Arrays and CSV#read() to return CSV::Table objects instead of an Array of Arrays.
:return_headers
When false, header rows are silently swallowed. If set to true, header rows are returned in a CSV::Row object with identical headers and fields (save that the fields do not go through the converters).
:write_headers
When true and :headers is set, a header row will be added to the output.
:header_converters
Identical in functionality to :converters save that the conversions are only made to header rows. All built-in converters try to transcode headers to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the header unchanged.
:skip_blanks
When set to a true value, CSV will skip over any empty rows. Note that this setting will not skip rows that contain column separators, even if the rows contain no actual data. If you want to skip rows that contain separators but no content, consider using :skip_lines, or inspecting fields.compact.empty? on each row.
:force_quotes
When set to a true value, CSV will quote all CSV fields it creates.
:skip_lines
When set to an object responding to match, every line matching it is considered a comment and ignored during parsing. When set to a String, it is first converted to a Regexp. When set to nil, no line is considered a comment. If the passed object does not respond to match, an ArgumentError is thrown.
:liberal_parsing
When set to a true value, CSV will attempt to parse input not conformant with RFC 4180, such as double quotes in unquoted fields.
:nil_value
When set to an object, any empty field value is replaced by that object instead of nil.
:empty_value
When set to an object, any blank-string field value is replaced by that object.
:quote_empty
When set to a true value, CSV will quote empty values with double quotes. When false, CSV will emit an empty string for an empty field value.
:write_converters
Converts values on each line with the specified Proc object(s), which receive a String value and return a String or nil value. When an array is specified, each converter will be applied in order.
:write_nil_value
When a String value, nil value(s) on each line will be replaced with the specified value.
:write_empty_value
When a String or nil value, empty value(s) on each line will be replaced with the specified value.
:strip
When set to a true value, CSV will strip whitespace ("\t", "\r", "\n", "\f", "\v") around the values. If you specify a String instead of true, CSV will strip that String from around the values; the String's length must be 1.
See CSV::DEFAULT_OPTIONS for the default settings.
Options cannot be overridden in the instance methods for performance reasons, so be sure to set what you want here.
  # File csv.rb, line 921
  def initialize(data,
                 col_sep: ",",
                 row_sep: :auto,
                 quote_char: '"',
                 field_size_limit: nil,
                 converters: nil,
                 unconverted_fields: nil,
                 headers: false,
                 return_headers: false,
                 write_headers: nil,
                 header_converters: nil,
                 skip_blanks: false,
                 force_quotes: false,
                 skip_lines: nil,
                 liberal_parsing: false,
                 internal_encoding: nil,
                 external_encoding: nil,
                 encoding: nil,
                 nil_value: nil,
                 empty_value: "",
                 quote_empty: true,
                 write_converters: nil,
                 write_nil_value: nil,
                 write_empty_value: "",
                 strip: false)
    raise ArgumentError.new("Cannot parse nil as CSV") if data.nil?

    if data.is_a?(String)
      @io = StringIO.new(data)
      @io.set_encoding(encoding || data.encoding)
    else
      @io = data
    end
    @encoding = determine_encoding(encoding, internal_encoding)

    @base_fields_converter_options = {
      nil_value: nil_value,
      empty_value: empty_value,
    }
    @write_fields_converter_options = {
      nil_value: write_nil_value,
      empty_value: write_empty_value,
    }
    @initial_converters = converters
    @initial_header_converters = header_converters
    @initial_write_converters = write_converters

    @parser_options = {
      column_separator: col_sep,
      row_separator: row_sep,
      quote_character: quote_char,
      field_size_limit: field_size_limit,
      unconverted_fields: unconverted_fields,
      headers: headers,
      return_headers: return_headers,
      skip_blanks: skip_blanks,
      skip_lines: skip_lines,
      liberal_parsing: liberal_parsing,
      encoding: @encoding,
      nil_value: nil_value,
      empty_value: empty_value,
      strip: strip,
    }
    @parser = nil
    @parser_enumerator = nil
    @eof_error = nil

    @writer_options = {
      encoding: @encoding,
      force_encoding: (not encoding.nil?),
      force_quotes: force_quotes,
      headers: headers,
      write_headers: write_headers,
      column_separator: col_sep,
      row_separator: row_sep,
      quote_character: quote_char,
      quote_empty: quote_empty,
    }

    @writer = nil
    writer if @writer_options[:write_headers]
  end
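A brief usage sketch, wrapping an in-memory String (the column separator and data are illustrative):

  csv = CSV.new("name;age\nBob;35\n", col_sep: ";", headers: true,
                converters: :numeric)
  table = csv.read      # => a CSV::Table, because headers are enabled
  table.first["age"]    # => 35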
This method opens an IO object, and wraps that with CSV. This is intended as the primary interface for writing a CSV file.

You must pass a filename and may optionally add a mode for Ruby's open(). You may also pass an optional Hash containing any options CSV::new() understands as the final argument.

This method works like Ruby's open() call, in that it will pass a CSV object to a provided block and close it when the block terminates, or it will return the CSV object when no block is provided. (Note: This is different from the Ruby 1.8 CSV library which passed rows to the block. Use CSV::foreach() for that behavior.)

You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.

An opened CSV object will delegate to many IO methods for convenience. You may call:
binmode()
binmode?()
close()
close_read()
close_write()
closed?()
eof()
eof?()
external_encoding()
fcntl()
fileno()
flock()
flush()
fsync()
internal_encoding()
ioctl()
isatty()
path()
pid()
pos()
pos=()
reopen()
seek()
stat()
sync()
sync=()
tell()
to_i()
to_io()
truncate()
tty?()
  # File csv.rb, line 635
  def open(filename, mode="r", **options)
    # wrap a File opened with the remaining +args+ with no newline
    # decorator
    file_opts = {universal_newline: false}.merge(options)
    begin
      f = File.open(filename, mode, **file_opts)
    rescue ArgumentError => e
      raise unless /needs binmode/.match?(e.message) and mode == "r"
      mode = "rb"
      file_opts = {encoding: Encoding.default_external}.merge(file_opts)
      retry
    end
    begin
      csv = new(f, **options)
    rescue Exception
      f.close
      raise
    end

    # handle blocks like Ruby's open(), not like the CSV library
    if block_given?
      begin
        yield csv
      ensure
        csv.close
      end
    else
      csv
    end
  end
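A brief usage sketch (the file name and data are illustrative):

  # Writing
  CSV.open("out.csv", "wb") do |csv|
    csv << ["name", "age"]
    csv << ["Bob", 35]
  end

  # Reading
  CSV.open("out.csv", "r", headers: true) do |csv|
    csv.each { |row| puts row["name"] }
  end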
This method can be used to easily parse CSV out of a String. You may either provide a block which will be called with each row of the String in turn, or just use the returned Array of Arrays (when no block is given).

You pass your str to read from, and an optional options containing anything CSV::new() understands.

  # File csv.rb, line 679
  def parse(str, **options, &block)
    csv = new(str, **options)

    return csv.each(&block) if block_given?

    # slurp contents, if no block is given
    begin
      csv.read
    ensure
      csv.close
    end
  end
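A brief usage sketch:

  CSV.parse("a,b\n1,2\n")                  # => [["a", "b"], ["1", "2"]]
  CSV.parse("a,b\n1,2\n") { |row| p row }  # each row is yielded in turn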
This method is a shortcut for converting a single line of a CSV String into an Array. Note that if line contains multiple rows, anything beyond the first row is ignored.

The options parameter can be anything CSV::new() understands.

  # File csv.rb, line 699
  def parse_line(line, **options)
    new(line, **options).shift
  end
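A brief usage sketch:

  CSV.parse_line("a,b,c")                # => ["a", "b", "c"]
  CSV.parse_line("a;b;c", col_sep: ";")  # => ["a", "b", "c"]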
Use to slurp a CSV file into an Array of Arrays. Pass the path to the file and any options CSV::new() understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.

  # File csv.rb, line 714
  def read(path, **options)
    open(path, **options) { |csv| csv.read }
  end
Alias for CSV::read().

  # File csv.rb, line 719
  def readlines(path, **options)
    read(path, **options)
  end
A shortcut for:
  CSV.read( path, { headers:           true,
                    converters:        :numeric,
                    header_converters: :symbol }.merge(options) )

  # File csv.rb, line 730
  def table(path, **options)
    default_options = {
      headers:           true,
      converters:        :numeric,
      header_converters: :symbol,
    }
    options = default_options.merge(options)
    read(path, **options)
  end
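A brief usage sketch (the file name and contents are illustrative; assume people.csv contains "Name,Age\nBob,35"):

  table = CSV.table("people.csv")
  table.headers    # => [:name, :age]
  table[0][:age]   # => 35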
The primary write method for wrapped Strings and IOs, row (an Array or CSV::Row) is converted to CSV and appended to the data source. When a CSV::Row is passed, only the row's fields() are appended to the output.
The data source must be open for writing.
  # File csv.rb, line 1229
  def <<(row)
    writer << row
    self
  end

  # File csv.rb, line 1161
  def binmode?
    if @io.respond_to?(:binmode?)
      @io.binmode?
    else
      false
    end
  end

The encoded :col_sep used in parsing and writing. See CSV::new for details.

  # File csv.rb, line 1008
  def col_sep
    parser.column_separator
  end
You can use this method to install a CSV::Converters built-in, or provide a block that handles a custom conversion.

If you provide a block that takes one argument, it will be passed the field and is expected to return the converted value or the field itself. If your block takes two arguments, it will also be passed a CSV::FieldInfo Struct, containing details about the field. Again, the block should return a converted field or the field itself.

  # File csv.rb, line 1251
  def convert(name = nil, &converter)
    parser_fields_converter.add_converter(name, &converter)
  end
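A brief usage sketch (the data is illustrative):

  csv = CSV.new("Bob,2018-03-01\n")
  csv.convert(:date)                 # install a built-in converter
  csv.convert do |field|             # install a custom converter
    field.is_a?(String) ? field.strip : field
  end
  csv.shift  # => ["Bob", #<Date: 2018-03-01>]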
Returns the current list of converters in effect. See CSV::new for details. Built-in converters will be returned by name, while others will be returned as is.

  # File csv.rb, line 1049
  def converters
    parser_fields_converter.map do |converter|
      name = Converters.rassoc(converter)
      name ? name.first : converter
    end
  end
Yields each row of the data source in turn.
Support for Enumerable.
The data source must be open for reading.
  # File csv.rb, line 1279
  def each(&block)
    parser_enumerator.each(&block)
  end

  # File csv.rb, line 1197
  def eof?
    return false if @eof_error
    begin
      parser_enumerator.peek
      false
    rescue MalformedCSVError => error
      @eof_error = error
      false
    rescue StopIteration
      true
    end
  end
The limit for field size, if any. See CSV::new for details.

  # File csv.rb, line 1032
  def field_size_limit
    parser.field_size_limit
  end

  # File csv.rb, line 1169
  def flock(*args)
    raise NotImplementedError unless @io.respond_to?(:flock)
    @io.flock(*args)
  end

Returns true if all output fields are quoted. See CSV::new for details.

  # File csv.rb, line 1117
  def force_quotes?
    @writer_options[:force_quotes]
  end
Identical to CSV#convert(), but for header rows.
Note that this method must be called before header rows are read to have any effect.
  # File csv.rb, line 1266
  def header_convert(name = nil, &converter)
    header_fields_converter.add_converter(name, &converter)
  end
Returns the current list of converters in effect for headers. See CSV::new for details. Built-in converters will be returned by name, while others will be returned as is.

  # File csv.rb, line 1101
  def header_converters
    header_fields_converter.map do |converter|
      name = HeaderConverters.rassoc(converter)
      name ? name.first : converter
    end
  end

Returns true if the next row read will be a header row.

  # File csv.rb, line 1299
  def header_row?
    parser.header_row?
  end
Returns nil if headers will not be used, true if they will but have not yet been read, or the actual headers after they have been read. See CSV::new for details.

  # File csv.rb, line 1069
  def headers
    if @writer
      @writer.headers
    else
      parsed_headers = parser.headers
      return parsed_headers if parsed_headers
      raw_headers = @parser_options[:headers]
      raw_headers = nil if raw_headers == false
      raw_headers
    end
  end

Returns a simplified description of the key CSV attributes in an ASCII compatible String.

  # File csv.rb, line 1328
  def inspect
    str = ["#<", self.class.to_s, " io_type:"]
    # show type of wrapped IO
    if    @io == $stdout then str << "$stdout"
    elsif @io == $stdin  then str << "$stdin"
    elsif @io == $stderr then str << "$stderr"
    else                      str << @io.class.to_s
    end
    # show IO.path(), if available
    if @io.respond_to?(:path) and (p = @io.path)
      str << " io_path:" << p.inspect
    end
    # show encoding
    str << " encoding:" << @encoding.name
    # show other attributes
    ["lineno", "col_sep", "row_sep", "quote_char"].each do |attr_name|
      if a = __send__(attr_name)
        str << " " << attr_name << ":" << a.inspect
      end
    end
    ["skip_blanks", "liberal_parsing"].each do |attr_name|
      if a = __send__("#{attr_name}?")
        str << " " << attr_name << ":" << a.inspect
      end
    end
    _headers = headers
    str << " headers:" << _headers.inspect if _headers
    str << ">"
    begin
      str.join('')
    rescue  # any encoding error
      str.map do |s|
        e = Encoding::Converter.asciicompat_encoding(s.encoding)
        e ? s.encode(e) : s.force_encoding("ASCII-8BIT")
      end.join('')
    end
  end
  # File csv.rb, line 1174
  def ioctl(*args)
    raise NotImplementedError unless @io.respond_to?(:ioctl)
    @io.ioctl(*args)
  end

Returns true if illegal input is handled. See CSV::new for details.

  # File csv.rb, line 1122
  def liberal_parsing?
    parser.liberal_parsing?
  end
The last row read from this file.
  # File csv.rb, line 1147
  def line
    parser.line
  end
The line number of the last row read from this file. Fields with nested line-end characters will not affect this count.
  # File csv.rb, line 1136
  def lineno
    if @writer
      @writer.lineno
    else
      parser.lineno
    end
  end

  # File csv.rb, line 1179
  def path
    @io.path if @io.respond_to?(:path)
  end
The encoded :quote_char used in parsing and writing. See CSV::new for details.

  # File csv.rb, line 1024
  def quote_char
    parser.quote_character
  end
Slurps the remaining rows and returns an Array of Arrays.
The data source must be open for reading.
  # File csv.rb, line 1288
  def read
    rows = to_a
    if parser.use_headers?
      Table.new(rows, headers: parser.headers)
    else
      rows
    end
  end

Returns true if headers will be returned as a row of results. See CSV::new for details.

  # File csv.rb, line 1084
  def return_headers?
    parser.return_headers?
  end
Rewinds the underlying IO object and resets CSV's lineno() counter.
  # File csv.rb, line 1212
  def rewind
    @parser = nil
    @parser_enumerator = nil
    @eof_error = nil
    @writer.rewind if @writer
    @io.rewind
  end

The encoded :row_sep used in parsing and writing. See CSV::new for details.

  # File csv.rb, line 1016
  def row_sep
    parser.row_separator
  end
The primary read method for wrapped Strings and IOs, a single row is pulled from the data source, parsed and returned as an Array of fields (if header rows are not used) or a CSV::Row (when header rows are used).
The data source must be open for reading.
  # File csv.rb, line 1310
  def shift
    if @eof_error
      eof_error, @eof_error = @eof_error, nil
      raise eof_error
    end
    begin
      parser_enumerator.next
    rescue StopIteration
      nil
    end
  end
Returns true if blank lines are skipped by the parser. See CSV::new for details.

  # File csv.rb, line 1112
  def skip_blanks?
    parser.skip_blanks?
  end
The regex marking a line as a comment. See CSV::new for details.

  # File csv.rb, line 1040
  def skip_lines
    parser.skip_lines
  end

  # File csv.rb, line 1183
  def stat(*args)
    raise NotImplementedError unless @io.respond_to?(:stat)
    @io.stat(*args)
  end

  # File csv.rb, line 1188
  def to_i
    raise NotImplementedError unless @io.respond_to?(:to_i)
    @io.to_i
  end

  # File csv.rb, line 1193
  def to_io
    @io.respond_to?(:to_io) ? @io.to_io : @io
  end
Returns true if unconverted_fields() will be added to parsed results. See CSV::new for details.

  # File csv.rb, line 1060
  def unconverted_fields?
    parser.unconverted_fields?
  end
Returns true if headers are written in output. See CSV::new for details.

  # File csv.rb, line 1092
  def write_headers?
    @writer_options[:write_headers]
  end