Maintenance of Ruby 2.0.0 ended on February 24, 2016.
This class provides a complete interface to CSV files and data. It offers tools to enable you to read and write to and from Strings or IO objects, as needed.
CSV.foreach("path/to/file.csv") do |row|
  # use row here...
end
arr_of_arrs = CSV.read("path/to/file.csv")
CSV.parse("CSV,data,String") do |row|
  # use row here...
end
arr_of_arrs = CSV.parse("CSV,data,String")
CSV.open("path/to/file.csv", "wb") do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end
csv_string = CSV.generate do |csv|
  csv << ["row", "of", "CSV", "data"]
  csv << ["another", "row"]
  # ...
end
csv_string = ["CSV", "data"].to_csv    # to CSV
csv_array  = "CSV,String".parse_csv    # from CSV
CSV           { |csv_out| csv_out << %w{my data here} }  # to $stdout
CSV(csv = "") { |csv_str| csv_str << %w{my data here} }  # to a String
CSV($stderr)  { |csv_err| csv_err << %w{my data here} }  # to $stderr
CSV($stdin)   { |csv_in|  csv_in.each { |row| p row } }  # from $stdin
csv = CSV.new(io, options)
# ... read (with gets() or each()) from and write (with <<) to csv here ...
This new CSV parser is m17n savvy. The parser works in the Encoding of the IO or String object being read from or written to. Your data is never transcoded (unless you ask Ruby to transcode it for you) and will literally be parsed in the Encoding it is in. Thus CSV will return Arrays or Rows of Strings in the Encoding of your data. This is accomplished by transcoding the parser itself into your Encoding.
Some transcoding must take place, of course, to accomplish this multiencoding support. For example, :col_sep, :row_sep, and :quote_char must be transcoded to match your data. Hopefully this makes the entire process feel transparent, since CSV's defaults should just magically work for your data. However, you can set these values manually in the target Encoding to avoid the translation.
It's also important to note that while all of CSV's core parser is now Encoding agnostic, some features are not. For example, the built-in converters will try to transcode data to UTF-8 before making conversions. Again, you can provide custom converters that are aware of your Encodings to avoid this translation. It's just too hard for me to support native conversions in all of Ruby's Encodings.
Anyway, the practical side of this is simple: make sure IO and String objects passed into CSV have the proper Encoding set and everything should just work. CSV methods that allow you to open IO objects (CSV::foreach(), ::open, ::read, and ::readlines) do allow you to specify the Encoding.
One minor exception comes when generating CSV into a String with an Encoding that is not ASCII compatible. There's no existing data for CSV to use to prepare itself and thus you will probably need to manually specify the desired Encoding for most of those cases. It will try to guess using the fields in a row of output though, when using ::generate_line or Array#to_csv().
I try to point out any other Encoding issues in the documentation of methods as they come up.
This has been tested to the best of my ability with all non-“dummy” Encodings Ruby ships with. However, it is brave new code and may have some bugs. Please feel free to report any issues you find with it.
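As a minimal sketch of the behavior described above (the sample data is illustrative): the parser works directly in the data's Encoding, so parsed fields come back in that same Encoding without any transcoding.

```ruby
require "csv"

# Data in a non-default, ASCII-compatible Encoding.
data = "name,city\nRené,Köln\n".encode("ISO-8859-1")

# CSV transcodes its separators to match the data and parses in place.
rows = CSV.parse(data)

rows[1][0].encoding  # fields are returned in ISO-8859-1, not transcoded
```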
The encoding used by all converters.
This Hash holds the built-in converters of CSV that can be accessed by name. You can select Converters with #convert or through the options Hash passed to ::new.

:integer
  Converts any field Integer() accepts.
:float
  Converts any field Float() accepts.
:numeric
  A combination of :integer and :float.
:date
  Converts any field Date::parse() accepts.
:date_time
  Converts any field DateTime::parse() accepts.
:all
  All built-in converters. A combination of :date_time and :numeric.
All built-in converters transcode field data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the field will remain unchanged.
This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.
To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.
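As a sketch of adding entries to this Hash (the names :upcase and :shouty_numbers are made up for the example, not built-ins): a custom converter is registered under a name, and a combo entry is just an Array of names.

```ruby
require "csv"

# A custom converter: upcase fields that are all lowercase letters.
CSV::Converters[:upcase] = lambda do |field|
  field =~ /\A[a-z]+\z/ ? field.upcase : field
end

# A combo converter mixing the custom name with the built-in :numeric.
CSV::Converters[:shouty_numbers] = [:upcase, :numeric]

row = CSV.parse_line("abc,1.5,2", converters: :shouty_numbers)
# row is ["ABC", 1.5, 2]
```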
The options used when no overrides are given by calling code. They are:

:col_sep            ","
:row_sep            :auto
:quote_char         '"'
:field_size_limit   nil
:converters         nil
:unconverted_fields nil
:headers            false
:return_headers     false
:header_converters  nil
:skip_blanks        false
:force_quotes       false
:skip_lines         nil
A Regexp used to find and convert some common Date formats.
A Regexp used to find and convert some common DateTime formats.
A FieldInfo Struct contains details about a field's position in the data source it was read from. CSV will pass this Struct to some blocks that make decisions based on field structure. See CSV.convert_fields() for an example.
index
  The zero-based index of the field in its row.
line
  The line of the data source this row is from.
header
  The header for the column, when available.
This Hash holds the built-in header converters of CSV that can be accessed by name. You can select HeaderConverters with #header_convert or through the options Hash passed to ::new.

:downcase
  Calls downcase() on the header String.
:symbol
  The header String is downcased, spaces are replaced with underscores, non-word characters are dropped, and finally to_sym() is called.
All built-in header converters transcode header data to UTF-8 before attempting a conversion. If your data cannot be transcoded to UTF-8 the conversion will fail and the header will remain unchanged.
This Hash is intentionally left unfrozen and users should feel free to add values to it that can be accessed by all CSV objects.
To add a combo field, the value should be an Array of names. Combo fields can be nested with other combo fields.
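A short sketch of the built-in :symbol header converter at work (the column names are illustrative):

```ruby
require "csv"

table = CSV.parse("First Name,Last Name\nAlice,Bob",
                  headers:           true,
                  header_converters: :symbol)

table.headers  # headers are downcased, underscored Symbols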
The version of the installed library.
The Encoding CSV is parsing or writing in. This will be the Encoding you receive parsed data in and/or the Encoding data will be written in.
This method is a convenience for building Unix-like filters for CSV data. Each row is yielded to the provided block which can alter it as needed. After the block returns, the row is appended to output, altered or not.

The input and output arguments can be anything ::new accepts (generally String or IO objects). If not given, they default to ARGF and $stdout.
The options parameter is also filtered down to ::new after some clever key parsing. Any key beginning with :in_ or :input_ will have that leading identifier stripped and will only be used in the options Hash for the input object. Keys starting with :out_ or :output_ affect only output. All other keys are assigned to both objects.

The :output_row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/).
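A minimal usage sketch, passing explicit String input and output rather than the ARGF/$stdout defaults (the data is illustrative):

```ruby
require "csv"

input  = "1,2,3\n4,5,6\n"
output = ""

# Each row is yielded, altered in place, and appended to output.
CSV.filter(input, output) do |row|
  row.map! { |field| Integer(field) * 2 }
end

output  # => "2,4,6\n8,10,12\n"
```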
# File csv.rb, line 1076
def self.filter(*args)
  # parse options for input, output, or both
  in_options, out_options = Hash.new, {row_sep: $INPUT_RECORD_SEPARATOR}
  if args.last.is_a? Hash
    args.pop.each do |key, value|
      case key.to_s
      when /\Ain(?:put)?_(.+)\Z/
        in_options[$1.to_sym] = value
      when /\Aout(?:put)?_(.+)\Z/
        out_options[$1.to_sym] = value
      else
        in_options[key]  = value
        out_options[key] = value
      end
    end
  end
  # build input and output wrappers
  input  = new(args.shift || ARGF,    in_options)
  output = new(args.shift || $stdout, out_options)

  # read, yield, write
  input.each do |row|
    yield row
    output << row
  end
end
This method is intended as the primary interface for reading CSV files. You pass a path and any options you wish to set for the read. Each row of file will be passed to the provided block in turn.

The options parameter can be anything ::new understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data.

You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
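A small sketch of reading rows with ::foreach, naming the file's Encoding explicitly (a temporary file stands in for a real path):

```ruby
require "csv"
require "tempfile"

rows = []
Tempfile.create(["data", ".csv"]) do |f|
  f.write("a,b\nc,d\n")
  f.flush
  # :encoding names the Encoding of the data in the file
  CSV.foreach(f.path, encoding: "UTF-8") { |row| rows << row }
end

rows  # => [["a", "b"], ["c", "d"]]
```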
# File csv.rb, line 1117
def self.foreach(path, options = Hash.new, &block)
  open(path, options) do |csv|
    csv.each(&block)
  end
end
This method wraps a String you provide, or an empty default String, in a CSV object which is passed to the provided block. You can use the block to append CSV rows to the String and when the block exits, the final String will be returned.
Note that a passed String is modified by this method. Call dup() before passing if you need a new String.
The options parameter can be anything ::new understands. This method understands an additional :encoding parameter, when not passed a String, to set the base Encoding for the output. CSV needs this hint if you plan to output non-ASCII compatible data.
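A minimal sketch of building a CSV String with the block form (the field values are illustrative):

```ruby
require "csv"

csv_string = CSV.generate do |csv|
  csv << ["id", "name"]
  csv << [1, "Alice"]
end

csv_string  # => "id,name\n1,Alice\n"
```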
# File csv.rb, line 1141
def self.generate(*args)
  # add a default empty String, if none was given
  if args.first.is_a? String
    io = StringIO.new(args.shift)
    io.seek(0, IO::SEEK_END)
    args.unshift(io)
  else
    encoding = args[-1][:encoding] if args.last.is_a?(Hash)
    str      = ""
    str.force_encoding(encoding) if encoding
    args.unshift(str)
  end
  csv = new(*args)  # wrap
  yield csv         # yield for appending
  csv.string        # return final String
end
This method is a shortcut for converting a single row (Array) into a CSV String.
The options parameter can be anything ::new understands. This method understands an additional :encoding parameter to set the base Encoding for the output. This method will try to guess your Encoding from the first non-nil field in row, if possible, but you may need to use this parameter as a backup plan.

The :row_sep option defaults to $INPUT_RECORD_SEPARATOR ($/) when calling this method.
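A one-line sketch of the behavior (note how a nil field becomes an empty position and the default row separator is appended):

```ruby
require "csv"

line = CSV.generate_line(["a", nil, "c"])
line  # => "a,,c\n"
```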
# File csv.rb, line 1171
def self.generate_line(row, options = Hash.new)
  options  = {row_sep: $INPUT_RECORD_SEPARATOR}.merge(options)
  encoding = options.delete(:encoding)
  str      = ""
  if encoding
    str.force_encoding(encoding)
  elsif field = row.find { |f| not f.nil? }
    str.force_encoding(String(field).encoding)
  end
  (new(str, options) << row).string
end
This method will return a CSV instance, just like ::new, but the instance will be cached and returned for all future calls to this method for the same data object (tested by Object#object_id()) with the same options.
If a block is given, the instance is passed to the block and the return value becomes the return value of the block.
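A sketch of the caching behavior described above: calling ::instance twice with the same data object and options returns the very same wrapped object.

```ruby
require "csv"

out = ""
a = CSV.instance(out, col_sep: ";")
b = CSV.instance(out, col_sep: ";")

a.equal?(b)  # same cached instance, not just an equal one
```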
# File csv.rb, line 1036
def self.instance(data = $stdout, options = Hash.new)
  # create a _signature_ for this method call, data object and options
  sig = [data.object_id] +
        options.values_at(*DEFAULT_OPTIONS.keys.sort_by { |sym| sym.to_s })

  # fetch or create the instance for this signature
  @@instances ||= Hash.new
  instance    =   (@@instances[sig] ||= new(data, options))

  if block_given?
    yield instance  # run block, if given, returning result
  else
    instance        # or return the instance
  end
end
This constructor will wrap either a String or IO object passed in data for reading and/or writing. In addition to the CSV instance methods, several IO methods are delegated. (See ::open for a complete list.) If you pass a String for data, you can later retrieve it (after writing to it, for example) with CSV.string().

Note that a wrapped String will be positioned at the beginning (for reading). If you want it at the end (for writing), use ::generate. If you want any other positioning, pass a preset StringIO object instead.
You may set any reading and/or writing preferences in the options Hash. Available options are:
:col_sep
  The String placed between each field. This String will be transcoded into the data's Encoding before parsing.
:row_sep
  The String appended to the end of each row. This can be set to the special :auto setting, which requests that CSV automatically discover this from the data. Auto-discovery reads ahead in the data looking for the next "\r\n", "\n", or "\r" sequence. A sequence will be selected even if it occurs in a quoted field, assuming that you would have the same line endings there. If none of those sequences is found, data is ARGF, STDIN, STDOUT, or STDERR, or the stream is only available for output, the default $INPUT_RECORD_SEPARATOR ($/) is used. Obviously, discovery takes a little time. Set manually if speed is important. Also note that IO objects should be opened in binary mode on Windows if this feature will be used, as the line-ending translation can cause problems with resetting the document position to where it was before the read ahead. This String will be transcoded into the data's Encoding before parsing.
:quote_char
  The character used to quote fields. This has to be a single-character String. This is useful for applications that incorrectly use ' as the quote character instead of the correct ". CSV will always consider a double sequence of this character to be an escaped quote. This String will be transcoded into the data's Encoding before parsing.
:field_size_limit
  This is a maximum size CSV will read ahead looking for the closing quote for a field. (In truth, it reads to the first line ending beyond this size.) If a quote cannot be found within the limit, CSV will raise a MalformedCSVError, assuming the data is faulty. You can use this limit to prevent what are effectively DoS attacks on the parser. However, this limit can cause a legitimate parse to fail and thus is set to nil, or off, by default.
:converters
  An Array of names from the Converters Hash and/or lambdas that handle custom conversion. A single converter doesn't have to be in an Array. All built-in converters try to transcode fields to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the field unchanged.
:unconverted_fields
  If set to true, an unconverted_fields() method will be added to all returned rows (Array or CSV::Row) that will return the fields as they were before conversion. Note that :headers supplied by Array or String were not fields of the document and thus will have an empty Array attached.
:headers
  If set to :first_row or true, the initial row of the CSV file will be treated as a row of headers. If set to an Array, the contents will be used as the headers. If set to a String, the String is run through a call of ::parse_line with the same :col_sep, :row_sep, and :quote_char as this instance to produce an Array of headers. This setting causes #shift to return rows as CSV::Row objects instead of Arrays and #read to return CSV::Table objects instead of an Array of Arrays.
:return_headers
  When false, header rows are silently swallowed. If set to true, header rows are returned in a CSV::Row object with identical headers and fields (save that the fields do not go through the converters).
:write_headers
  When true and :headers is set, a header row will be added to the output.
:header_converters
  Identical in functionality to :converters save that the conversions are only made to header rows. All built-in converters try to transcode headers to UTF-8 before converting. The conversion will fail if the data cannot be transcoded, leaving the header unchanged.
:skip_blanks
  When set to a true value, CSV will skip over any rows with no content.
:force_quotes
  When set to a true value, CSV will quote all CSV fields it creates.
:skip_lines
  When set to an object responding to match, every line matching it is considered a comment and ignored during parsing. When set to nil, no line is considered a comment. If the passed object does not respond to match, ArgumentError is thrown.
See CSV::DEFAULT_OPTIONS for the default settings.
Options cannot be overridden in the instance methods for performance reasons, so be sure to set what you want here.
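A short sketch combining a few of the options above at construction time (the data and separators are illustrative):

```ruby
require "csv"

csv = CSV.new("name;age\nAlice;30\n",
              col_sep: ";",
              headers: true)

row = csv.shift   # a CSV::Row, because headers: true
row["name"]       # fields are addressable by header
```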
# File csv.rb, line 1482
def initialize(data, options = Hash.new)
  # build the options for this read/write
  options = DEFAULT_OPTIONS.merge(options)

  # create the IO object we will read from
  @io = data.is_a?(String) ? StringIO.new(data) : data
  # honor the IO encoding if we can, otherwise default to ASCII-8BIT
  @encoding = raw_encoding(nil) ||
              ( if encoding = options.delete(:internal_encoding)
                  case encoding
                  when Encoding; encoding
                  else Encoding.find(encoding)
                  end
                end ) ||
              ( case encoding = options.delete(:encoding)
                when Encoding; encoding
                when /\A[^:]+/; Encoding.find($&)
                end ) ||
              Encoding.default_internal || Encoding.default_external
  #
  # prepare for building safe regular expressions in the target encoding,
  # if we can transcode the needed characters
  #
  @re_esc   = "\\".encode(@encoding) rescue ""
  @re_chars = /#{%"[-\\]\\[\\.^$?*+{}()|# \r\n\t\f\v]".encode(@encoding)}/

  init_separators(options)
  init_parsers(options)
  init_converters(options)
  init_headers(options)
  init_comments(options)

  @force_encoding = !!(encoding || options.delete(:encoding))
  options.delete(:internal_encoding)
  options.delete(:external_encoding)
  unless options.empty?
    raise ArgumentError, "Unknown options:  #{options.keys.join(', ')}."
  end

  # track our own lineno since IO gets confused about line-ends in CSV fields
  @lineno = 0
end
This method opens an IO object, and wraps that with CSV. This is intended as the primary interface for writing a CSV file.
You must pass a filename and may optionally add a mode for Ruby's open(). You may also pass an optional Hash containing any options ::new understands as the final argument.
This method works like Ruby's open() call, in that it will pass a CSV object to a provided block and close it when the block terminates, or it will return the CSV object when no block is provided. (Note: This is different from the Ruby 1.8 CSV library which passed rows to the block. Use ::foreach for that behavior.)
You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data.

You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
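A small round-trip sketch with ::open, writing a file and then reading it back (a temporary file stands in for a real path):

```ruby
require "csv"
require "tempfile"

rows = []
Tempfile.create(["events", ".csv"]) do |f|
  # write with the block form; the CSV object is closed automatically
  CSV.open(f.path, "wb") do |csv|
    csv << ["a", "b"]
    csv << ["1", "2"]
  end
  # read it back the same way
  CSV.open(f.path, "rb") do |csv|
    csv.each { |row| rows << row }
  end
end

rows  # => [["a", "b"], ["1", "2"]]
```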
An opened CSV object will delegate to many IO methods for convenience. You may call:
binmode()
binmode?()
close()
close_read()
close_write()
closed?()
eof()
eof?()
external_encoding()
fcntl()
fileno()
flock()
flush()
fsync()
internal_encoding()
ioctl()
isatty()
path()
pid()
pos()
pos=()
reopen()
seek()
stat()
sync()
sync=()
tell()
to_i()
to_io()
truncate()
tty?()
# File csv.rb, line 1246
def self.open(*args)
  # find the +options+ Hash
  options = if args.last.is_a? Hash then args.pop else Hash.new end
  # wrap a File opened with the remaining +args+ with no newline
  # decorator
  file_opts = {universal_newline: false}.merge(options)
  begin
    f = File.open(*args, file_opts)
  rescue ArgumentError => e
    raise unless /needs binmode/ =~ e.message and args.size == 1
    args << "rb"
    file_opts = {encoding: Encoding.default_external}.merge(file_opts)
    retry
  end
  csv = new(f, options)

  # handle blocks like Ruby's open(), not like the CSV library
  if block_given?
    begin
      yield csv
    ensure
      csv.close
    end
  else
    csv
  end
end
This method can be used to easily parse CSV out of a String. You may either provide a block which will be called with each row of the String in turn, or just use the returned Array of Arrays (when no block is given).

You pass your str to read from, and an optional options Hash containing anything ::new understands.
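A minimal sketch of the no-block form (note the quoted field containing the column separator):

```ruby
require "csv"

rows = CSV.parse("a,b\n\"c,1\",d\n")
rows  # => [["a", "b"], ["c,1", "d"]]
```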
# File csv.rb, line 1286
def self.parse(*args, &block)
  csv = new(*args)
  if block.nil?  # slurp contents, if no block is given
    begin
      csv.read
    ensure
      csv.close
    end
  else           # or pass each row to a provided block
    csv.each(&block)
  end
end
This method is a shortcut for converting a single line of a CSV String into an Array. Note that if line contains multiple rows, anything beyond the first row is ignored.
The options parameter can be anything ::new understands.
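A one-line sketch showing that only the first row of a multi-row String is returned:

```ruby
require "csv"

fields = CSV.parse_line("1,2,3\n4,5,6")
fields  # => ["1", "2", "3"]
```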
# File csv.rb, line 1306
def self.parse_line(line, options = Hash.new)
  new(line, options).shift
end
Use to slurp a CSV file into an Array of Arrays.
Pass the path to the file and any options ::new understands. This method also understands an additional :encoding parameter that you can use to specify the Encoding of the data in the file to be read. You must provide this unless your data is in Encoding::default_external(). CSV will use this to determine how to parse the data.

You may provide a second Encoding to have the data transcoded as it is read. For example, encoding: "UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
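A small sketch of slurping a whole file into an Array of Arrays (a temporary file stands in for a real path):

```ruby
require "csv"
require "tempfile"

arr = nil
Tempfile.create(["people", ".csv"]) do |f|
  f.write("x,y\n1,2\n")
  f.flush
  arr = CSV.read(f.path)
end

arr  # => [["x", "y"], ["1", "2"]]
```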
# File csv.rb, line 1321
def self.read(path, *options)
  open(path, *options) { |csv| csv.read }
end
Alias for ::read.
# File csv.rb, line 1326
def self.readlines(*args)
  read(*args)
end
A shortcut for:
CSV.read( path, { headers:           true,
                  converters:        :numeric,
                  header_converters: :symbol }.merge(options) )
# File csv.rb, line 1337
def self.table(path, options = Hash.new)
  read( path, { headers:           true,
                converters:        :numeric,
                header_converters: :symbol }.merge(options) )
end
The primary write method for wrapped Strings and IOs, row (an Array or CSV::Row) is converted to CSV and appended to the data source. When a CSV::Row is passed, only the row's fields() are appended to the output.
The data source must be open for writing.
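A minimal sketch of appending rows to a wrapped String; since << returns the CSV object, calls can be chained:

```ruby
require "csv"

out = ""
csv = CSV.new(out)
csv << ["a", "b"] << [1, 2]   # the wrapped String is modified in place

out  # => "a,b\n1,2\n"
```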
# File csv.rb, line 1635
def <<(row)
  # make sure headers have been assigned
  if header_row? and [Array, String].include? @use_headers.class
    parse_headers  # won't read data for Array or String
    self << @headers if @write_headers
  end

  # handle CSV::Row objects and Hashes
  row = case row
        when self.class::Row then row.fields
        when Hash            then @headers.map { |header| row[header] }
        else                      row
        end

  @headers = row if header_row?
  @lineno  += 1

  output = row.map(&@quote).join(@col_sep) + @row_sep  # quote and separate
  if @io.is_a?(StringIO) and output.encoding != (encoding = raw_encoding)
    if @force_encoding
      output = output.encode(encoding)
    elsif (compatible_encoding = Encoding.compatible?(@io.string, output))
      @io.set_encoding(compatible_encoding)
      @io.seek(0, IO::SEEK_END)
    end
  end
  @io << output

  self  # for chaining
end
You can use this method to install a CSV::Converters built-in, or provide a block that handles a custom conversion.
If you provide a block that takes one argument, it will be passed the field and is expected to return the converted value or the field itself. If your block takes two arguments, it will also be passed a CSV::FieldInfo Struct, containing details about the field. Again, the block should return a converted field or the field itself.
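A sketch of the two-argument block form: the second argument is a CSV::FieldInfo Struct, so the converter can decide based on the field's column header (the data is illustrative).

```ruby
require "csv"

csv = CSV.new("a,b\nc,d\n", headers: true)

# Upcase only the fields under the "a" header.
csv.convert do |field, info|
  info.header == "a" ? field.upcase : field
end

row = csv.shift
row["a"]  # converted; row["b"] is untouched
```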
# File csv.rb, line 1684
def convert(name = nil, &converter)
  add_converter(:converters, self.class::Converters, name, &converter)
end
Returns the current list of converters in effect. See ::new for details. Built-in converters will be returned by name, while others will be returned as is.
# File csv.rb, line 1551
def converters
  @converters.map do |converter|
    name = Converters.rassoc(converter)
    name ? name.first : converter
  end
end
Yields each row of the data source in turn.
Support for Enumerable.
The data source must be open for reading.
# File csv.rb, line 1715
def each
  if block_given?
    while row = shift
      yield row
    end
  else
    to_enum
  end
end
Returns true if all output fields are quoted. See ::new for details.
# File csv.rb, line 1594
def force_quotes?() @force_quotes end
Identical to #convert, but for header rows.
Note that this method must be called before header rows are read to have any effect.
# File csv.rb, line 1699
def header_convert(name = nil, &converter)
  add_converter( :header_converters,
                 self.class::HeaderConverters,
                 name,
                 &converter )
end
Returns the current list of converters in effect for headers. See ::new for details. Built-in converters will be returned by name, while others will be returned as is.
# File csv.rb, line 1582
def header_converters
  @header_converters.map do |converter|
    name = HeaderConverters.rassoc(converter)
    name ? name.first : converter
  end
end
Returns true if the next row read will be a header row.
# File csv.rb, line 1741
def header_row?
  @use_headers and @headers.nil?
end
Returns nil if headers will not be used, true if they will but have not yet been read, or the actual headers after they have been read. See ::new for details.
# File csv.rb, line 1567
def headers
  @headers || true if @use_headers
end
Returns a simplified description of the key CSV attributes in an ASCII compatible String.
# File csv.rb, line 1902
def inspect
  str = ["<#", self.class.to_s, " io_type:"]
  # show type of wrapped IO
  if    @io == $stdout then str << "$stdout"
  elsif @io == $stdin  then str << "$stdin"
  elsif @io == $stderr then str << "$stderr"
  else                      str << @io.class.to_s
  end
  # show IO.path(), if available
  if @io.respond_to?(:path) and (p = @io.path)
    str << " io_path:" << p.inspect
  end
  # show encoding
  str << " encoding:" << @encoding.name
  # show other attributes
  %w[ lineno col_sep row_sep quote_char skip_blanks ].each do |attr_name|
    if a = instance_variable_get("@#{attr_name}")
      str << " " << attr_name << ":" << a.inspect
    end
  end
  if @use_headers
    str << " headers:" << headers.inspect
  end
  str << ">"
  begin
    str.join('')
  rescue  # any encoding error
    str.map do |s|
      e = Encoding::Converter.asciicompat_encoding(s.encoding)
      e ? s.encode(e) : s.force_encoding("ASCII-8BIT")
    end.join('')
  end
end
Slurps the remaining rows and returns an Array of Arrays.
The data source must be open for reading.
# File csv.rb, line 1730
def read
  rows = to_a
  if @use_headers
    Table.new(rows)
  else
    rows
  end
end
Returns true if headers will be returned as a row of results. See ::new for details.
# File csv.rb, line 1574
def return_headers?() @return_headers end
Rewinds the underlying IO object and resets CSV's lineno() counter.
# File csv.rb, line 1619
def rewind
  @headers = nil
  @lineno  = 0
  @io.rewind
end
The primary read method for wrapped Strings and IOs, a single row is pulled from the data source, parsed and returned as an Array of fields (if header rows are not used) or a CSV::Row (when header rows are used).
The data source must be open for reading.
# File csv.rb, line 1752
def shift
  #########################################################################
  ### This method is purposefully kept a bit long as simple conditional ###
  ### checks are faster than numerous (expensive) method calls.         ###
  #########################################################################

  # handle headers not based on document content
  if header_row? and @return_headers and
     [Array, String].include? @use_headers.class
    if @unconverted_fields
      return add_unconverted_fields(parse_headers, Array.new)
    else
      return parse_headers
    end
  end

  #
  # it can take multiple calls to <tt>@io.gets()</tt> to get a full line,
  # because of \r and/or \n characters embedded in quoted fields
  #
  in_extended_col = false
  csv             = Array.new

  loop do
    # add another read to the line
    unless parse = @io.gets(@row_sep)
      return nil
    end

    parse.sub!(@parsers[:line_end], "")

    if csv.empty?
      #
      # I believe a blank line should be an <tt>Array.new</tt>, not Ruby 1.8
      # CSV's <tt>[nil]</tt>
      #
      if parse.empty?
        @lineno += 1
        if @skip_blanks
          next
        elsif @unconverted_fields
          return add_unconverted_fields(Array.new, Array.new)
        elsif @use_headers
          return self.class::Row.new(Array.new, Array.new)
        else
          return Array.new
        end
      end
    end

    next if @skip_lines and @skip_lines.match parse

    parts = parse.split(@col_sep, -1)
    if parts.empty?
      if in_extended_col
        csv[-1] << @col_sep  # will be replaced with a @row_sep after the parts.each loop
      else
        csv << nil
      end
    end

    # This loop is the hot path of csv parsing. Some things may be non-dry
    # for a reason. Make sure to benchmark when refactoring.
    parts.each do |part|
      if in_extended_col
        # If we are continuing a previous column
        if part[-1] == @quote_char && part.count(@quote_char) % 2 != 0
          # extended column ends
          csv.last << part[0..-2]
          if csv.last =~ @parsers[:stray_quote]
            raise MalformedCSVError,
                  "Missing or stray quote in line #{lineno + 1}"
          end
          csv.last.gsub!(@quote_char * 2, @quote_char)
          in_extended_col = false
        else
          csv.last << part
          csv.last << @col_sep
        end
      elsif part[0] == @quote_char
        # If we are starting a new quoted column
        if part[-1] != @quote_char || part.count(@quote_char) % 2 != 0
          # start an extended column
          csv             << part[1..-1]
          csv.last        << @col_sep
          in_extended_col =  true
        else
          # regular quoted column
          csv << part[1..-2]
          if csv.last =~ @parsers[:stray_quote]
            raise MalformedCSVError,
                  "Missing or stray quote in line #{lineno + 1}"
          end
          csv.last.gsub!(@quote_char * 2, @quote_char)
        end
      elsif part =~ @parsers[:quote_or_nl]
        # Unquoted field with bad characters.
        if part =~ @parsers[:nl_or_lf]
          raise MalformedCSVError, "Unquoted fields do not allow " +
                                   "\\r or \\n (line #{lineno + 1})."
        else
          raise MalformedCSVError, "Illegal quoting in line #{lineno + 1}."
        end
      else
        # Regular ole unquoted field.
        csv << (part.empty? ? nil : part)
      end
    end

    # Replace tacked on @col_sep with @row_sep if we are still in an extended
    # column.
    csv[-1][-1] = @row_sep if in_extended_col

    if in_extended_col
      # if we're at eof?(), a quoted field wasn't closed...
      if @io.eof?
        raise MalformedCSVError,
              "Unclosed quoted field on line #{lineno + 1}."
      elsif @field_size_limit and csv.last.size >= @field_size_limit
        raise MalformedCSVError, "Field size exceeded on line #{lineno + 1}."
      end
      # otherwise, we need to loop and pull some more data to complete the row
    else
      @lineno += 1

      # save fields unconverted fields, if needed...
      unconverted = csv.dup if @unconverted_fields

      # convert fields, if needed...
      csv = convert_fields(csv) unless @use_headers or @converters.empty?
      # parse out header rows and handle CSV::Row conversions...
      csv = parse_headers(csv)  if     @use_headers

      # inject unconverted fields and accessor, if requested...
      if @unconverted_fields and not csv.respond_to? :unconverted_fields
        add_unconverted_fields(csv, unconverted)
      end

      # return the results
      break csv
    end
  end
end
Returns true if blank lines are skipped by the parser. See ::new for details.
# File csv.rb, line 1592
def skip_blanks?() @skip_blanks end
Returns true if unconverted_fields() will be added to parsed results. See ::new for details.
# File csv.rb, line 1561
def unconverted_fields?() @unconverted_fields end
Returns true if headers are written in output. See ::new for details.
# File csv.rb, line 1576
def write_headers?() @write_headers end