Subclass TestCase to create your own tests. Typically you'll want a TestCase subclass per implementation class.
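For example, a minimal test for a hypothetical Meme class (in the spirit of the minitest README) might look like this:

require 'minitest/autorun'

# Hypothetical class under test.
class Meme
  def i_can_has_cheezburger?
    "OHAI!"
  end
end

# One TestCase subclass for the Meme implementation class.
class TestMeme < MiniTest::Unit::TestCase
  def setup
    @meme = Meme.new
  end

  def test_that_kitty_can_eat
    assert_equal "OHAI!", @meme.i_can_has_cheezburger?
  end
end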
Returns a set of ranges stepped exponentially from min to
max by powers of base. Eg:
bench_exp(2, 16, 2) # => [2, 4, 8, 16]
# File minitest/benchmark.rb, line 26
def self.bench_exp min, max, base = 10
  min = (Math.log10(min) / Math.log10(base)).to_i
  max = (Math.log10(max) / Math.log10(base)).to_i

  (min..max).map { |m| base ** m }.to_a
end
Returns a set of ranges stepped linearly from min to
max by step. Eg:
bench_linear(20, 40, 10) # => [20, 30, 40]
# File minitest/benchmark.rb, line 39
def self.bench_linear min, max, step = 10
  (min..max).step(step).to_a
rescue LocalJumpError # 1.8.6
  r = []; (min..max).step(step) { |n| r << n }; r
end
Specifies the ranges used for benchmarking for that class. Defaults to exponential growth from 1 to 10k by powers of 10. Override if you need different ranges for your benchmarks.
See also: ::bench_exp and ::bench_linear.
# File minitest/benchmark.rb, line 67
def self.bench_range
  bench_exp 1, 10_000
end
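To benchmark over a different range, override ::bench_range in your subclass. A minimal sketch (the class name is illustrative):

require 'minitest/autorun'
require 'minitest/benchmark'

class BenchMyAlgorithm < MiniTest::Unit::TestCase
  # Benchmark at n = 1_000, 2_000, ..., 10_000 instead of the
  # default exponential 1 .. 10_000 range.
  def self.bench_range
    bench_linear 1_000, 10_000, 1_000
  end
end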
Returns all test suites that have benchmark methods.
# File minitest/benchmark.rb, line 56
def self.benchmark_suites
  TestCase.test_suites.reject { |s| s.benchmark_methods.empty? }
end
Call this at the top of your tests when you absolutely positively need to have ordered tests. In doing so, you're admitting that you suck and your tests are weak.
# File minitest/unit.rb, line 1367
def self.i_suck_and_my_tests_are_order_dependent!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :alpha end
  end
end
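Call it directly in the class body; for instance (class and method names are illustrative):

require 'minitest/autorun'

class TestLegacyOrdering < MiniTest::Unit::TestCase
  # Forces deterministic alphabetical (:alpha) ordering for this class only.
  i_suck_and_my_tests_are_order_dependent!

  def test_1_create_record
    # runs before test_2_update_record because of :alpha ordering
  end

  def test_2_update_record
  end
end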
Make diffs for this TestCase use pretty_inspect so that diffs in assert_equal can be more detailed. NOTE: this is much slower than the regular inspect but much more usable for complex objects.
# File minitest/unit.rb, line 1380
def self.make_my_diffs_pretty!
  require 'pp'

  define_method :mu_pp do |o|
    o.pretty_inspect
  end
end
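Like the other class-level switches, it is called in the class body. A minimal sketch (the class name and data are illustrative):

require 'minitest/autorun'

class TestComplexObjects < MiniTest::Unit::TestCase
  # Failures in assert_equal will now diff pretty_inspect output,
  # which is far easier to read for nested structures.
  make_my_diffs_pretty!

  def test_nested_structures
    expected = { :a => [1, 2, 3], :b => { :c => "deep" } }
    actual   = { :a => [1, 2, 3], :b => { :c => "deep" } }
    assert_equal expected, actual
  end
end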
Call this at the top of your tests when you want to run your tests in parallel. In doing so, you're admitting that you rule and your tests are awesome.
# File minitest/unit.rb, line 1393
def self.parallelize_me!
  class << self
    undef_method :test_order if method_defined? :test_order
    define_method :test_order do :parallel end
  end
end
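A minimal sketch of opting a class in (the class name is illustrative):

require 'minitest/autorun'

class TestThreadSafeCode < MiniTest::Unit::TestCase
  # Opt this class into the parallel test runner. Tests here must not
  # depend on each other's order or share mutable state.
  parallelize_me!

  def test_pure_computation
    assert_equal 4, 2 + 2
  end
end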
Runs the given work, gathering the times of each run. Range
and times are then passed to a given validation proc. Outputs
the benchmark name and times in tab-separated format, making it easy to
paste into a spreadsheet for graphing or further analysis.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  validation = proc { |x, y| ... }

  assert_performance validation do |n|
    @obj.algorithm(n)
  end
end
# File minitest/benchmark.rb, line 89
def assert_performance validation, &work
  range = self.class.bench_range

  io.print "#{__name__}"

  times = []

  range.each do |x|
    GC.start
    t0 = Time.now
    instance_exec(x, &work)
    t = Time.now - t0

    io.print "\t%9.6f" % t
    times << t
  end
  io.puts

  validation[range, times]
end
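The validation proc receives the range and the measured times and makes its own assertions. A minimal hand-rolled sketch, assuming a hypothetical @obj.algorithm under test and an arbitrary one-second budget:

def bench_under_time_budget
  validation = proc do |range, times|
    # Hypothetical budget: no single run may take longer than a second.
    times.each { |t| assert_operator t, :<, 1.0 }
  end

  assert_performance validation do |n|
    @obj.algorithm(n)   # @obj.algorithm is a hypothetical method under test
  end
end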
Runs the given work and asserts that the times gathered fit a constant rate (eg, linear slope == 0) within a given threshold. Note: because we're testing for a slope of 0,
R^2 is not a good determining factor for the fit, so the threshold is
applied against the slope itself. As such, you probably want to tighten it
from the default.
See www.graphpad.com/curvefit/goodness_of_fit.htm for more details.
Fit is calculated by fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_constant 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File minitest/benchmark.rb, line 133
def assert_performance_constant threshold = 0.99, &work
  validation = proc do |range, times|
    a, b, rr = fit_linear range, times
    assert_in_delta 0, b, 1 - threshold
    [a, b, rr]
  end

  assert_performance validation, &work
end
Runs the given work and asserts that the times gathered fit an exponential curve within a given error threshold.
Fit is calculated by fit_exponential.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_exponential 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File minitest/benchmark.rb, line 159
def assert_performance_exponential threshold = 0.99, &work
  assert_performance validation_for_fit(:exponential, threshold), &work
end
Runs the given work and asserts that the times gathered fit a straight line within a given error threshold.
Fit is calculated by fit_linear.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_linear 0.9999 do |n|
    @obj.algorithm(n)
  end
end
# File minitest/benchmark.rb, line 179
def assert_performance_linear threshold = 0.99, &work
  assert_performance validation_for_fit(:linear, threshold), &work
end
Runs the given work and asserts that the times gathered fit a power curve within a given error threshold.
Fit is calculated by fit_power.
Ranges are specified by ::bench_range.
Eg:
def bench_algorithm
  assert_performance_power 0.9999 do |x|
    @obj.algorithm
  end
end
# File minitest/benchmark.rb, line 199
def assert_performance_power threshold = 0.99, &work
  assert_performance validation_for_fit(:power, threshold), &work
end
Takes an array of x/y pairs and calculates the general R^2 value.
See: en.wikipedia.org/wiki/Coefficient_of_determination
# File minitest/benchmark.rb, line 208
def fit_error xys
  y_bar  = sigma(xys) { |x, y| y } / xys.size.to_f
  ss_tot = sigma(xys) { |x, y| (y - y_bar) ** 2 }
  ss_err = sigma(xys) { |x, y| (yield(x) - y) ** 2 }

  1 - (ss_err / ss_tot)
end
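Since fit_error yields each x back to the caller's block to obtain the model's prediction, it can be sanity-checked by hand from inside any test or benchmark method. A small sketch with made-up data:

xys = [[1, 2.0], [2, 4.0], [3, 6.0]]

# Predicting y = 2x exactly reproduces the data, so R^2 == 1.0.
fit_error(xys) { |x| 2.0 * x }   # => 1.0

# Always predicting the mean (4.0) explains none of the variance: R^2 == 0.0.
fit_error(xys) { |x| 4.0 }       # => 0.0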
To fit a functional form: y = ae^(bx).
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingExponential.html
# File minitest/benchmark.rb, line 223
def fit_exponential xs, ys
  n     = xs.size
  xys   = xs.zip(ys)
  sxlny = sigma(xys) { |x, y| x * Math.log(y) }
  slny  = sigma(xys) { |x, y| Math.log(y) }
  sx2   = sigma(xys) { |x, y| x * x }
  sx    = sigma xs

  c = n * sx2 - sx ** 2
  a = (slny * sx2 - sx * sxlny) / c
  b = (n * sxlny - sx * slny) / c

  return Math.exp(a), b, fit_error(xys) { |x| Math.exp(a + b * x) }
end
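A quick sanity check, from inside a test or benchmark method, on synthetic data that already follows y = 2e^(0.5x) (values illustrative):

xs = [1, 2, 3, 4, 5]
ys = xs.map { |x| 2.0 * Math.exp(0.5 * x) }

a, b, rr = fit_exponential xs, ys
# a  => ~2.0
# b  => ~0.5
# rr => ~1.0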
Fits the functional form: y = a + bx.
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFitting.html
# File minitest/benchmark.rb, line 245
def fit_linear xs, ys
  n   = xs.size
  xys = xs.zip(ys)
  sx  = sigma xs
  sy  = sigma ys
  sx2 = sigma(xs)  { |x| x ** 2 }
  sxy = sigma(xys) { |x, y| x * y }

  c = n * sx2 - sx ** 2
  a = (sy * sx2 - sx * sxy) / c
  b = (n * sxy - sx * sy) / c

  return a, b, fit_error(xys) { |x| a + b * x }
end
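Data that already lies on the line y = 1 + 2x comes back with the exact intercept, slope, and an R^2 of 1.0 (values illustrative, run from inside a test or benchmark method):

xs = [1, 2, 3, 4, 5]
ys = xs.map { |x| 1.0 + 2.0 * x }

a, b, rr = fit_linear xs, ys
# a  => ~1.0 (intercept)
# b  => ~2.0 (slope)
# rr => ~1.0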
To fit a functional form: y = ax^b.
Takes x and y values and returns [a, b, r^2].
See: mathworld.wolfram.com/LeastSquaresFittingPowerLaw.html
# File minitest/benchmark.rb, line 267
def fit_power xs, ys
  n       = xs.size
  xys     = xs.zip(ys)
  slnxlny = sigma(xys) { |x, y| Math.log(x) * Math.log(y) }
  slnx    = sigma(xs)  { |x| Math.log(x) }
  slny    = sigma(ys)  { |y| Math.log(y) }
  slnx2   = sigma(xs)  { |x| Math.log(x) ** 2 }

  b = (n * slnxlny - slnx * slny) / (n * slnx2 - slnx ** 2)
  a = (slny - b * slnx) / n

  return Math.exp(a), b, fit_error(xys) { |x| (Math.exp(a) * (x ** b)) }
end
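Similarly, synthetic data following y = 3x^2 recovers a ≈ 3 and b ≈ 2 (values illustrative, run from inside a test or benchmark method):

xs = [1, 2, 3, 4, 5]
ys = xs.map { |x| 3.0 * (x ** 2) }

a, b, rr = fit_power xs, ys
# a  => ~3.0
# b  => ~2.0
# rr => ~1.0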
Returns the output IO object.
# File minitest/unit.rb, line 1344
def io
  @__io__ = true
  MiniTest::Unit.output
end
Have we hooked up the IO yet?
# File minitest/unit.rb, line 1352
def io?
  @__io__
end
Returns true if the test passed.
# File minitest/unit.rb, line 1434
def passed?
  @passed
end
Runs the tests, reporting the status to the runner.
# File minitest/unit.rb, line 1281
def run runner
  trap "INFO" do
    runner.report.each_with_index do |msg, i|
      warn "\n%3d) %s" % [i + 1, msg]
    end
    warn ''
    time = runner.start_time ? Time.now - runner.start_time : 0
    warn "Current Test: %s#%s %.2fs" % [self.class, self.__name__, time]
    runner.status $stderr
  end if SUPPORTS_INFO_SIGNAL

  start_time = Time.now

  result = ""
  begin
    @passed = nil
    self.before_setup
    self.setup
    self.after_setup

    self.run_test self.__name__

    result = "." unless io?
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, nil
    @passed = true
  rescue *PASSTHROUGH_EXCEPTIONS
    raise
  rescue Exception => e
    @passed = false
    time = Time.now - start_time
    runner.record self.class, self.__name__, self._assertions, time, e
    result = runner.puke self.class, self.__name__, e
  ensure
    %w{ before_teardown teardown after_teardown }.each do |hook|
      begin
        self.send hook
      rescue *PASSTHROUGH_EXCEPTIONS
        raise
      rescue Exception => e
        @passed = false
        result = runner.puke self.class, self.__name__, e
      end
    end
    trap 'INFO', 'DEFAULT' if SUPPORTS_INFO_SIGNAL
  end
  result
end
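As the body of run shows, each test is wrapped by before_setup / setup / after_setup and before_teardown / teardown / after_teardown, so library hooks can be layered around user-defined setup without clobbering it. A small sketch (class name and output are illustrative):

require 'minitest/autorun'

class TestHookOrder < MiniTest::Unit::TestCase
  def before_setup; super; puts "before_setup"; end
  def setup;        super; puts "setup";        end
  def after_setup;  super; puts "after_setup";  end

  def before_teardown; super; puts "before_teardown"; end
  def teardown;        super; puts "teardown";        end
  def after_teardown;  super; puts "after_teardown";  end

  def test_something
    puts "test_something"
    assert true
  end
end

# Running prints: before_setup, setup, after_setup, test_something,
# before_teardown, teardown, after_teardown.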
Runs before every test. Use this to set up before each test run.
# File minitest/unit.rb, line 1442
def setup; end
Enumerates over enum, mapping with block if given, and returns the sum of the results. Eg:
sigma([1, 2, 3])                # => 1 + 2 + 3 => 6
sigma([1, 2, 3]) { |n| n ** 2 } # => 1 + 4 + 9 => 14
# File minitest/benchmark.rb, line 288
def sigma enum, &block
  enum = enum.map(&block) if block
  enum.inject { |sum, n| sum + n }
end
Runs after every test. Use this to clean up after each test run.
# File minitest/unit.rb, line 1448
def teardown; end
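A typical pattern is to build fixtures in setup and release external resources in teardown. A minimal sketch using a Tempfile (the class name is illustrative):

require 'tempfile'
require 'minitest/autorun'

class TestTempfileUser < MiniTest::Unit::TestCase
  def setup
    # Runs before every test: create a fresh temporary file.
    @file = Tempfile.new "fixture"
  end

  def teardown
    # Runs after every test: release the file handle and delete it.
    @file.close
    @file.unlink
  end

  def test_file_starts_empty
    assert_equal 0, @file.size
  end
end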
Returns a proc that calls the specified fit method and asserts that the error is within a tolerable threshold.
# File minitest/benchmark.rb, line 297
def validation_for_fit msg, threshold
  proc do |range, times|
    a, b, rr = send "fit_#{msg}", range, times
    assert_operator rr, :>=, threshold
    [a, b, rr]
  end
end