This is the first part in a series (second part here). First I will show that shared examples suffer from too many flaws to be useful, and argue that their very existence as an RSpec feature betrays a deep misunderstanding of what and how to test. Second, I will demonstrate how simpler, more explicit code solves all of shared examples' issues.
My sources will be:
- Stephan Hagemann's even-handed "Never use shared examples groups! Usually."
- Josef Strzibny's "Why I don't enjoy RSpec all that much" > The idea to DRY your tests comes from a well-intentioned place, the reality is that it's a confusing, hard-to-debug part of the test.
- RSpec docs > Shared examples let you describe behaviour of classes or modules.
- SO thread "Shared examples vs shared context" > Shared examples are tests written in a way that you can run them in multiple settings; extracting common behavior between objects.
- David Chelimsky's lucid differentiation of "behavior at mix-in time" VS "behavior triggered due to messages" in "Specifying mixins with shared example groups in RSpec-2"
The problems
- A crutch to lacking Mix-in testing skill, too easy to abuse, lowers design quality
- Hard to understand and debug
- Hard to extend
- Hard to see where the test failed and what to fix (`rspec spec/my_class_spec.rb[1:2:5:1]`).
- Slows the suite down: `m*n` instead of `m+n` specs.
A crutch
In the words of Shawn McCool:
It's sometimes difficult to identify poorly performing patterns when they're useful to patch up problems caused by other poorly performing patterns.
It's really very simple: the vast majority of engineers do not test mix-ins properly, through unit tests.
And I get it: it's involved, if not hard. Especially if the mix-in implicitly depends on some other logic, usually coming from the framework.
For example, to test a module intended for use in a Rails model, one would have to create (and later clean up) an ad-hoc table, initialize a dummy model, and mix the module into the dummy model class just to get to the actual testing.
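When the module's dependencies are small, that harness can be sketched without a database at all. A minimal sketch (all names below are illustrative, not from any real codebase), assuming the module only depends on the host responding to `#name`:

```ruby
# A mix-in tested in isolation: mix it into a throwaway class that
# provides exactly what the module depends on, instead of a real model.
module Sluggable
  def slug
    name.downcase.tr(" ", "-")
  end
end

# dummy host standing in for the Rails model
DummySluggable = Struct.new(:name) { include Sluggable }

DummySluggable.new("Hello World").slug # => "hello-world"
```

An isolated spec would then describe `Sluggable` through `DummySluggable`; only modules that genuinely need the database (scopes, callbacks) require the ad-hoc table dance.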
In my experience, most devs have neither the patience, nor the skill, nor the ideological inclination to apply isolated test practices despite their numerous benefits. Enter `shared_examples`, a quick-and-easy way to "spec" what is normally logic defined in some module through the actual including objects.
This has heavy, but often unappreciated consequences.
Firstly, using shared examples robs developers of a place in code (and thus in thinking) for specifying at-mix-in behaviors. What methods does the includer need to define to link with the mix-in, what config is possible and necessary?
Even if shared examples for the actual includers all pass, there's no place to put counterexamples: what happens if the includer is misconfigured, etc.
Secondly, using shared examples starts (or accelerates) the slide down a slippery slope of suboptimal testing practices and lowered design quality. "Why do isolated tests when we have a suite of fancy shared examples?" It's not long before a spec-less `class Base` or `class Common` shows up as a grab-bag of methods extracted as afterthoughts once many similar services have been written. As the SO thread comment betrays, shared examples have allowed us to extract common spec behavior, rather than focusing on extracting common and cohesive source code.
In the next part of this series I will describe how to design and spec mix-ins.
Complex
David Chelimsky lists all the ways shared groups can be customized: parameterization, host-group inheritance, extension-block definitions. This powerful (leaky?) API is also their greatest weakness.
```ruby
shared_examples "M" do |user|
  let(:user) { User.new(:i) }

  it "supports #do_something" do
    expect(user.do_something).to eq(:something)
  end
end

describe SomeClass do
  let(:user) { User.new(:a) }

  def user
    User.new(:d)
  end

  it_behaves_like("M", User.new(:b)) do
    let(:user) { User.new(:c) }
  end
end
```
Which `User` will `do_something` be called on: a, b, c, d, or i?
Nothing but discipline prevents devs from mixing all five sources of variables. Oftentimes the only way to understand where the data comes from is to parse the whole, often large, example group plus the including spec.
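Part of the confusion is plain Ruby scoping rather than anything RSpec-specific: a block parameter is a local variable, and locals shadow same-named methods, which is roughly what happens when the `|user|` parameter meets a `let(:user)`. A stripped-down sketch with stand-ins instead of RSpec:

```ruby
# stand-in for let(:user) — let defines a method
def user
  :from_let
end

# stand-in for `shared_examples "M" do |user|` — the block parameter
# is a local variable, and locals shadow same-named methods
shared_body = ->(user) { user }

shared_body.call(:from_param) # => :from_param — the parameter wins
user                          # => :from_let — outside the block, the method is visible again
```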
In the next part of this series I will show you how to bring back intention and clarity to specs.
"Ghost" variables
Shared examples with "ghost" variables are also a thing, requiring users of the example group to parse it to understand what is missing:
```ruby
shared_examples "M" do
  it "supports #do_something" do
    expect(ghost.do_something).to eq(:something)
  end
end
```
`ghost` needs to be defined includer-side, and there's no clear way or place to document this.
Keep this lack of clarity in mind, as it maps directly onto the same problem in source code: methods an includer needs to define for a module to work properly are also often undocumented, and only running the code and getting an error gives clues.
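On the source-code side, one conventional plain-Ruby mitigation (a sketch with illustrative names, not something any particular codebase uses) is to declare the required method in the module itself, so the contract is at least written down and fails with a pointed message:

```ruby
module Doer
  # documents the contract: includers must supply #ghost
  def ghost
    raise NotImplementedError, "#{self.class} must define #ghost to use Doer"
  end

  def do_something
    ghost.to_sym
  end
end

class GoodIncluder
  include Doer

  def ghost
    "something"
  end
end

class ForgetfulIncluder
  include Doer # forgot to define #ghost
end

GoodIncluder.new.do_something # => :something

begin
  ForgetfulIncluder.new.do_something
rescue NotImplementedError => e
  e.message # names both the class and the missing method
end
```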
Examples defined through metadata
(Ab)using RSpec's metadata allows sidestepping the need for `it_behaves_like` altogether.
One more step of indirection, and you can have specs seemingly without examples, yay?
```ruby
# spec/support/shared_examples/behavior_exhibiter_examples.rb
shared_examples "behavior exhibiter", :behavior_exhibiter do
  describe "#method1" do
    it "exhibits some behavior" do
      # ...
    end
  end
end

# spec/my_class_spec.rb
describe MyClass, :behavior_exhibiter
```
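For metadata-triggered groups to fire at all, the file defining them must have been loaded somewhere. Typically that's the conventional support-file glob in `spec_helper.rb` (assumed setup, not shown above), adding yet another hop to the indirection:

```ruby
# spec/spec_helper.rb — conventional glob that loads everything under
# spec/support, including metadata-driven shared example files
Dir[File.join(__dir__, "support", "**", "*.rb")].sort.each { |f| require f }
```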
Example groups masquerading as contexts
Since `shared_examples` and `shared_context` are very much aliases as far as RSpec is concerned, there's nothing stopping devs from polluting contexts with examples.
```ruby
shared_context "with some setup" do
  let(:execution) { Fabricate(:wss_integration_execution) }

  it "also asserts something" do
    # ...
  end
end

RSpec.describe MyClass do
  include_context "with some setup" # < secret example!
end
```
No extending
Imagine we have `it_behaves_like("M")`, with some complex setup and a couple of examples inside. But what if we've overridden a method from `M` with a `super` call, and now the object under test `behaves_like("M and something else")`? Now we have to extract the setup into a shared context, or duplicate it outright and add an additional `it`, doubling the setup time.
In the next part of this series I will show you how to have a more modular approach to specs.
Hard to see where the test failed, what to fix
Yes, you can rerun the bracketed rspec command, but to get to the failing example we still need to parse the failing example's description line. This gets cumbersome when there are many similarly-named examples (only the context differs). Furthermore, even once you find the failing example, since there is no corresponding source-code file, you may need additional investigation to determine which method is failing.
In the next part of this series I will show you how to have `it`s correspond to the source specced.
Slow suite
This may seem trivial at the start, when there's just one use of the shared example, but over time, as shared-example use becomes endemic, there will be a multiplicative explosion of needlessly run examples.
Instead of having `m` isolated specs for a module, plus maybe `n` specs (one or two for each includer), i.e. `m+n`, you may end up with `m` shared examples sprinkled across `n` includers, i.e. `m*n`.
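Concrete (hypothetical) counts make the gap obvious:

```ruby
# assuming 10 examples in the shared group and 8 includers
m, n = 10, 8

m * n # => 80 examples run when the shared group is repeated per includer
m + n # => 18 with isolated module specs plus a couple of per-includer specs
```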
In the next part of this series I will show you how isolated and contract-testing cut down on the specs needed.
Photo by Jasmin Egger on Unsplash