UPDATE: Go ahead and read the mini-rant that follows, because it does have some substance to it, but also be sure to check out my proposal for a possible alternative to DCI and share your thoughts about it.
Of course, I don’t actually believe the statement that I’ve used for the title of this post, but I am skeptical about the general applicability of the DCI Paradigm. There has been quite a buzz about it in the Ruby community, but even the best materials on it have gaping holes that leave me wondering whether anyone has actually applied the paradigm in a real, full-scale project before.
To change my mind about DCI quite possibly being a waste of time, I’d need to see three things:
A sample application with at least two dozen use cases. DCI is a big architectural pattern that is meant to keep big programs maintainable and readable as they grow. So far, I haven’t seen anything close to a “big program” implemented in it. Victor Savkin did write a nice example program, but even he admits that the “real benefits” don’t kick in until you apply the pattern at a larger scale.
A clearly defined argument (with real code examples) that shows the benefit of contexts. The idea of using simple data objects and implementing responsibility-centric role objects makes perfect sense to me, but contexts seem like ill-defined containers of little value. I want to see a clear example (not a contrived one) that compares simple role-centric service objects to something that uses the full DCI paradigm with contexts, and I want to hear what the specific technical tradeoffs are. (While I’m not 100% sure I agree with the conclusion, Rebo has risen to the challenge and written a very detailed and specific article about what value contexts bring to the table.)
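For what it’s worth, here is the shape of the comparison I have in mind, sketched with the canonical money-transfer example. The `Account` struct and the role/context names are my own invention for illustration, not taken from any DCI reference implementation:

```ruby
# A bare-bones data object: no behavior beyond its attributes.
Account = Struct.new(:balance)

# Option 1: a role-centric service object. The "roles" are implicit
# in the service's own logic; the data objects are just operated on.
class TransferService
  def self.call(source, destination, amount)
    source.balance -= amount
    destination.balance += amount
  end
end

# Option 2: a DCI-style context. The context casts plain data objects
# into named roles at runtime (via instance-level #extend) and then
# triggers the interaction between them.
module TransferSource
  def withdraw(amount)
    self.balance -= amount
  end
end

module TransferDestination
  def deposit(amount)
    self.balance += amount
  end
end

class TransferContext
  def initialize(source, destination)
    @source      = source.extend(TransferSource)
    @destination = destination.extend(TransferDestination)
  end

  def call(amount)
    @source.withdraw(amount)
    @destination.deposit(amount)
  end
end

a, b = Account.new(100), Account.new(0)
TransferContext.new(a, b).call(25)
# a.balance is now 75, b.balance is now 25
```

Both versions do the same work; what I want spelled out is what the extra machinery of the context buys you over the plain service object once the program grows.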
A clear explanation of why simple object composition would not be a better choice than the use of traits (or, in the case of Ruby, module mixins at the instance level). Why is direct data access important, especially if your goal is to make your data models into simple value objects? I’d like to see a specific, non-theoretical example of what you lose by using decorators instead of mixins in Ruby.
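To make that third point concrete, here is the kind of difference I mean, sketched with a toy account object (the names are mine, purely for illustration):

```ruby
require 'delegate'

Account = Struct.new(:balance)

# Role as a module, mixed in at the instance level: the role method
# lives on the object itself, and self IS the data object.
module Payer
  def pay(amount)
    self.balance -= amount
  end
end

# The same role as a decorator: the role method lives on a wrapper,
# and calls are forwarded to the underlying account.
class PayerDecorator < SimpleDelegator
  def pay(amount)
    self.balance -= amount
  end
end

account = Account.new(100)
account.extend(Payer)
account.pay(10)               # account.balance => 90
# Any code holding a reference to account now sees the role:
account.respond_to?(:pay)     # => true

plain   = Account.new(100)
wrapped = PayerDecorator.new(plain)
wrapped.pay(10)               # plain.balance => 90
# But the role exists only on the wrapper, not on the data object,
# and the wrapper is a distinct object from the account it wraps:
plain.respond_to?(:pay)       # => false
wrapped.equal?(plain)         # => false
```

The identity split at the bottom is the usual argument for mixins, but I have yet to see a real application where it bites hard enough to rule decorators out.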
Note that you’ll find none of this in the original paper on DCI. It was written at a time when the only things built with DCI were a trivial little animation of shapes and a rough proof-of-concept IDE to manipulate it. In the paper, the author discourages you from reading the source of his examples (which to me is suspect), yet posits that DCI will be very important and useful for a wide range of applications. Everyone has different standards, but for me to believe those claims, I’d need to see the three bits of evidence I’ve listed above.
I’ve worked on huge applications in Ruby and Rails before. I very much want to believe in DCI, but I’m having a hard time accepting the promises of Clean Ruby when it seems like the work on this paradigm is half-done. If it weren’t so oversold and hyped, I think I’d be more patient, but right now I’m just frustrated and confused.
If you have seen the kind of evidence I’m looking for, please do share it with me. I will happily retract each claim if I’m proven wrong, or if someone rises to the challenge of addressing these points.