Tuesday, October 27, 2009

Misadventures in DDD With Not-So-Ubiquitous Language

Lots of people have been talking about DDD (Domain-Driven Design) lately, and my friend Tom just discussed it over on his blog in the article Refactoring Toward Deeper Insight. So, I thought I'd share one way not to do DDD.

Once upon a time, shortly after the big blue book was published, I saw it in the store, thought it looked interesting, and bought it. I read through the first section about knowledge crunching and ubiquitous language, and it all made so much sense: where I worked, the business analysts and software developers already crunched knowledge together into a domain model they both understood (or so I thought). The trouble was that while everyone used that shared language when they talked, the developers used their own names when they actually wrote the code. Why? I didn't really know; mostly, I supposed, because we usually thought our names for things were better. Keeping the language the same just wasn't something we'd thought about much.

So, I decided that on the project I was working on, we'd start doing DDD by using the business language's names for the classes in our code. I didn't discuss it with the business people, since I didn't think it involved them much: they didn't need to change their language; we developers just needed to start using it.
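
To make the idea concrete, here's a minimal sketch of what "using the business language in code" looks like. The domain (shipments and consignees) and every name in it are hypothetical illustrations, not from the actual project:

```python
# Hypothetical sketch: name classes after the business's own terms.
# The business said "Shipment" and "Consignee"; left to ourselves, we
# might have invented names like "DeliveryRecord" and "RecipientInfo".

class Consignee:
    """The party who, in the business's words, receives a shipment."""
    def __init__(self, name: str):
        self.name = name


class Shipment:
    """Called 'Shipment' because that's the word the business uses."""
    def __init__(self, consignee: Consignee):
        self.consignee = consignee
        self.delivered = False

    def mark_delivered(self) -> None:
        self.delivered = True


shipment = Shipment(Consignee("Acme Corp"))
shipment.mark_delivered()
print(shipment.delivered)  # True
```

The payoff is that a conversation with an analyst about "marking a shipment delivered" maps directly onto a method you can point at; the cost, as the rest of this story shows, is that the business's words are now load-bearing in your code.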

Everything went fine for most of the project. Then, a couple of days before release, the business decided they wanted to rename one of our core domain concepts. And that was okay, right, because it was just changing a few labels on a few screens? Except that, by design, the old name was now woven all through our code.

And that's when I learned that if you want to do DDD, it helps to get buy-in from the business.

So, happy DDD-ing everybody—because I do think lots of the ideas are great and worth exploring—just make sure you let the business know what you're up to and you'll avoid my rookie mistake.

Monday, October 26, 2009

Estimating is Like Measuring the Coast of Great Britain

Earlier today, Liz Keogh tweeted this, which I couldn't agree with more, as long as she's saying what I think she's saying:

Thinking about fractal estimation—estimates grow as scale shrinks—and wondering what dimension current projects have.

A while back, every time I did technical estimates for a release, I found myself thinking more and more of the illustrations along the side of this Wikipedia article: How Long Is the Coast of Britain? Statistical Self-Similarity and Fractional Dimension.

The situation was this: as a developer in a waterfall process, when I tried to estimate how long it would take to code something, even with the requirements supposedly "fixed," I would always miss things that made the coding take longer than I'd estimated. The only way to avoid that was to examine the code I was proposing to change in so much detail that I was basically writing it, and at that point it isn't estimating anymore. In essence, there were always craggy little details that, upon actually writing the code, became significant on their own or added up with others to become significant.

Now, I know that if your architecture and code are clean, this should happen less. But I still think it's generally true that until you actually write the code, you really don't know how long it will take, and the amount of time will usually be longer than you think when analyzing it at higher levels.
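
The coastline analogy can even be put in numbers. The Wikipedia article above describes Richardson's empirical relation, under which the measured length of a coastline grows without bound as the measuring ruler shrinks. A small sketch (the constant M is an arbitrary illustrative value; the dimension D of roughly 1.25 is the figure commonly quoted for Britain's west coast):

```python
# Richardson's empirical relation: L(G) = M * G^(1 - D), where G is the
# ruler length and D is the fractal dimension. For an ordinary smooth
# curve D = 1 and the length is constant; for D > 1, the measured
# length keeps growing as the ruler shrinks.

def measured_length(ruler_km: float, m: float = 1800.0, d: float = 1.25) -> float:
    """Measured coastline length for a given ruler size, per Richardson."""
    return m * ruler_km ** (1 - d)

for ruler in (200.0, 100.0, 50.0, 25.0):
    print(f"ruler {ruler:6.1f} km -> measured length ~{measured_length(ruler):7.1f} km")
```

Halving the ruler always increases the total; the analogy to estimation is that each finer-grained look at the work reveals crags the coarser look averaged away.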

Then, I thought of this problem again when I attended a talk on Scope Budgeting. There, the general issue is that in Agile, if you write a user story, estimate it, put it in your backlog, and only elaborate it when you pull it back out to implement it, you may find that the size of the story grows at elaboration-time.

According to Liz's fractal estimation rule, that makes complete sense and should be expected. I agree. Whenever I have worked with user proxies to move from general functional requirements to specifics, the estimate has always increased. The amount of work needed to address all the details we discover always grows beyond the original estimate. If we made commitments based on that estimate, the only way to keep them is to prioritize and remove some of the details.

I can think of a few ways to address this problem, if we want to consider it to be a problem:

  1. Just be aware of it and deal with it as needed, which is a lot better than not being aware of it at all.
  2. Start measuring it, so you can factor it into your future estimates: compare estimates to actuals and track how much estimates inflate over time. You'll need to decide whether the time it takes to do this is worth the return, though.
  3. Move toward techniques like Kanban that de-emphasize estimation. Whether or not this works for you will depend on your current situation, but it's always worth questioning whether you're estimating because you truly need to or just because that's what you're used to.
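
For option 2, one simple way to "start measuring it" is to keep (estimate, actual) pairs and derive an average inflation factor to apply to new estimates. The numbers and the mean-of-ratios approach here are illustrative assumptions on my part, not a method prescribed anywhere above:

```python
# Sketch of option 2: derive an inflation factor from past
# estimate-vs-actual pairs and apply it to a new raw estimate.

def inflation_factor(history: list[tuple[float, float]]) -> float:
    """Average of actual/estimate ratios over past work items.

    history is a list of (estimated_days, actual_days) pairs.
    """
    ratios = [actual / estimate for estimate, actual in history]
    return sum(ratios) / len(ratios)


# Hypothetical past items: (estimated_days, actual_days).
history = [(5.0, 8.0), (3.0, 4.5), (8.0, 11.0), (2.0, 3.0)]
factor = inflation_factor(history)

raw_estimate = 6.0
adjusted = raw_estimate * factor
print(f"inflation factor ~{factor:.2f}, adjusted estimate ~{adjusted:.1f} days")
```

Even something this crude makes the fractal growth visible as a number you can watch, which is half the battle; a fancier version might weight recent items more heavily or track the spread of the ratios, not just the mean.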


Update Sept, 2010: