My friend Kam always tells me that I’m too serious on my blog. This is probably a fair assessment, I suppose. For a change, I thought I’d try posting about something a little less serious.
I love my gadgets. I especially love my Apple gadgets. So far, I have owned or currently own:
- 1 MacBook Pro (15″)
- 1 G3 iBook (12″)
- 1 PPC Mac Mini
- 1 2nd Gen iPod
- 1 iPod Mini
- 1 2nd Gen iPod Nano
- 1 4th Gen iPod Nano
- 1 1st Gen iPhone
- 1 iPhone 3G
…and of course…my iPad.
I posted my thoughts about the iPad when it was first announced and well before I had actually purchased one, which I only did a few months ago. And I have to say, the device has worked its way right into that gap between my laptop (where I do “work”) and my iPhone (where I now only do light content consumption).
In my original post, I talked about how the “computer” as we know it will become decentralized, eventually reincarnating itself as multiple network connected devices. Now that I actually own one of these “magical” devices, how is that working out?
That leads me to my experiment. This entire post was written using the following setup:
- iPad (WiFi 32Gb)
- 12 South “Compass” iPad stand
- Apple wireless keyboard
- iPhone 3G
The photos are, admittedly, not as great as they could be – taken on my iPhone 3G, which I’ve always thought had a pretty sub-par camera.
Typing using the Apple wireless keyboard paired with the iPad via Bluetooth saved me some aggravation. Having said that, the 12 South Compass stand also has a configuration you can use that places the iPad at a slight angle, making it better for typing.
As for apps, I used a combination of the WordPress app for iPad and the WordPress web application in mobile Safari. The WordPress app is, I find, a bit on the clunky, buggy side, but it’s ok for drafting a quick post. I also had to use the app to upload my photos into the WordPress media gallery so that I could include them in my post.
Speaking of photos, getting them onto my iPad was perhaps less than ideal – I had to email them to myself. I suppose if I had the iPhone version of the WordPress app installed, I could have added the pictures directly to the media library from there.
So there it is: my “experiment”. Is it something the average non-technically inclined user could do? Maybe, maybe not. Could it be that way in the not too distant future? I think so.
I spend a lot of time reading/thinking about leadership. While I think it’s valuable to think about what makes for *great* leadership, I wonder if it’s equally, if not more, valuable to be able to identify the characteristics of *bad* leadership. Think of it as “leadership anti-patterns”:
Bad leadership is…
…about you, instead of them
…about taking, instead of giving
…about power, instead of empowerment
…about “my vision”, instead of *our vision*
…doing what’s expedient and easy, instead of doing what’s right and hard
…exercising control, instead of relinquishing it
…taking credit, instead of giving credit
What leadership anti-patterns have you observed? Perhaps more importantly, which anti-patterns are you guilty of? :p
One of the under-emphasized aspects of Scrum is the importance of the sprint retrospective. We all know about the stand-ups and the various planning meetings, but not all teams take the retrospective as seriously as they perhaps ought to. And it’s a bit of a shame, really. The retrospective is where the team can make tweaks and adjustments to their process and really take ownership of it. Ownership leads to accountability; accountability leads to results.
If you’re looking for resources, I recommend “Agile Retrospectives” by Diana Larsen and Esther Derby – the exercise I’m about to describe comes straight from Larsen and Derby’s book. If you’re not looking to dive into a full-blown retrospective process (for some teams, it might seem a little bit too “New Age”), here’s a quick exercise you can try next time you hold a sprint retrospective: the “Learning Matrix”.
The basic idea of the exercise is this: mark out a 2×2 grid (on a whiteboard or some flip-chart paper).
The four quadrants (going from top-left, clockwise) are:
- “Smiles” – things we liked
- “Frowns” – things we disliked
- “Bouquets” – appreciations/thank-yous
- “Light bulbs” – ideas/things to try
How it works:
- For 5-10 minutes (depending on the length of the sprint), the members of the team identify items/issues/topics that fall into one of the 4 quadrants, recording each onto a Post-it note.
- After the time period has finished, each team member presents his/her stickies to the team by posting them onto the grid and giving a brief description.
- Close out the exercise by allowing some time for the team to discuss what they’ve discovered.
What to do next:
Try to identify a few key items for the team to focus on for the next sprint. It’s probably important to emphasize that we’re talking about identifying “a few key items”. Depending on the size of the team and the length of sprints, realistically you can’t expect the team to be able to attempt too many of these “process experiments” and be able to focus on what they’re really here to do – deliver great software. The other reason I think it’s a good thing to limit the number of things that we’re tweaking in the process is that it’s just “bad science” – too many variables in play while you’re trying to verify your hypothesis.
One quick and easy exercise to try for prioritizing the items and deciding what the team wants to focus on is the “dot vote” (“Prioritize with Dots” in Agile Retrospectives). For this exercise, each team member is given a number of coloured sticky-dots. The exact number allocated depends on the number of items on the board and how many items you want to select, but pick a number, try it out…tweak it as you see fit. Team members place their sticky-dots on the items they see as being highest priority – they can put all their dots on a single sticky-note or spread them out – it’s entirely up to them. The items with the most votes are the ones the team will focus on next sprint. Ties can be resolved with a team discussion and/or revote.
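For the mechanically minded, the dot-vote tally boils down to a few lines of code. This is just a sketch in Python (not something from Agile Retrospectives), and the item names in the example are invented:

```python
from collections import Counter

def dot_vote(ballots, top_n=2):
    """Tally sticky-dot votes and return the top_n items.

    ballots: one list per team member, naming the items that member
    placed dots on. Repeats are allowed -- a member may stack all
    their dots on a single item.
    """
    tally = Counter(dot for ballot in ballots for dot in ballot)
    # most_common sorts by vote count; ties at the cut-off line are
    # left for a team discussion and/or a revote
    return [item for item, _ in tally.most_common(top_n)]

# Example: three members, three dots each (item names are made up)
ballots = [
    ["faster builds", "faster builds", "pairing"],
    ["faster builds", "flaky tests", "pairing"],
    ["flaky tests", "flaky tests", "faster builds"],
]
print(dot_vote(ballots))  # ['faster builds', 'flaky tests']
```

In practice the whiteboard and stickers do the counting for you, of course, but the sketch makes the rule explicit: highest total wins, regardless of how the dots were spread.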
“Idea Generation” vs “Idea Evaluation”
I think one of the keys to effective brainstorming is to separate the “idea generation” process from the “idea evaluation” process. This can be tough – particularly as most of us in software development are “solution oriented” – but it pays off. This is why I think it’s important for the team to start the exercise with 5-10 minutes of “me time”, rather than launching straight into a free-for-all discussion. I also think it’s important to hold off on the idea evaluation process until each team member has had a chance to present his/her ideas.
One of the skill-sets that I feel often goes under the radar but is hugely important, particularly for Agile teams, is having good facilitation skills. To keep the retrospective on track, there is typically somebody (usually the Scrum Master) facilitating the meeting and keeping the team on-track and focussed on the task at hand. As the facilitator, it’s important to understand the impact your behaviour can have on the process and the team’s level of comfort and trust. For example, you might naturally be a take-charge, “I’ve got a solution for your problem” type of person, but as facilitator, you will need to take care to ensure that your “style” doesn’t negatively impact the team’s ability to have open and honest discussion in a high-trust environment. Another example where this may be an issue is if you’re in a position of “traditional” (read: “org-chart”) leadership – depending on your leadership style and/or how the team relates to you, you may or may not be best suited to the role of facilitator. Bottom line: if you don’t think you can do it (and not everybody can) – find somebody else who can fill that role.
It should be common sense, but it’s worth repeating. If everything is top priority, then nothing is. Be really diligent about focusing on what really matters to your team.
My last month (sprint and a half) has been spent “merging” and “refactoring” the codebase for a supposedly “new” system that is being developed to (eventually) replace a legacy platform. One of the things I’ve come to appreciate is that developers are perfectly capable of writing brand-new code with a lot of the characteristics of legacy systems. In his book, “Working Effectively with Legacy Code”, Michael Feathers defines legacy code as “code without tests”. I’d add that code with bad tests can be almost as bad as code without tests. In fact, because unit tests are code, poorly written tests – tests that are hard to read, hard to maintain, etc. – actually contribute to the technical debt.
If you’re not sure what bad tests look like (and conversely, what good tests look like), I highly recommend you check out Roy Osherove’s “The Art of Unit Testing”. I also recommend checking out Osherove’s video test reviews – educational and entertaining at the same time.
This post isn’t so much about what constitutes a good or bad test – it’s more to highlight the fact that unit tests need to be treated with the same care as production code. If you believe that your production code should be readable – you should expect the same of your unit test code. If you believe that your production code shouldn’t be thousands of lines long – your unit tests shouldn’t be either. Don’t like your production code to have too many responsibilities? Expect the same of your unit test code. You refactor your production code as your requirements change and as your design emerges – do the same for your unit tests.
There is perhaps one exception – or at least something I might consider doing slightly differently. We all know about the “DRY” (don’t repeat yourself) principle. For unit tests, I am inclined to err on the side of readability over full-on adherence to the DRY principle – sometimes I prefer to be more explicit in the “arrange” part of my unit tests (“arrange”, “act”, “assert”). In particular, I tend to do this when there are fakes (mocks or stubs) involved – it’s important that somebody reading the test knows that there are fake objects involved in the test.
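To make that concrete, here’s a minimal sketch of the “explicit arrange” idea. My day-to-day examples would be .NET-flavoured, but Python reads well enough as pseudocode here; the `PriceCalculator`/`rate_service` names are invented for illustration. Notice that the stub setup is repeated in both tests on purpose – a reader can see at a glance that a fake is in play, without chasing down a shared setup method:

```python
import unittest
from unittest.mock import Mock

class PriceCalculator:
    """Toy class under test: applies a rate fetched from a service."""
    def __init__(self, rate_service):
        self.rate_service = rate_service

    def total(self, amount):
        return amount * self.rate_service.get_rate()

class PriceCalculatorTests(unittest.TestCase):
    def test_total_applies_rate(self):
        # Arrange -- deliberately explicit: the stubbed rate is visible
        # right here, not hidden away in a setup method
        stub_rates = Mock()
        stub_rates.get_rate.return_value = 1.1
        calc = PriceCalculator(stub_rates)
        # Act
        result = calc.total(100)
        # Assert
        self.assertAlmostEqual(result, 110.0)

    def test_total_with_zero_rate(self):
        # Arrange (repeated on purpose -- readability over DRY)
        stub_rates = Mock()
        stub_rates.get_rate.return_value = 0
        calc = PriceCalculator(stub_rates)
        # Act / Assert
        self.assertEqual(calc.total(100), 0)
```

The duplication costs a few lines, but each test now tells its whole story top to bottom.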
Some code hygiene issues that are specific to unit test code (and might not necessarily apply to production code)…
- Separate integration tests from unit tests – this is something you don’t generally have to worry about with normal code, but when you have a test suite that you want people to actually run – especially when you have any reasonable number of tests – you want to allow people to run their test suites quickly (for example, Friday half an hour before beer o’clock). Separating integration tests from true unit tests will allow people to identify which tests are likely to be pretty quick and painless to run and which tests could potentially take a while. This is also useful if you find your CI build getting painfully slow.
- Keep the number of ignored tests to a minimum – ignored tests are like commented-out code. At minimum, add a comment (e.g. NUnit’s Ignore attribute allows you to add a reason) so that people know why the test is ignored. Generally, I don’t like to have too many ignored tests kicking around – far from being the great communication tool that well-written unit tests can be, ignored tests just add noise.
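Both points above can be sketched in a few lines. Again, Python stands in for whatever your team actually uses (in NUnit you’d reach for categories and the Ignore attribute instead); the class names and the ticket reference are invented:

```python
import unittest

class ParserUnitTests(unittest.TestCase):
    """Fast, isolated tests -- safe to run half an hour before beer o'clock."""
    def test_split_of_empty_string(self):
        self.assertEqual("".split(","), [""])

class DatabaseIntegrationTests(unittest.TestCase):
    """Slow tests that hit real infrastructure, kept in their own class
    (or module) so they can be excluded from the quick run."""
    @unittest.skip("needs a live DB; see ticket ABC-123 (hypothetical)")
    def test_round_trip(self):
        self.fail("would talk to a real database here")

def quick_suite():
    # The quick run loads unit tests only; CI can still run everything
    return unittest.defaultTestLoader.loadTestsFromTestCase(ParserUnitTests)
```

The key detail is the reason string on the skip: an ignored test with no explanation is just noise, but one with a reason at least points the next reader somewhere.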
I like to think of myself as being pretty anti-dogma when it comes to Agile methodologies. I think one of the most common ways that a team can “fail” in their adoption of Agile is to become fixated on strictly following what is and isn’t “Agile”.
One topic that has come up repeatedly for me lately is the choice between using Story Points for sizing stories in our backlog versus using “Ideal developer days”. On this topic, I have a very strong opinion, possibly bordering on “being that Agile-Nazi guy”, but I stand by it.
There are a number of ways that Scrum teams can size the items in their backlog. To be clear here, we’re talking about relative sizing – something you’d do in a release planning or backlog grooming type of meeting – as opposed to iteration planning, where you’d be tasking out a story, typically in hours. Some teams use t-shirt sizes, some teams use arbitrary “story points” (which have no units), etc. Other teams use “ideal days”. The teams I have been on lately fall into this camp. I have issues with this.
Units of time, and the associated baggage
One of the problems with sizing stories in ideal days is that we’ve become so accustomed to estimating our tasks (and being held accountable for them) in units of time, that I think we tend to slip into that typical dev mindset where we immediately switch into task estimating mode. If we’re not careful, the release planning meeting can become a tasking meeting. Related to this, though I have no evidence to back this up – I think there is a certain level of baggage that comes along with estimating in units of time. When you’ve been burned by a bad estimate, it’s easy to estimate based on the size of the ass that needs covering rather than the size of the actual story.
Mike Cohn’s “Agile Estimating and Planning” has an entire chapter dedicated to the debate over points vs. days. Cohn discusses some excellent reasons why a pure points-based approach is preferable.
Agile teams are often cross-functional. On my team we have, in addition to 3 developers, a QA, an interaction designer, and a visual designer. Now I’m not suggesting that the estimates won’t be driven by the developers most of the time, but there are a couple of reasons why including the rest of the team in the estimation process is helpful. First of all, it builds a more cohesive team – sizing things in “developer days” seems to imply that other people don’t have anything in particular to contribute to the process. Secondly, and potentially more importantly, you may miss out on crucial feedback from the other members of the team. A feature may be relatively easy for a developer to implement, but may take a QA a large amount of effort to properly test.
Points-based estimates don’t decay
I think this is one of the biggest reasons to go with a points based system. By saying that points-based estimates don’t decay, we are saying that for the most part, sizing estimates based on points don’t change over time. The major result of this is that your velocity will continue to be a meaningful measure of your team’s ability to deliver. It’s probably best to illustrate this by way of an example.
Suppose you have two teams that are essentially identical. Identical in size and ability. Their backlogs are also identical. Team A uses an ideal days approach to sizing their stories. Team B uses non-time-based story points.
Both teams start off their project with the same number of stories scheduled, and both teams manage to successfully deliver on their commitments.
Fast forward to a few sprints from now. Both teams have improved on their ability to deliver. Perhaps they are working with a new technology or architecture that takes some time to get fully comfortable with. Or it could be increased productivity through the use of new tools. Or perhaps it’s just a result of the team learning to work better together.
Compare the velocities of the two teams. Despite the fact that both teams have improved their ability to deliver (as measured by their velocities), Team B’s improvement is much easier to see. The shifting of the scales of measurement for the team using ideal days makes the velocity measurement a much less useful measure.
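The arithmetic behind the example is worth spelling out. The numbers below are entirely hypothetical (a 10-day sprint, a team that doubles its real throughput), but they show the mechanism: Team A re-estimates in ideal days as it gets faster, so its velocity stays pinned; Team B’s unit of measure never moves, so its velocity tracks the real improvement:

```python
# Hypothetical numbers to illustrate estimate decay over a 10-day sprint.
# Assume the team gets twice as fast at everything.

stories_per_sprint_before = 5    # each story took 2 actual days
stories_per_sprint_after = 10    # each story now takes 1 actual day

# Team B sizes in (time-free) points: every story is 3 points, and that
# sizing never changes, so velocity tracks real throughput.
points_velocity_before = stories_per_sprint_before * 3   # 15 points/sprint
points_velocity_after = stories_per_sprint_after * 3     # 30 points/sprint

# Team A sizes in ideal days and re-estimates as it speeds up: a story
# that was "2 ideal days" is now "1 ideal day", so velocity stays pinned
# at 10 ideal days per sprint despite doubled output.
days_velocity_before = stories_per_sprint_before * 2     # 10 days/sprint
days_velocity_after = stories_per_sprint_after * 1       # 10 days/sprint

print(points_velocity_after / points_velocity_before)  # 2.0 -- improvement visible
print(days_velocity_after / days_velocity_before)      # 1.0 -- improvement hidden
```

Same teams, same output, but only one chart shows the trend you actually care about.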
Well it certainly has been a busy month (ok, over a month) since my last post. New project went live, the Vancouver 2010 Winter Olympics – and to top it all off, a new full time position. It was definitely nice to have a little down time to do some reading and thinking.
I recently read an article on HBR outlining a list of “10 Breakthrough Ideas for 2010” (subscription required for the full article, but you’ll get the gist of it from the sample). On the topic of “What Really Motivates Workers”, the authors discussed the results of a multi-year study where they tracked the day-to-day motivation levels of knowledge workers. The findings, while potentially surprising to leaders and managers, shouldn’t be that surprising to most knowledge workers. Whereas a survey of leaders and managers identified “Recognition of good work” as the number one factor for motivating workers, the survey of the actual workers showed that the factor that had the biggest (positive) influence on workers’ emotional state was this: “Progress”.
> On days when workers have the sense they’re making headway in their jobs, or when they receive support that helps them overcome obstacles, their emotions are most positive and their drive to succeed is at its peak. On days when they feel they are spinning their wheels or encountering roadblocks to meaningful accomplishment, their moods and motivation are lowest.
Call it apophenia, but as always, I seem to find a way to relate things to each other – in this case, Agile software development.
Technical Practices (i.e. “stuff that devs do”)
In a recent interview (actually the first technical interview for my current position), I was asked: “What do you like about TDD?”. I can’t recall the exact words I used, but I said something along the lines of:
> I enjoy the rhythm, the groove you get into when you’re writing a few lines of test code, watching it fail, writing your code, watching it pass. The rhythm of Red, Green, Refactor is very satisfying.
How does this relate to “progress”? Well when I do TDD, typically I work to implement the smallest bit of functionality I possibly can. I might implement the “happy path” on my first pass, an edge case next pass, an error case after that, etc. Depending on the complexity of what I’m implementing, I’ll go through the Red, Green, Refactor cycle every 5-10 minutes. Every time I go from Red to Green – that’s progress. Every time I refactor my code and the tests pass – that’s progress. Every time I check in my code, the continuous integration build runs, incorporating changes from the rest of the team, all the tests run – that’s progress.
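One pass through that cycle might look like the sketch below – Python rather than my usual .NET, and the `slugify` helper is invented for the example. The test for the happy path comes first (Red), just enough code follows to make it pass (Green), and the tidying happens while the tests stay green (Refactor):

```python
import re
import unittest

# Step 1 (Red): write the smallest failing test first.
class SlugifyTests(unittest.TestCase):
    def test_happy_path(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    # A later pass through the cycle adds the next smallest behaviour:
    def test_strips_punctuation(self):
        self.assertEqual(slugify("Red, Green, Refactor!"), "red-green-refactor")

# Step 2 (Green): just enough code to pass.
# Step 3 (Refactor): tidy it up while the tests keep passing.
def slugify(text):
    # Lower-case, then keep only runs of letters/digits, joined by hyphens
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

Each little loop – a new test going from Red to Green – is one of those visible increments of progress.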
Contrast this to the “other way”. I spend a few hours writing. I might do some testing by clicking around the application – seems to be working. A day later, another developer pulls down the source code – it won’t build. You have to go back and try to figure out what happened. That’s *not* progress. So we eventually fix the code so that it builds. A week later (if you’re lucky), the QA is doing some manual testing on the application – the application blows up due to an unhandled exception. Now you have to think back to what you did a week ago. That’s *not* progress.
You get the idea.
Team-level Practices (i.e. “stuff the entire team is supposed to do”)
One of the (in my mind) more important things Agile teams need to do when planning their iterations is defining what it means to be “done”. “Done” means we have met the acceptance criteria (which means that you have to first define them). “Done” means that when you’re done with a user story or feature, it’s shippable. It doesn’t require “tidying up” later. It doesn’t require a “stabilization phase”. It’s done and you don’t come back to it. If the customer wants to change something, well that’s another story – something to be prioritized and scheduled just like any other story. But at the end of that iteration, it’s shippable – no ifs, ands, or buts.
In other words, the team is always making forward progress. Now, of course, if the client keeps requesting changes that’s not forward progress and quite rightly would be demotivating to a team (“I want that button green….now I want it blue….now I want it red…”). My personal hunch is that requiring the customer to place the change request and prioritize their other requirements around these changes will eventually change behaviour simply by calling attention to the requirements churn.
“Working software over comprehensive documentation”
The preference in Agile methodologies for working software over documentation (note the wording: “working software over documentation” – it doesn’t mean you never do documentation) is another nod to the importance of progress. Few people regard spending months and months doing big up-front analysis and design, writing reams and reams of documentation, as “progress”. Shippable software – working software – is progress.
The “Progress lens”
So next time you’re feeling really positive (at work or in life in general) – think about what you’re doing through the “progress” lens. Maybe you’ve fixed a bug that’s been…erm…bugging you for a couple of days. Or you’ve finally got your head wrapped around an idea that you’ve had trouble grasping. Or you’ve just run a new personal best 10k – not a world-class time by any stretch of the imagination, but a personal best nonetheless. Or maybe today was just a little bit less crappy than yesterday – that’s still progress.
Today was the long-awaited announcement of the much anticipated Apple tablet, which by now is known by its WTF-inducing name: iPad.
Weird name aside, so far it seems like the reception has been maybe a little less than what one would have expected given the buzz surrounding the device. As of this afternoon, “iPad” wasn’t a top trend on Twitter, whereas “iTampon” was, apparently.
At the risk of having my Apple fanboy credentials revoked, I will admit to not being “wowed” by the device. This is clearly a V1.0 device. There are a lot of things that, on the surface, the iPad is missing. It doesn’t have any slots for expansion of the onboard memory, for example. It doesn’t have a camera (front or back). Doesn’t have many input/output ports of any kind (e.g. USB, HDMI). Gizmodo has a list of “8 Things That Suck About the iPad” – and I’m sure there are waaaay more than 8 things that suck.
I wonder, however, how much of this is that people are trying to figure out what this device is through the lens of what computing is today instead of what computing will be tomorrow (ok, not literally tomorrow – I mean “in the future”).
“Ceci n’est pas un PC”
Today, we have a lot of “general purpose” computing devices. Laptops, desktops, netbooks, etc. I call them “general purpose” because they can for the most part be configured to perform many different types of tasks in a number of different usage contexts. Take the typical laptop, like the MacBook Pro that I’m writing this post on – it has a display, a camera, a couple USB ports, a FireWire port, DisplayPort, DVI port, mic and headphone jack. With the right software, my “general purpose” computing device can – take a picture/video, output a video to a TV, output audio, record audio, etc. Most of the time the laptop does a decent enough job on any given one of these tasks. But what my general purpose computing device can never really give me is the rest of the experience. It isn’t the right form factor for taking pictures. Nor is it the best form factor for hooking up to my TV and watching a movie.
What I’m getting at is that perhaps we shouldn’t be comparing the iPad to a general purpose computing device – like a laptop or netbook – but rather with “targeted purpose” computing devices. These are devices that do a few things really well – and have form factors optimized for those tasks – but are not intended to do all things “sorta” well.
“The network is the computer” – redux
So let’s say I take a picture with my digital camera. And now I want to look at this picture on my laptop. Today, I’d take the memory card out of the camera, dig out the card reader, plug it into my laptop, download the image, etc. “Tomorrow”, maybe I’ll take a picture and it will automatically be made available to other devices via WiFi or Bluetooth or by direct upload to Cloud storage (I always seem to find some way to squeeze a mention of Cloud computing in). You can already do this with something like the Eye-Fi family of products.
Ultimately I think this trend might be just another sign that the network really is “becoming the computer” – as per Sun Microsystems’ slogan from the mid-80s that “the network is the computer”. These “specific purpose” computing devices are just components in a networked computer. The same way I might plug a web cam into my PC today and use it as an image capture device, we are getting to the point where we are able to “plug in” a networked digital camera. Increasingly, our computing is being spread across multiple devices. Sometimes we might choose a particular device for its form factor. Other times, we might choose a device because it’s appropriate for our context. This post, for example, started out on my iPhone (when I first got the idea), the bulk of it I wrote on my MacBook Pro laptop, and now I’m finishing it up on my netbook (ok, in this case it is because my MBP battery is crapping out).
Computing in the future will be networked, it will be ubiquitous, and it won’t look anything like it does today. Going forward, we should expect to see more computing devices that are hard to put an existing label or classification on (e.g. PC, laptop, smart phone). Whether the iPad will be regarded in the technological “fossil record” as being a key stage in the evolution of computing or as “just another gadget”, only time will tell.