08 October 2013

Solving the Legacy Code Problem

What is Legacy Code?


In his book, Working Effectively with Legacy Code, Michael Feathers defines legacy code very simply: Code without tests. He isn't trying to be inflammatory, or suggest that you are a bad person because you have written legacy code (perhaps yesterday). He's drawing a clear boundary between two types of code. James Shore (co-author of The Art of Agile Development) has an equivalent definition:
"Code we're afraid to change."
Simply put, if it has tests that will prevent us from breaking behavior while adding new behavior or altering design, we can proceed with confidence.  If not, we feel appropriately uneasy.

Of course, if you plan to never release a new version of your code, you won't need tests.  In my almost 40 years of programming, I've not seen that happen even once.  Code needs to change. Ergo, code needs tests.

Build Your Team's Safety Net


Your team may want to adopt this metaphor: Think of your whole suite of automated regression tests as a safety net that the team builds and maintains to keep themselves from damaging all prior investment in behavior.

If it takes two hours to run them all, you'll run them once per day, and if they catch something, you know that someone on the team broke something within the last 24 hours.  If it takes one hour, you'll run them twice per day (once at lunch) and you've narrowed down the time by half.  That's probably better than 80% of the teams in the world, but it can be even better. 

I'll give you a real-world example: I worked on a life-critical application in 2002.  After two years of development, this product had a comprehensive suite of 17,000 tests.  They ran in less than 15 minutes.  That team often took on new developers, and we gave them this simple message:  "You break something, someone may die.  But you have this safety net.  You can make any change you believe is appropriate, but don't commit the change until you run the tests." In the time it took to walk to the cafe and buy a latte, I would know whether or not I was making a mistake that could cause someone to die.

We made changes up to a day before going live.

It can be that good.  Of course, it takes effort to "pay down" the legacy code debt (and a lot of mock/fake objects…another topic for another day.)  But the longer you wait, the worse the debt becomes.

Characterization Tests


The product mentioned above was developed from the ground up with unit tests written by a team who embraced unit-test-level Test-Driven Development (TDD).  Nice work if you can get it.  The rest of the world faces legacy code debt. 

You don't have to pay it all down before you proceed. In fact, you mustn't.  You have to be thoughtful about selecting high-risk areas:  An area of code that breaks frequently, or is changed frequently, should first be "covered" with "characterization" tests.

"Characterization test" is not defined by any particular type of tool. We often use the unit-testing framework, but we're not limited to it.

Like unit-tests, these tests must be deterministic, independent, and automated. Unlike unit-tests, they aim to "cover" the greatest amount of system behavior with the fewest tests and the least effort. When you write these tests, you are not bug-hunting, but rather "characterizing" or locking down existing behavior, for better or worse. It's tempting to fix production bugs as you go, but fixing a bug that's escaped into the wild could introduce another bug, or break a hidden "feature" that a customer has come to rely on. It's fine to note or log the bug, but your characterization test should pass when the bug manifests itself. Name the test with a bug description or ticket number, so the team can easily find it later.
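Here's a minimal sketch of what that might look like with JUnit. Everything in it is invented for illustration (the InvoiceCalculator, its truncation bug, and ticket #4711 are all hypothetical); the point is that the assertion captures what the code does today, not what the spec says it should do:

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class InvoiceCalculatorCharacterizationTest {

    // Stand-in for the legacy class under test: it truncates fractional
    // cents instead of rounding them.
    static class InvoiceCalculator {
        double totalWithDiscount(double price, double discount) {
            return Math.floor(price * (1 - discount) * 100) / 100;
        }
    }

    // Named for the (hypothetical) ticket so the team can find it later.
    @Test
    public void totalWithDiscount_bug4711_truncatesInsteadOfRounding() {
        InvoiceCalculator calc = new InvoiceCalculator();

        // The spec says a 33% discount on $2.50 should round to $1.68,
        // but the production code truncates to $1.67. We assert the
        // behavior the system HAS, not the behavior we want, so this
        // test passes while the bug manifests itself.
        assertEquals(1.67, calc.totalWithDiscount(2.50, 0.33), 0.001);
    }
}
```

If someone later fixes #4711 test-first, this test will fail loudly, prompting a deliberate update rather than a silent regression.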

Why not fix the production defect? Because the point of creating this safety net is to give you the freedom to refactor. You may be refactoring so you can add new behavior more easily, or even so you can fix a bug more easily later, but refactoring and adding behavior are two distinct activities. Using TDD, they are two separate steps in the cycle.  (Aside:  Fixing a bug is effectively adding new behavior, because the system wasn't actually behaving the way we expected. You can use TDD for that.)

The unit-testing framework and developer IDE usually give us the most flexibility, plus the ability to mock dependencies and use built-in refactorings for safety. But in order to lock down large swaths of behavior, teams should think creatively. I've worked with teams who compared whole HTML reports, JPEG images, or database tables; or who have rerouted standard input and output streams. The nature of the product and the size of the mess may dictate the best approach.
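For example, comparing whole artifacts is sometimes called a "Golden Master" test. Here's a sketch, assuming JUnit and a hypothetical ReportGenerator; on the first run it captures today's output as the approved copy (which you inspect once by eye), and every run after that compares against it:

```java
import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import org.junit.Test;

public class MonthlyReportGoldenMasterTest {

    @Test
    public void reportMatchesApprovedGoldenMaster() throws Exception {
        String actual = new ReportGenerator().renderHtml("2013-09");

        Path approved = Paths.get("target/approvals/monthly-report.approved.html");
        if (!Files.exists(approved)) {
            // First run: capture today's behavior as the Golden Master.
            Files.createDirectories(approved.getParent());
            Files.write(approved, actual.getBytes("UTF-8"));
        }
        String expected = new String(Files.readAllBytes(approved), "UTF-8");

        // Any diff means behavior changed: inspect it, then either fix
        // the regression or deliberately re-approve the new output.
        assertEquals(expected, actual);
    }

    // Stand-in so the sketch compiles; the real generator is your legacy code.
    static class ReportGenerator {
        String renderHtml(String month) {
            return "<html><body>Report for " + month + "</body></html>";
        }
    }
}
```

One test, one assertion, and an entire report's worth of behavior under the tarp.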

And don't aim for a duration target, e.g., "15 minute test runs." Teams sometimes respond to arbitrary targets by sabotaging their own future in order to make the numbers.  For example, deleting existing tests! Rather, aim for improvement by looking for the greatest delay in testing.  Weigh a "huge refactoring" of the persistence layer against using an in-memory database.  There is no in-memory version of your database software?  Use a solid-state drive. Developers are naturally creative problem-solvers, particularly when they collaborate.
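On the in-memory database idea: here's a sketch of the kind of substitution a team might weigh, using the H2 in-memory database (a real tool, though the schema and data here are invented, and H2 must be on the test classpath). The JDBC code runs unchanged; only the connection URL differs between the slow shared database and the fast in-memory one:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InMemoryDatabaseSketch {
    public static void main(String[] args) throws Exception {
        // DB_CLOSE_DELAY=-1 keeps the in-memory DB alive for the whole JVM,
        // so multiple tests can share the schema set up here.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1", "sa", "")) {
            try (Statement stmt = conn.createStatement()) {
                stmt.execute("CREATE TABLE customers(id INT, name VARCHAR(50))");
                stmt.execute("INSERT INTO customers VALUES (1, 'Ada')");
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT name FROM customers WHERE id = 1")) {
                    rs.next();
                    System.out.println(rs.getString("name")); // prints: Ada
                }
            }
        }
    }
}
```

Swapping a URL in the test configuration is a day or two of work, not a quarter-long refactoring. That's exactly the kind of trade-off worth weighing.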

Resistance is Futile


Code written without tests often resists testing. When you write unit-tests test-first, they tend to be tiny, isolated, and simple (once you get the hang of it). It's actually easier and faster to write them along with the code using TDD, even though you end up with more of them. Interestingly, if you write your unit-tests after the code has been written, you are really writing characterization tests: They're harder to write, they're often a compromise that tests a number of behaviors at once, and they often give you the bad news that you made a mistake while coding. This is why most developers hate writing "unit-tests" (me included). We were doing it backwards.

That may make writing characterization tests seem unbearably painful, but it's really not.  Once you collect a handful of simple, "surgical refactorings" for creating testable entry-points into your behaviors, the legacy code problem becomes a bit of an archeological expedition: Find the important behaviors, carefully expose them, then cover them with a protective tarp.  It can be rewarding all by itself. But the big payoff comes later, when it's time to change something.
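As a taste of one such surgical refactoring, here's "Subclass and Override Method" from Feathers's book, sketched with hypothetical names. The production class gains a protected seam; the test subclass cuts the wire:

```java
// Production class. All names here are invented for illustration.
public class PaymentProcessor {

    public boolean process(double amount) {
        if (amount <= 0) {
            return false;        // behavior we want to get under test
        }
        return charge(amount);   // the side effect now sits behind a seam
    }

    // Extracted from process() so a test can override it.
    // Production behavior is unchanged.
    protected boolean charge(double amount) {
        throw new IllegalStateException("talks to the real payment gateway");
    }
}

// In the test code, override the seam to record the call instead:
class TestablePaymentProcessor extends PaymentProcessor {
    boolean charged = false;

    @Override
    protected boolean charge(double amount) {
        charged = true;
        return true;
    }
}
```

A few small, mechanical moves like this, and a previously untestable behavior has a safe entry-point.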

01 October 2013

Yet Another Meeting?! How to Make the Daily Scrum (or Stand-Up) Meeting Work for Your Team

Purpose


Let's cut to the chase:  No one likes meetings.  A status meeting every day is enough to drive you crazy.

The Stand-Up isn't meant to be a status meeting.  Status may get communicated, and extracted by the ScrumMaster (SM) if he or she is paying attention, but that's not the purpose of this meeting.

The Stand-Up is a daily tactical planning meeting: Of, by, and for the team.  For teams who need to collaborate (and I would suggest that there is no other kind, but that's a digression), this brief daily huddle creates visibility into what is needed by each team member today.

The Three Questions


You may know the three questions: (1) What did I do yesterday? (2) What am I doing today? (3) What are my impediments?

Each person takes turns answering these three.  Sounds like a status meeting, right?

Let's reword them:

1. What did I provide or learn yesterday that the team needs to know about?
2. What do I hope to accomplish today and whom do I need to collaborate with to get that done?
3. What's preventing me from doing my best professional work?

Sorry I'm Late, But Not Real Sorry


When people are consistently late to, or absent from, this meeting, it's because the meeting has lost its value.  Rather than creating some form of punishment/reward system to attract people, or simply canceling the meeting, we need to uncover the root problem, and take action to solve that.

Is the meeting at a bad time?  Try moving it. A lot of teams use 11:45am.  Usually folks are available then, and it's nearly guaranteed not to run long. Many follow-up discussions can happen during lunch.

Are people reporting task status to the SM?  Try holding a no-electronics huddle. (Update Jira before or after.) Can't remember what you did yesterday?  Take a minute or two prior to the huddle to write down what happened since the previous stand-up, and what you will be working on today.  Got nothing done, or your task is lingering in the "In Progress" state for more than a day? Think about what's impeding you, or with whom you'd like to collaborate to get that task completed. In other words, stop expecting yourself to shoulder all blame, or heroically accomplish each and every task, alone.

Someone shows up consistently a few minutes late?  I recommend starting the meeting on-time, every time.  If they miss something, they can ask someone to get them caught up afterwards. Besides, there's another stand-up tomorrow.  The habitually late person is responsible for finding his own strategy for on-time arrival.  (I was that person many decades ago, in high school, so my mother introduced me to black coffee. On-time, and my grades improved!  Your mileage may vary.)

Particularly with this team meeting, it's important that one individual cannot delay everyone else. This applies to your SM, PO, CEO, and POTUS. It's not their meeting. They are always welcome to schedule something afterwards (and perhaps buy the team lunch?)

This Won't Hurt a Bit


If the team feels that some part of their day-to-day work is uncomfortable or unnatural, ask them whether they would like to invite an agile coach in to observe and make some recommendations.

A coach is not a manager: More like a "team doctor." Having someone intrude and observe may be uncomfortable, too, but the goal is the overall health of the team.

The daily stand-up contains much more than task progress: it often reveals the attitude of the whole team and hints at systemic dysfunctions. If your stand-ups are dull, too long, frustrating, mechanical, or contentious, then let the coach see all that. Good coaches don't blame individuals for a systemic symptom: The coach is there to identify challenges and help the team find an agreeable set of adjustments. We want to see the team create its own way of working that is professional, productive, supportive, exciting, and sane.

27 March 2013

Yet Another Blog Post About Women in Software

Why say more?

Last week, the Internet burst forth with wild, bizarre vitriol that one dear friend of mine called "Donglegate."  Since I wasn't in attendance at this conference, and since I had exhausted myself chatting with folks about it on Twitter, I didn't think I'd have anything further to say.

But then I saw another follow-up post, this one by Bob Martin, and I felt frustrated. It's not bad: It does come to a reasonably healthy conclusion that "we need to make the women feel welcome." I agree, but still there is something subtly off-kilter in his use of "we." "We need them," he tells us.

Now, I hope never to live in a world that is so politically correct that we're all walking on eggshells, wary of a lawsuit because we may say something that someone else construes as an offense. Also, I'm not one who subscribes to some form of "blindness" (gender-blindness, color-blindness) as policy in the workplace, resulting in some utopian collaborative environment. Yet I'm also hoping we can do better at preventing archaic bigotry from creeping back into our industry, and our society.

My Computer Science/Engineering college friends, who mostly graduated around 1989, get together for a reunion every five years. I'd say that about 20-30% of them are women. The last reunion, in 2009, included a tour given by the current ACM student-chapter president. She told us that only about 1% of her Computer Science graduating class were women, and that this was in line with a downward trend across the country. That's way down from when I was ACM student-chapter president, 25 years earlier.

That's just insane.  Why has the industry changed for the worse? 20-30% representation during the Reagan era was bad enough, but 1% in the Obama age is ridiculous.

This is a subset of my dearest old college friends. This looks almost like a Good Old (White) Boys Club. Yet I can tell you there's a lot of diversity represented in this picture alone: 20% women, 20% gay, 10% Buddhist, 20% atheist, 20% (or more) Christian, 10% Jewish, and about 60% overweight. ;-)

Why "We" don't need "Them"

Apparently even President Obama has generated some bad press recently by saying "...when our wives, mothers, and daughters can live their lives..."  Granted, I take this quote out of context to emphasize it, but there's that "us and them" wording again.

The trouble with Bob Martin's and President Obama's choice of wording is that they define the uniqueness of women as it relates to "us" (presumably, us men). The basis is our actual physical differences. And, yes, men differ from women (and for some issues that's very very important), but those differences are entirely orthogonal to the issues at hand: Equal assumption of inherent technical abilities; equality of employment and pay, opportunity and training, safety, respect, dignity, and freedom.

Gender, race, ethnicity, sexual orientation, income and affluence, religious affiliation: None of these equate to better programming skills!  All things being fair and equal, given a randomly selected cross-section of the population of software developers, the percentages should reflect the same approximate ratios as the general public.  Approximately 51% of the U.S. population is female.  So something is obviously wrong with a college CSE program that has only 1% female graduates. I don't think this is an NAU-only issue, or an Arizona-only issue. And whether or not you believe the government could or should intervene, you will hopefully agree that there is something off-kilter here. What Would Deming Do?

Teams and organizations need diversity.  Not to meet some government quota, or to make one individual happy, but in order to have important input from people who think differently.

If you read that and think "Oh, Rob is saying 'women's intuition' is critical for success" you are not grokking me, at all.  You would then be falling back on categorization based on attributes (real or anecdotal) that are orthogonal to the skills required to be a great programmer.  I'm not saying that women will notice X or Buddhists will notice Y.  I don't know what they'll see, and you won't know, until it happens.

We cannot predict whose intuition or intellect or experience will shine on any particular day.  A group of great programmers with very little diversity is missing out on that chance spark that occurs when unique people in dialog spot something pivotal (a feature, a design option, a technology, or a blending of technologies) that would have otherwise gone unnoticed. 

We don't need them!  We need us! All the diversity inherent in our U.S. culture is there, ready to be used in synthesis.  In science, in technology, in business, in politics:  Multiple, diverse viewpoints give us more perspectives, and often provide the insight to try something that no one else has thought of yet.

What can we do?

Here are some things I try to encourage in myself, and in my clients' workplaces. "Try" is an important word here: The perfect is the enemy of the good. I suggest we all strive to do better by trying these out for 30 days. We all make the occasional mistake.

Check your compass.

Remember, you choose your actions.  Do they really align with what you believe represents appropriate moral action?  Do you hold a double-standard for others?  (Well, cut it out.)

The government makes laws, but doesn't program your Moral Compass. My Moral Compass may look very different from yours, and I may respond to an ethical dilemma differently than you would. (That does not mean that mine is broken, nor that I want to try yours out, thank you.)  Despite our differences, you may notice many similarities.  I've noticed that most every Moral Compass seems to contain The Golden Rule.  (Mine has it written this way: "Don't do to others what you'd rather they didn't do to you. If you do, they will!")

Notice your audience(s).

Be aware of, and take care of, your audiences, both intended and circumstantial.  Can the children at the next table in a crowded restaurant hear your lewd joke? I may laugh, but I'm also going to feel embarrassed for you, and uncomfortable for their parents.

(I'm no prig:  I have learned to enjoy Family Guy, because I'm often surprised by just how far the writers will go.  Yeah, I do find myself repeating the mantra "It's just a cartoon. It's just a cartoon...")

Laughter is mostly a reaction to surprise. But trust me:  You are not that funny.  I am not that funny. Bob Martin is a great public speaker, and he is not that funny.

Besides, people can turn off the TV, but they may not be able to (or want to) escape a conference keynote.

See bigotry as a systemic constraint.

To assume that a person or group is somehow wrong, lesser, or incompetent because of something they are or believe (versus, say, how they act towards others) is counterproductive. Think of it as a constraint to the flow of value. When the team looks at it that way, it removes some of the political and cultural discomfort that comes with talking about emotional topics.  You're simply addressing it as you would any other systemic waste.

Celebrate Diversity!

Like the popular bumper-sticker suggests, but perhaps expanding this idea beyond the person in the car behind you. ;-)

By "celebrate" I'm not suggesting you throw a Cultural-Awareness party each month and force the "minorities" to stand up and talk about "their" culture.  (I think that was the plot of a sitcom episode...or was it a client???) I'm talking about personal, quiet (and heartfelt) celebration: Delight in the diversity of your teams.  Hire based on necessary skills, team-fit, and a well-rounded education or set of life-experiences. Alter the working environment to be more inviting (or at least less uncomfortable) for various groups.  For example, nursing rooms: How many large corporate offices still don't have daycare or nursing rooms?!

Give your System 2 (analytic) brain a chance to catch up.

Notice actual differences in people where they appear, and quietly acknowledge any personal discomfort. Notice when your intuitive System 1 mind falls back onto stereotypes as a poor approximation for getting to know someone. Then do something about it:  Reach out and communicate with those who differ from you. Exercise your tolerance muscle.

Listen with tolerance.

Every peaceful encounter with someone who is different is an opportunity to learn. You learn about them, they learn about you. You learn from them, they learn from you. Trust that the extra effort to get to know someone on your team will eventually "return dividends," either in business or in life.

You don't have to change your mind about your long-held beliefs simply because the other person believes differently, but encourage yourself to imagine what it would mean if their beliefs made sense. The goal isn't to prove them wrong, even in your head. It's to appreciate the astounding variations that exist in our human minds and our cultures.

It's scary, but tolerance is a key ingredient for peaceful co-existence with neighbors, across the ocean, across the street, across the political aisle, or across the gender-divide.

Chill.

Try to be less easily offended. Imagine how much easier life would be if folks were less prepared to be offended by the words of others, less anxious to be the next victim, less interested in wealth through litigation, less conditioned to "save face."

When someone says something that is insulting to you, consider:  Do they really know you? Did they know you'd be offended?

Speak up.

You don't have to bottle up your response to an insult.  If you're offended, let them know:  "Your comment makes me uncomfortable. Perhaps you weren't aware that others can hear you?"

Then, at least the first time, give them the benefit of the doubt. "Escalation" is not the road to a peaceful resolution. (It's the plot of every "reality" show.)

Caveat: If you feel sincerely threatened, walk away, remove yourself from immediate harm, and contact a trusted third-party.

Two tweets for dessert

Soon after "donglegate" I "tweeted" the following suggestion:

[embedded tweet]

And Liz Keogh tweeted the following:

[embedded tweet]

Update 08 Oct 2014 (in honor of Grace Hopper):
 
Tech careers are just "not interesting to women," some conservative straight white males have told me. It just occurred to me that I have first-person access to data that suggests they're relying on a fallacious argument. The data is right here...in my brain.

I teach challenging high-tech courses to hundreds of developers each year, in different regions, companies, and corporate cultures.  I notice some trends in the population of participants.

Alas, I haven't been tracking this data in a spreadsheet (maybe I'll start!), but I have noticed that a very low percentage of participants in my technical courses are women.   (At most 10%.)

Okay, let me say for the moment that I buy the argument (I don't, by the way) that perhaps biological gender differences really are why women just aren't that interested in software (because, you know, being able to sit on your fat ass writing code was a reproductive advantage for men out hunting on the savannah...how?!)

So, why are there even fewer (2-3%) black men in my tech courses?  See, we can't blame that on biology without speaking from raw bigotry, now can we? (Because science.) Well, misogyny is bigotry, too.  The "other" (numerous equally smart, talented folks who are different from the self-reinforcing majority white male tech population at many start-ups and corporations across the country) is being discouraged, likely in many subtle ways.

02 January 2013

The Sportscar Metaphor: TDD, ATDD, and BDD Explained

 

Your Mission, Should You Accept...


You've been tasked with building a sports car.  Not just any sports car, but the Ultimate Driving Machine.


The Ultimate Driving Machine

Let's take a look at how an Agile team might handle this...

Acceptance Test Driven Development


What would a customer want from this car?  Excitement! And perhaps a degree of safety.  Let's create a few user stories or acceptance criteria for this (the line between those two will remain blurred for this post):
  • When I punch the accelerator, I'm pushed back into my comfortable seat with satisfactory acceleration.
  • When I slam on the brakes, the car stops quickly, without skidding, spinning, or flipping, and drivers behind me are warned of the hard braking.
  • When I turn a sharp corner, the car turns without rocking like a boat, throwing me against the door, skidding, spinning, or making a lot of silly tire-squealing noises.
These are good sample acceptance criteria for the BMW driving experience.  We can write these independently of having a functioning car to test. That's what makes this "Test Driven" from an Agile perspective:  The clear, repeatable, and small-grained tests, or specifications, come before we would expect them to pass.  This is fairly natural, if you consider each small collection of new tests to be Just-In-Time analysis of a user story. That's "Acceptance Test Driven Development," or ATDD, in a nutshell.

In order for us to write truly clear, repeatable "acceptance tests" for a BMW, we would need to get much more specific about what we mean by "punch", "satisfactory", "slam", and "sharp". In the software world, this would involve the whole team: particularly QA/Test and Product/BA/UX, but with representation from Development to be sure no one expects Warp Drive. The team discusses each acceptance criterion to determine realistic measurements for each vague, subjective word or phrase.
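To make that concrete, here's a sketch of the first criterion after the team has negotiated the numbers. The CarUnderTest harness and the specific figures (pedal to 100% within 0.2 seconds for "punch"; 0-60 mph in under 6.0 seconds for "satisfactory") are all invented for illustration:

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class AccelerationAcceptanceTest {

    @Test
    public void punchingTheAcceleratorGivesSatisfactoryAcceleration() {
        CarUnderTest car = new CarUnderTest();

        // "Punch" was negotiated to mean: pedal to 100% within 0.2 seconds.
        double zeroToSixtySeconds = car.measureZeroToSixty(1.00, 0.2);

        // "Satisfactory" was negotiated to mean: 0-60 mph in under 6.0 s.
        assertTrue("0-60 took " + zeroToSixtySeconds + "s",
                   zeroToSixtySeconds < 6.0);
    }

    // Stand-in harness so the sketch compiles; imagine telemetry
    // from a real test rig (or a simulator) behind it.
    static class CarUnderTest {
        double measureZeroToSixty(double pedalFraction, double rampSeconds) {
            return 5.4;
        }
    }
}
```

Notice that the whole team can read the test and argue about the numbers, long before there's a car to drive.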

DONE Done

What levels of fast, quick, fun, exciting, and safe are acceptable? What tests can we run to quickly assess whether or not our new car is ready for a demo? How will we know we have these features of the car fully completed, with acceptable levels of quality, so that we don't have to return to them and re-engineer them time and time again?

Once an acceptance test passes (and, on a Scrum team, once the demo has been completed and the stories accepted by the Product Owner), it becomes part of the regression suite that prevents these "Ultimate Driving Machine" qualities from ever degrading.

Test-Driven Development 


Now the engineers start to build features into the car.  A quick architectural conversation at the whiteboard identifies the impact upon various subsystems, such as chassis, engine, transmission, environmental/comfort controls, safety features.

What would some unit tests (aka "microtests") look like? Perhaps something like these (keep in mind that I'm a BMW customer, not a BMW engineer, and have little idea of what I'm talking about):
  • When the piston reaches a certain height, the spark plug fires.
  • When the brake pedal is pressed 75% of the way to the floor, the extra-bright in-your-face LED brake lights are activated.
  • When braking, and a wheel notices a lack of traction, it signals the Anti-Lock Braking system.
See the difference in focus?  Acceptance Tests are business-facing as well as team-guiding.  Microtests are tools that developers use to move towards completion of the Acceptance Tests.
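Here's what the brake-light microtest from the list above might look like, sketched in JUnit with an invented BrakeLightController standing in for the real embedded code:

```java
import static org.junit.Assert.*;
import org.junit.Test;

public class BrakeLightControllerTest {

    @Test
    public void hardBrakingActivatesExtraBrightLeds() {
        BrakeLightController controller = new BrakeLightController();
        controller.onBrakePedal(0.80);   // pressed 80% of the way down
        assertTrue(controller.extraBrightLedsOn());
    }

    @Test
    public void gentleBrakingDoesNot() {
        BrakeLightController controller = new BrakeLightController();
        controller.onBrakePedal(0.50);
        assertFalse(controller.extraBrightLedsOn());
    }

    // Stand-in implementation so the sketch compiles.
    static class BrakeLightController {
        private boolean ledsOn = false;

        void onBrakePedal(double fraction) {
            ledsOn = fraction >= 0.75;   // the 75% threshold from the list
        }

        boolean extraBrightLedsOn() {
            return ledsOn;
        }
    }
}
```

It runs in milliseconds and speaks the engineer's language; no customer would ever read it, and none needs to.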

I used to own a BMW. I couldn't do much to maintain it myself, except check the oil.  I would lift the hood, and admire the shiny engine, noting wistfully that cars no longer have carburetors, and I will probably never again perform my own car's tune-up.

Much of what makes a great car great is literally under the hood.  Out of sight. Conceptually inaccessible to Customers, Product Managers, Marketers...even most Test-Drivers. What makes the Ultimate Driving Machine work so well is found in the domain of the expert and experienced Engineer.

In the same way, unit tests are of, by, and for Software Developers.

What's the Difference?

In both cases, we write the tests before we write the solution code that makes the tests pass.  Though they look the same on the surface, and have similar names, they are not replacements for each other.

For TDD:
  • Each test pins down technical behavior.
  • Written by developers.
  • Intended for an audience of developers.
  • Run frequently by the team.
  • All tests pass 100% before commit and at integration.
For ATDD:
  • Each test pins down a business rule or behavior.
  • Written by the team.
  • Intended for the whole team as audience.
  • Run frequently by the team.
  • New tests fail until the story is done.  Prior tests should all pass.
Which practice, ATDD or TDD, should your team use? Your answer is embedded in this Sportscar Metaphor.*
 

Behavior Driven Development


For a long time no one could clearly express what "Behavior Driven Development" or BDD was all about. Dan North coined the term to try to describe TDD in a way that expressed what Ward Cunningham really meant when he said that TDD wasn't a testing technique.

Multiple coaches in the past (me included) have said that BDD was "TDD done right." This is unnecessarily narrow, and potentially insulting to folks who have already been doing it right for years, and calling it TDD. Just because many people join Kung Fu classes and spend many months doing the forms poorly doesn't mean we need to rename Kung Fu. (Nor should we say that "Martial Arts" captures the uniqueness of Kung Fu.)

I witnessed a pair of courageous young developers who offered to provide a demo of BDD for a meetup.  They used rspec to write Ruby code test-first.  They didn't refactor away their magic numbers or other stink before moving on to other randomly-chosen functionality. "This can't be BDD," I thought, "because BDD is TDD done well."

TDD is TDD done well.  Nothing worth doing is worth doing incorrectly.  I had been using TDD to test, code, and design elegant software behaviors since 1998. I wanted to know what BDD adds to the craft of writing great software.

I can say with certainty that I'm a big fan of BDD, but I'm still not satisfied with any of the definitions (and I'm okay with that, since defining something usually ruins it).  A first-order approximation might be "BDD is the union of ATDD and TDD."  This still seems to be missing something subtle. Or, perhaps there is so much overlap that people will come up with their own myriad pointless distinctions.

However we try to define it in relation to TDD, BDD's value is in the attention, conversations, and analysis it brings to bear on software behaviors.

In hindsight, I have already seen a beautiful demo, by Elisabeth Hendrickson, of TDD, ATDD, and (presumably the spirit of) BDD techniques combined into one whole Agile engineering discipline.

She played all roles (Product, Quality, Development) on the Entaggle.com product, and walked us through the development and testing of a real user story.  She broke the story down into a small set of example scenarios, or Acceptance Tests. She wrote these in Cucumber, and showed us that they failed appropriately.  She then proceeded to develop individual pieces of the solution using TDD with rspec.

Then, once all the rspecs and "Cukes" were passing, she did a brief exploratory testing session (which, by definition, requires an intelligent and well-trained human mind, and thus cannot be automated). And she found a defect!  She added a new Cuke, and a new microtest, for the defect; got all tests to pass; and demonstrated the fully functioning user story for us.

All that without rehearsal, and all within about 45 minutes.  Beautiful!

* I have a draft post that further describes, compares, and contrasts the detailed practices that make up ATDD and TDD, along with a little historical perspective on the origins of each. For today, I wanted to share just the Sportscar Metaphor. It's useful for outlining which xDD practices to use, and how they differ.

01 January 2013

A Happy and Prosperous New Year from Agile Institute!

The Times Square Ball in July.

Agile Institute had a wonderfully successful 2012. My thanks to all the clients and colleagues, commentators and critics (yes, even the critics) who made the year so amazing.

Personally, I was likely the busiest I've ever been, but it was the kind of busy that keeps me energized and enthusiastic. Most of my year was spent traveling the country delivering courses and coaching to various clients, both huge and tiny.  I visited Manhattan, NY, and Manhattan, KS; huge multinationals, and tiny start-ups.  Everywhere I go, there is enthusiasm for better ways to build software.

Interestingly, Essential Test-Driven Development was by far the most frequently requested course, followed by Essential Agile Principles & Practices. I'm hopeful that the XP engineering practices (aka Scrum Developer Practices) are now getting some recognition. They are necessary when organizations want to move beyond mediocre results and painful productivity challenges.

We also developed two new courses in 2012:

Essential Test-Driven Development in JavaScript

Yep, our trusty old flagship TDD course is now available in JavaScript, as well as Java, C#, C++, and VisualBasic.NET. This course addresses those unit-testing/TDD challenges that are particularly vexing in JavaScript: DOM manipulation, asynchronous updates with AJAX, page-embedded code.
 
Essential Agile Product Leadership

Up until now, this course had been something I would either outsource to a trusted colleague, or deliver myself using licensed materials. That was until one exceptionally intriguing client asked me to customize the course for an executive steering committee. I decided to start from scratch.

I tested an open-source game with a friendly group of colleagues at the Agilistry, and it proved not to be quite what I needed. Elisabeth Hendrickson and Dale Emery encouraged me to build something on my own, and stayed with me at the Agilistry until late into the night, brainstorming ideas for a portfolio-planning simulation.

This course still has much that is oriented towards the Scrum "Product Owner." What differentiates the Agile Institute course is a suite of modules and activities that address challenging leadership decisions regarding portfolio planning, customer relations, innovation, and business value models. In its new form, this course has only been delivered once, but received excellent ratings.

For more information regarding our courses, write me at Rob [dot] Myers [at] Agile Institute [dot] com, or take a look at our selection on the courses page.
A Canadian client takes pity on me.