A year ago I posted this on LinkedIn:

===
I tell my students to avoid select boxes. Because it’s often better to use radio buttons.

But students often say “But it’ll make the page too long”.

Yep, but that doesn’t necessarily mean it’s bad UX.

See the page I designed to let users select a course. Huge list of radio buttons. But no issues in user research whatsoever.

Does this mean you should always use radio buttons? No. But most designers would balk at a design like this even though it worked perfectly well for users.
===

It got a lot of comments - one of which was:

“What other options did you test? Just because your design worked, doesn’t mean it’s the best!”

Here’s what I said in response at the time:

===
I rarely test two solutions at once.

Don’t get me wrong, I consider many options. But only one gets tested - unless it fails - then I try another.

That’s because there’s usually a clear winner. Plus testing two versions is full of pitfalls. [...]
===

I’ve worked with quite a few product managers and designers who suggest testing multiple versions. It sounds sensible, right?
But in this case: More work = worse result.

Here’s why (according to UX expert Caroline Jarrett, who wrote about it in “Designing comparative evaluations”):

Reason #1: You won’t get a clear answer

You’re hoping for “Version A wins!” but what you’ll actually get is: parts of A are better, parts of B are better, and there’s probably a Version C that would beat them both. Not the clear direction you were looking for.

Reason #2: Your results will get contaminated

If you test both versions with the same participants, they’ll learn from the first one. So some users may prefer the second version because they already understood the task - even when that version was objectively worse.

Reason #3: You need a lot more participants

Comparative tests need 3x your normal participant numbers. You need to balance who sees what first, and if you’re testing separate groups, you need even more people for the statistics to be meaningful.

Reason #4: Your big differences look identical to users

What seems different to you looks identical to users. So the differences you’re testing might not even register.

But most importantly, testing two versions is almost always totally unnecessary. Instead:

• Design one version properly
• Learn what’s wrong
• Fix it

Clearer insights + less work = Better process

If you’d like to learn how to design forms that users fly through, using patterns that come from designing one version properly, learning what’s wrong and then fixing it, then:

Cheers,
Join 9000+ designers, content designers and engineers who get my free weekly newsletter with evidence-based design tips (in 3 minutes or less). Mostly forms UX, but not always.
Last week, I had a meeting with two devs and two designers. The meeting’s purpose was to let developers raise problems and get quick design decisions.

One of the devs brought up an issue with a form that had conditional radio buttons. Here’s what the screen looked like (it’s a bit different to what I’m actually working on, but close enough):

When the user selects “Yes”, it reveals a field to enter the name:

If you leave it blank and submit the form, the page refreshes and shows an error:

So far...
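For the curious, here’s a minimal sketch of that conditional reveal pattern in TypeScript. The markup, IDs and field names (ownsCompany, companyName) are made up for illustration - they’re not the actual form from the project - and the blank-field error described above would still be raised server-side after submit; the script only controls visibility.

```ts
// Hypothetical markup this sketch assumes (not the real form):
//
//   <input type="radio" name="ownsCompany" value="yes" id="owns-company-yes">
//   <input type="radio" name="ownsCompany" value="no"  id="owns-company-no">
//
//   <div id="company-name-reveal" hidden>
//     <label for="company-name">Company name</label>
//     <input type="text" id="company-name" name="companyName">
//   </div>

const radios = document.querySelectorAll<HTMLInputElement>('input[name="ownsCompany"]');
const reveal = document.getElementById('company-name-reveal') as HTMLElement;

radios.forEach((radio) => {
  radio.addEventListener('change', () => {
    // "change" fires on the radio that just became checked,
    // so show the name field only when that radio is "Yes".
    reveal.hidden = radio.value !== 'yes';
  });
});
```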
Quick one: Yesterday I sent you an email, “My response to Hacker News comments”. I received a few responses asking for clarification on my second illustration – because I screwed it up.

Here’s what it should have been:

Enjoy,
Adam
Last Tuesday, my article about whether to use “Your” or “My” in user interfaces went viral on Hacker News. In case you don’t know, Hacker News is a site where people discuss and upvote ideas in tech and design.

The gist of my article was to use “Your” when communicating to the user, like this:

And to use “My” when the user is communicating to us, like this:

I read through all the comments on Hacker News and picked my top 5 worth responding to (as each one has a useful design takeaway):...