Following up on last night’s experiment, here is what I accomplished in one hour of wireframing. I focused on a single core flow – defining who is cooking the meal and how it could work. If you like this idea and can pull it off, you have my permission to steal it. Just do a good job and give me credit.
Assuming the user has already found a recipe or built a meal and is ready to start cooking, we ask them to select who is cooking or add a new cook.

A new cook will need to upload or select an avatar, provide a voice sample (so the app can determine which spoken command refers to which set of instructions), and fill in details about what this person can or cannot do.

We return to the selection screen with our new cook ready to go.

The app is now in ‘cooking’ mode. Each user gets their own timeline and their own single instruction. Terms that can trigger additional interaction (such as whipping egg whites to “soft peaks”) are highlighted, and basic touch commands are available.

We want to support voice controls as much as possible – to save time, keep your device clean, and generally make it easier for multiple users to collaborate. This isn’t an exhaustive list, but here are a few theoretical commands for this step: go forward or back a step, give me more information, re-read the current instructions, or stop until I can get help from someone else.
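To make the per-cook timeline idea a little more concrete, here is a minimal sketch of how the data model and those voice commands could hang together. Everything in it – the `Cook` and `CookTimeline` types, the `handleCommand` function, and the command names – is hypothetical, just one way the flow above might be wired up.

```typescript
// Hypothetical data model: each cook gets their own timeline of steps,
// and the app tracks which single instruction each cook is currently on.

interface Cook {
  id: string;
  name: string;
  avatarUrl: string;      // uploaded or selected avatar
  voiceSampleId: string;  // matches spoken commands to this cook
  skills: string[];       // what this person can or cannot do
}

interface Step {
  text: string;
  // Terms that can trigger additional interaction, e.g. "soft peaks"
  highlightedTerms: string[];
}

interface CookTimeline {
  cook: Cook;
  steps: Step[];
  currentIndex: number;
  paused: boolean; // "stop until I can get help from someone else"
}

type VoiceCommand =
  | "next-step"
  | "previous-step"
  | "more-info"
  | "repeat"
  | "pause";

// Apply a recognized command to the timeline of the cook who spoke it.
function handleCommand(timeline: CookTimeline, command: VoiceCommand): string {
  const step = timeline.steps[timeline.currentIndex];
  switch (command) {
    case "next-step":
      timeline.currentIndex = Math.min(
        timeline.currentIndex + 1,
        timeline.steps.length - 1
      );
      break;
    case "previous-step":
      timeline.currentIndex = Math.max(timeline.currentIndex - 1, 0);
      break;
    case "more-info":
      // Could open a glossary card or short video for a highlighted term.
      return `More info on: ${step.highlightedTerms.join(", ") || "this step"}`;
    case "repeat":
      return step.text; // re-read the current instruction aloud
    case "pause":
      timeline.paused = true;
      break;
  }
  return timeline.steps[timeline.currentIndex].text;
}
```

The interesting part is that each cook’s timeline advances independently – one person can be a few steps ahead whipping egg whites while someone else is still prepping, and the voice sample is what lets the app route each command to the right timeline.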
This is a bare shell of a small feature set within the app. But I think it shows the potential of re-thinking how we consume recipes and cook with our families, and I hope that something like this arrives in the future.