I’m not a coder, so I’m not entirely in the know when it comes to coder workflows, but what I do know is that a significant portion of the guys and girls who code value their whiteboards. That’s where a portion of the magic apparently happens.
According to Microsoft, what happens on the whiteboard needs to get translated, and that takes too much time and effort:
Once a design is drawn, it is usually captured within a photograph and manually translated into some working HTML wireframe to play within a web browser. This takes effort and delays the design process.
This is why Microsoft has come up with Sketch2Code: “a web-based solution that uses AI to transform a handwritten user interface design from a picture to a valid HTML markup code.”
How does Sketch2Code work?
- First, the user uploads an image through the website.
- A custom vision model predicts what HTML elements are present in the image and their location.
- A handwritten text recognition service reads the text inside the predicted elements.
- A layout algorithm uses the spatial information from all the bounding boxes of the predicted elements to generate a grid structure that accommodates all.
- An HTML generation engine uses all these pieces of information to generate HTML markup reflecting the result.
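To make the steps above a bit more concrete, here is a minimal sketch of what such a pipeline could look like. This is not Microsoft’s code; the class, function names, and the simple row-grouping layout heuristic are all illustrative assumptions about how detected bounding boxes might be turned into markup.

```python
# Hypothetical sketch of a detect -> read text -> layout -> generate-HTML pipeline.
# All names and the layout heuristic are illustrative assumptions, not Microsoft's API.
from dataclasses import dataclass

@dataclass
class DetectedElement:
    kind: str   # element type predicted by the vision model, e.g. "button"
    text: str   # text read by the handwriting-recognition step
    x: float    # bounding-box left edge
    y: float    # bounding-box top edge
    w: float    # bounding-box width
    h: float    # bounding-box height

def group_into_rows(elements, tolerance=20):
    """Group elements whose top edges are close into the same row (layout step)."""
    rows = []
    for el in sorted(elements, key=lambda e: e.y):
        if rows and abs(rows[-1][0].y - el.y) <= tolerance:
            rows[-1].append(el)
        else:
            rows.append([el])
    # Within each row, order elements left to right
    return [sorted(row, key=lambda e: e.x) for row in rows]

# One HTML template per recognized element type (assumed set of types)
TEMPLATES = {
    "label": "<p>{text}</p>",
    "textbox": '<input type="text" placeholder="{text}">',
    "button": "<button>{text}</button>",
}

def generate_html(elements):
    """HTML-generation step: emit one grid row per detected row of elements."""
    body = []
    for row in group_into_rows(elements):
        cells = "".join(TEMPLATES[el.kind].format(text=el.text) for el in row)
        body.append(f'<div class="row">{cells}</div>')
    return '<div class="container">' + "".join(body) + "</div>"

# Example: three elements "detected" in an uploaded sketch of a sign-in form
sketch = [
    DetectedElement("label", "Sign in", 10, 5, 100, 20),
    DetectedElement("textbox", "email", 10, 50, 200, 30),
    DetectedElement("button", "Submit", 10, 100, 80, 30),
]
print(generate_html(sketch))
```

The interesting part is the layout step: the real system reportedly infers a grid from the spatial relationships of the bounding boxes, which this sketch approximates with a crude "similar top edge means same row" rule.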
If you’re interested in testing out Sketch2Code, you can access the website here.
You can find the code, solution development process, and all other details for Sketch2Code on GitHub.