AI Magic? Code from Wireframes and Diagrams from Texts
We also open sign-ups for our February GPT webinar.
[image created with Dall-E 3 via ChatGPT Plus]
Welcome to AutomatED: the newsletter on how to teach better with tech.
Each week, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.
In this week’s piece, I discuss how AI puts us in a space between learner and learned, I highlight two amazing graphical AI tools, and I share the sign-up link for our February GPT webinar.
💡 Idea of the Week:
Learners with Context
We teachers are accustomed to a large gap between our expertise and that of our students. We have significant subject-matter expertise, our students have little experience — much less expertise — with our subjects, and we need to get them up to speed as quickly as we can.
While some educators remain ignorant of the present and future capabilities of AI, with respect to their subjects and to their pedagogy, many of us are hard at work gaining knowledge in this burgeoning domain. This takes us out of our comfort zone and puts us in the stance of a learner, just like our students. Now, we aren’t so distant from their positions. We should lean into this similarity and model how to learn.
But we still have the context provided by our subject-matter expertise. We are able to locate the changes being wrought by AI on our fields within the context that our knowledge provides us. Our students don’t have this option — they don’t know much about AI’s powers to disrupt and enhance, and they don’t know much about what it is disrupting and enhancing.
As we navigate these uncharted waters — and consider incorporating AI training in our courses or decide to discourage AI use on a given assignment — we need to remind ourselves that we are learning too. Perhaps more than ever. Yet, we also need to continue to ground our learning in the timeless standards, techniques, norms, frameworks, and expectations that come with our subject-matter expertise. This is the foundational context that should guide our responses to the changes that AI is bringing, even if it turns out that these changes affect the foundation itself.
🧰 Two AI Use-Cases for Your Toolbox:
Wireframe-to-Code and Text-to-Diagram
Gemini is now the AI model driving Google Bard. It has been pitched as superior to GPT-4 Vision (GPT-4V) when it comes to multimodal capabilities — that is, the ability to “understand, operate across and combine different types of information including text, code, audio, image and video” — although early analyses indicated that the two are comparable.
A month ago, I discussed the mixed success of Gemini when it comes to analyzing handwritten student assignment submissions. This is an intriguing new frontier that we will continue to monitor.
Today, I focus on two other types of image-related AI tools that will be of interest to some professors: namely, (i) those that generate workable user interfaces and corresponding code from sketches and (ii) those that create diagrams from typed or uploaded text.
With all the focus on text-to-art AI tools, like Dall-E 3, Midjourney, or Stable Diffusion (which Eric Steinhart explained to us in the context of concept illustration nearly 8 months ago), these other tools are often neglected, despite their utility in many educational contexts.
A Wireframe-to-UI-to-Code AI Tool
Many fields taught at the university level, like computer science and journalism, train students to develop user interfaces (UIs) for projects like web publications or software products. Traditionally, there was a significant gap between designing how a UI should appear and building a functional version that a user could actually interact with. Indeed, in many fields, the UI designer drafted static wireframes and then handed them off to developers who translated them into workable code that produced a dynamic user experience. (And the two then argued about the differences between vision and reality, as well as the feasibility of changes.)
This gap is shrinking more and more every year. For instance, Figma has online prototyping tools that “make it easy to build high-fidelity, no-code interactive prototypes right alongside your designs” so you can see how your wireframes will come to life before you hand them off to your team’s developers.
But new AI tools are showing a lot of promise to further shrink the gap between designer and developer. Take, for instance, the case of Make Real, a product of the collaborative whiteboard company tldraw (originally brought to my attention by the excellent latent.space newsletter).
In short, this whiteboard-style tool uses the magic of AI to move — via a “Make Real” button — from wireframes of a UI to a functioning version of it with viewable/downloadable code. Since this code is the very same code that would make the UI functional on, say, a website, it lets the designer directly develop their own designs.
Below is an example of my own making. It is a basic version of the sign-up flow that we implemented with Typeform for our February “Build Your Own GPT” webinar (see further down in this piece for a description of the webinar, or click here for a link for the real sign-up flow). In only a few minutes, I moved from a labeled sketch — the red labels are in the app while the blue text and arrow are overlays I added to images of the app in order to explain it — to a functional UI and corresponding code.
A gif showing Make Real’s power, even for a noob like me.
(Or maybe your response is like that of Kevin Cannon, a designer who tweeted “I think I need to go lie down” upon seeing this tool in action.)
The only catch is that you need to share an OpenAI API key with tldraw, because the magic AI under the hood is GPT-4V, which tldraw must communicate with via your API key due to the usage limits currently imposed on GPT-4V.
(Note that access to the OpenAI API is distinct from access to ChatGPT Plus, and they must be bought separately. The latter enhances the chatbots one has access to, allows one to create custom GPTs, etc., while the former is a way to more directly interface with the engines powering OpenAI’s products so that you can create your own products that embed or rely on them. You can read more about APIs, the OpenAI API, and related topics by Googling, but here are OpenAI’s helpful introductory documentation materials.)
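To make the distinction concrete, here is a minimal sketch of what “interfacing with the engine directly” looks like. This is not tldraw’s actual code — it is a hypothetical illustration of how a tool like Make Real might package a wireframe screenshot into a request body for OpenAI’s chat completions endpoint. The model name and message format follow OpenAI’s GPT-4V documentation at the time of writing; check the current docs before relying on them.

```python
import base64
import json

def build_vision_payload(image_bytes: bytes, instruction: str) -> dict:
    """Return a JSON-serializable request body for a GPT-4V call.

    The image is embedded as a base64 data URL, which is one of the
    input formats OpenAI's API accepts for vision requests.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": instruction},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{b64}"},
                    },
                ],
            }
        ],
        "max_tokens": 1500,
    }

# Placeholder bytes stand in for a real screenshot; actually sending the
# payload requires your API key, e.g. via an authenticated HTTP POST to
# https://api.openai.com/v1/chat/completions or via the openai library.
payload = build_vision_payload(
    b"\x89PNG...",  # hypothetical image data, not a real wireframe
    "Turn this wireframe into a single HTML file with working JavaScript.",
)
print(json.dumps(payload)[:60])
```

This is why the API key matters: every such request is billed to the key’s owner, which is also why the per-request costs discussed next are so easy to track.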
Security issues aside, it isn’t terribly expensive — the above mock-up cost me $0.04 to generate. With updates beyond GPT-4V coming out soon, costs will continue to decline. And it is likely that products like Make Real will soon no longer require you to share your API key (instead, you’ll need to share your money).
Check out tldraw’s substack if you are interested in more about this tool, including tons of use-cases and explainers, or see Excalidraw for a competitor. Since tldraw’s code is open source — itself borrowing from SawyerHood’s project — I expect many variants of it to appear in the coming months.
A Text-to-Diagram AI Tool
I love drawing on the board in class, except when I am in that room with the spongy chalkboard (I would say it is more like a black coating or sheet, poorly applied to plywood — don’t get me started). Diagrams often help students understand concepts and their interrelationships much better than text alone, even if the diagrams are themselves full of text.
With AI infusions, diagramming tools are getting smarter and smarter. Generally, the net result — for non-power users like myself (i.e. most professors) — is not beautiful diagrams. Rather, the net result is less time spent producing workable diagrams, which is all we are really after.
Take the case of Whimsical. With their tools, you can move from text descriptions of flowcharts or mind maps to the graphics themselves in very little time. And you needn’t buy their paid plans (which are super cheap anyway) to get the benefits, as they have a free tier and also a free custom GPT.
When I use the latter, sometimes I cannot get it to produce exactly what I want, like when it gave me this to represent the iterated cycle discussed in our Premium guide to crafting your syllabus’ AI policy:
However, with a little bit of guidance, it can save me a lot of time, like when it took me from an uploaded version of my piece from October on using AI while protecting student data (and a brief prompt) to this sweet diagram:
All I told it was to take the file uploaded and create a flowchart based on it that illustrates the various strategic solutions to the problem described, as well as their pros and cons.
One tip I have found useful in more complicated use-cases is to ask it to label each box with a unique number. This then enables me to more easily instruct it on replacements and modifications.
If you want to shop around with similar tools, check out Zapier’s guide for alternatives.
📝 How to Sign Up for Our
“Build Your Own GPT” Webinar
We are excited to announce our upcoming "Build Your Own GPT" webinar, a Zoom-based learning experience designed for professors, instructional designers, librarians, and any other educators who want to incorporate custom GPTs into their pedagogy this year.
Our February webinar — led by yours truly — will give a cohort of like-minded educators hands-on guidance in developing custom GPTs tailored to their specific needs. Whether you're looking to enhance student engagement, streamline course creation, improve your in-class activities, or empower subject matter experts, this webinar is your gateway to unlocking the full potential of custom GPTs. With evidence growing of the effectiveness of this sort of intervention, now is the time to jump on the opportunity afforded by recent advances from OpenAI.
By the end of the webinar, you can expect to have learned how to develop your own GPTs — but you will also walk away with one already developed!
The price is $99. All Premium subscribers ($5/month or $50/year, with prices going up soon) get a 10% discount code, included immediately below.
We are now opening sign-ups. To sign up, click this link or the below button:
Microsoft Releases Free Reading Coach
Jin et al. Show LLMs’ Reasoning Step Length Matters
Late in the fall of 2023, we started posting Premium pieces every two weeks, consisting of comprehensive guides, releases of exclusive AI tools like AutomatED-built GPTs, Q&As with the AutomatED team, in-depth explanations of AI use-cases, and other deep dives.
Our next three Premium pieces will be released on the following dates and will cover these topics:
January 24th - a Q&A with the AutomatED team.
February 7th - an AI use-case deep dive into how professors and others in higher ed can best leverage Microsoft Copilot and 365 Copilot.
February 21st - an AI use-case deep dive similar to the above but focused on Google’s Bard.
If your college or university uses the Microsoft 365 suite or Google Workspace, you won’t want to miss these deep dives.
So far, we have three Premium pieces:
To get access to Premium, you can upgrade for $5/month or $50/year, or you can get one free month for every two (non-Premium) subscribers that you refer to AutomatED (note: we expect to raise prices this spring, so now is the time to lock in a lower rate).
To get credit for referring subscribers to AutomatED, you need to click on the button below or copy/paste the included link into an email to them.
(They need to subscribe after clicking your link, or otherwise their subscription won’t count for you. If you cannot see the referral section immediately below, you need to subscribe first and/or log in.)