AI That Can Click and Type?!

I reflect on 5 takeaways from Claude's new powers. Plus, 3 tips for your AI toolbox.

[image created with DALL·E 3 via ChatGPT Plus]

Welcome to AutomatED: the newsletter on how to teach better with tech.

In each edition, I share what I have learned — and am learning — about AI and tech in the university classroom. What works, what doesn't, and why.

In this week’s piece, I discuss how Claude’s new “computer use” functionality represents another soon-to-be ubiquitous reduction in the friction between one’s capabilities and one’s goals.

I also post a call for faculty who manage their department's course scheduling: if you're interested in a new AI-powered tool that I am developing to make this process more efficient, please reach out.

1. Anthropic has unveiled a new capability called "computer use" in public beta for Claude 3.5 Sonnet (their most capable model), enabling the AI to interact with computers as humans do — viewing screens, moving cursors, clicking buttons, and typing text. Rather than creating specialized tools for specific tasks, Anthropic taught Claude general computer skills so that it can use standard software programs designed for humans. Accessible via the API only, the AI currently leads competitors in computer-use ability, scoring 14.9% on the “OSWorld” benchmark compared to the next-best system's 7.8%, though still far below human-level performance of 70-75%. Here are two videos:

While the capability remains experimental and sometimes error-prone (struggling with actions like scrolling and dragging), it signals a shift in AI capabilities, and companies including Asana, Canva, and DoorDash are already exploring its potential for complex multi-step tasks. (More on its development and safety here; below, I discuss its relevance to higher educators.)
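For the technically curious, here is a minimal sketch of what requesting the beta looks like through Anthropic's Node SDK, based on their public documentation at launch. The screen dimensions and the prompt are placeholders, and the model and beta identifiers may change as the beta evolves.

```javascript
// Minimal sketch of requesting computer use via Anthropic's Node SDK
// (identifiers current as of the October 2024 launch; check the docs).
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

const response = await client.beta.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 1024,
  tools: [
    {
      type: 'computer_20241022', // the computer-use tool
      name: 'computer',
      display_width_px: 1024, // placeholder screen size
      display_height_px: 768,
    },
  ],
  messages: [{ role: 'user', content: 'Open the course gradebook and export it as a CSV.' }],
  betas: ['computer-use-2024-10-22'],
});

// Claude does not click anything itself: it replies with requested actions
// (take a screenshot, click here, type this), and your own code must execute
// them and report results back, looping until the task is done.
console.log(response.content);
```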

2. Similar projects are under development elsewhere, including Google’s Project Jarvis and Microsoft’s Copilot Vision.

3. The Department of Education’s Office of Educational Technology has released a new report: “Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration.”

4. Generative AI is being adopted faster than the internet or PCs were, according to researchers at Harvard, the Federal Reserve Bank of St. Louis, and Vanderbilt.

5. Arizona State University has now deployed ChatGPT Edu across more than 200 teaching and research projects, from AI-driven behavioral health training to scholarly writing support. The initiative, which drew over 400 proposals from 80% of ASU's schools, aims to personalize learning for its 181,000 students while maintaining privacy through OpenAI's dedicated education product.

💡 Idea of the Week: 
I Can ↔️ AI Can?

I think the importance of Claude’s new “computer use” functionality (explained above) is best framed in a broader story about a loss of computer-human friction.

Below, I will briefly tell the story and then point out some upshots for (higher) educators…

Over the past few decades, computers have become easier and easier to use to achieve our goals, even if we are not software developers or super savvy users.

On the one hand, hardware like our iPhones and Windows PCs has sleek and intuitive interfaces that make short work of complex tasks that were previously impossible for the average person to complete, required advanced skills, or took a lot of time to optimize.

On the other hand, we can also develop our own software and user interfaces with much greater ease. Most products are highly configurable in their own right, and so-called “low-code” and “no-code” software development suites, tools, and integrations like Zapier have proliferated. They enable us to do what previously was the unique purview of software developers.

With AI, the friction is only decreasing.

Given that coding languages have well-defined and relatively simple syntaxes, and given that we now have AI that can produce competent linguistic outputs in much more complex languages, it’s no surprise that now we also have AI that can code quite well.

It’s also no accident; indeed, coding prowess is a priority for AI developers, as we have seen in nearly every new model release (see, e.g., how Anthropic touts the latest update to Claude 3.5 Sonnet).

This means that whether your goals are achievable via no-code, low-code, or high-code routes, you can achieve them, even if you lack software development skills, so long as you have AI to help you and you can use it effectively.

For instance, the script I released in my most recent ✨Premium piece — which (de)anonymizes bulk-downloaded Canvas submissions using the freely available Google Apps Script — is easily modifiable for Blackboard Ultra or other learning management systems if you supply it to an LLM and provide instructions on what you need.

As a matter of fact, you could have written the original script yourself, if you had thought of the use case and had some degree of prompting skill, regardless of your knowledge of JavaScript (the language Google Apps Script relies on).

What you need to know is how to prompt and broadly what you are trying to achieve, and the AI will handle the rest. Sure, you might need to do some back-and-forth with the AI to refine the details or squash some bugs, but it can help you do that easily, too.
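To make that concrete, here is a minimal sketch of the kind of Apps Script an LLM can produce from a plain-language prompt. To be clear, this is not my Premium script: the folder name, ID format, and key-sheet layout are all assumptions for illustration.

```javascript
// Hypothetical sketch (not my actual Premium script): anonymize a Drive
// folder of bulk-downloaded submissions by renaming each file to a random
// ID, logging the ID-to-name mapping in a sheet for later de-anonymization.
function anonymizeSubmissions() {
  const folder = DriveApp.getFoldersByName('Canvas Submissions').next(); // assumed folder name
  const key = SpreadsheetApp.create('Anonymization Key').getActiveSheet();
  key.appendRow(['Anonymous ID', 'Original File Name']);

  const files = folder.getFiles();
  while (files.hasNext()) {
    const file = files.next();
    const anonId = 'SUB-' + Utilities.getUuid().slice(0, 8); // assumed ID format
    key.appendRow([anonId, file.getName()]); // record the mapping first
    const ext = file.getName().split('.').pop(); // preserve the extension
    file.setName(anonId + '.' + ext);
  }
}
```

From here, a follow-up prompt like "adapt this to Blackboard Ultra's file-naming convention" is exactly the kind of refinement an LLM handles well.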

However, there’s been one remaining obstacle to frictionless AI-enabled access to all that the computing universe has to offer: human-specific but machine-unfriendly interfaces. AI can enable us to communicate with computers via code, except when access is mediated by interfaces designed uniquely for us.

For obvious reasons, there are a lot of these interfaces, and the computer-friendly route to achieving what they achieve is often circuitous, challenging, or simply not worth the effort. What makes the iPhone simple and intuitive for us does not appear that way from the perspective of a computer.

Like other impending technologies of the same kind, Claude’s new “computer use” functionality is a massive step towards removing this obstacle.

Soon, human-specific but machine-unfriendly interfaces will be no obstacle to directing AI to help us achieve our goals. And they will be no obstacle to AI achieving its own goals, as AI agency (or something like it) becomes more common.

(Self-directedness and other components of agency are key goals of AI development companies from Gru to Salesforce; more on coding agents’ effectiveness here.)

In sum, whatever AI can do, I can do, and whatever I can do, AI can do. Or, this is where things seem to be headed.

So, what does this mean for higher educators?

Here are some takeaways:

  1. The domain of the AI-immune is ever-shrinking. Over the past two years, I have regularly updated my advice — and ✨Premium Guides (first and second) — on how to design assignments and assessments that are not susceptible to AI misuse. The options we have as instructors are being whittled down, with AI soon able to easily navigate any interface that we can, at our direction or otherwise (“log into Moodle for me, navigate to Homework #2, use Google and JSTOR to do the research, and complete a rough draft for my review”). AI immunity is getting closer and closer to encompassing only those formats and contexts that are tech-free.

  2. The integration of AI training and use in your field should be increasing; learning objectives that are AI-enhanced or AI-specific should be gaining prominence on your syllabi. For instance, ask yourself: what are some ways that your field might be affected by AI’s ability to use interfaces as we do?

  3. The collapse of the 'interface barrier' may lead to an explosion of disciplinary cross-pollination. As AI eliminates the technical barriers to using specialized tools and interfaces across fields, we might see students and scholars making novel connections between previously siloed domains. For example, a literature student might easily apply computational linguistics tools, or a music theorist might leverage physics simulation software — all without needing deep technical expertise in these areas. This suggests educators should prepare for (and encourage?) unexpected interdisciplinary work that was previously blocked.

  4. When you lean away from AI in teaching your students, it should be because students learning to think or act without AI is better for them, whether for their personal development, their fundamental skillsets, or their ability to use AI itself. This is a point I cover at length in my ✨Premium Guide on how to train/teach students to use AI. One way to think about it is that we now have machines that can fill many of the roles previously only fillable by humans. Imagine a world with human clones that your students will be able to use, direct, work alongside, and — perhaps — be directed by. What would you teach them to prepare them for this world? (And if you think this world probably won’t arrive in full, partial realizations are likely enough, and disruptive enough, that you should hedge your bets.)

  5. The democratization of tech capabilities raises new equity considerations. As AI removes technical barriers, educators need to consider how this shifts educational equity issues — while coding/technical skill gaps may decrease, gaps in AI literacy, prompt engineering skills, access to advanced AI tools, and availability of cheap electricity could become the new dividing lines in education.

To make matters worse — or better, depending on the case — it’s all happening very, very fast.

Each of us must move quickly and flexibly, whichever direction we decide is best.

🧪 Call For Interest:
AI Course Scheduler for Departments

As I reported two weeks ago, I am putting the finishing touches on the beta version of an AI department/school/unit course scheduler.

It accepts as inputs the spreadsheets and narrative faculty preferences departments currently use for faculty teaching assignments. It then interprets these inputs approximately like a human scheduler would.

A standardized input form will be available but optional; I am assuming that most human department course schedulers don’t want to try to wrangle their faculty into using it and would rather dump in whatever data they currently gather to conduct scheduling.

The system outputs faculty-to-course assignments and can suggest course adjustments within given constraints.

If you’re interested in this AI tool, email me either to be notified when it's available (at a competitive price point) or to participate in beta testing at a discount.

Email me by responding to this email or clicking the button below:

And if you don’t handle scheduling for your department/school/unit but know who does, tell them about this to help them out!

👀 A November Webinar
on Feedback & Assessment?

On October 4th, I hosted a webinar on how to use LLMs like ChatGPT as a professor. Just as with my September 6th webinar on how to train your students to use AI, the feedback was very positive, with 100% of responding participants giving it an A (“Excellent!”) afterwards.

Here’s some of the feedback I received:

“Graham opened my eyes to how [to] create better prompts utilizing his approach.”

“A well organized approach to an intimidating topic. I especially appreciated the depths of the prompting explanations. They brought a new level of understanding to the challenges of 'talking' to A.I.”

Attendees at my Oct. 4th webinar

I’m thinking we should do one more webinar before the end of the (calendar) year.

I’m also thinking that the topic should be ethically using AI to improve and accelerate feedback and assessment, given that many of your semesters are coming to a close — and grading is looming.

But I don’t want to schedule it and build it out unless there’s interest…

So, now’s your chance: would you be willing to pay for a webinar in November on this topic? More specifically, would you be interested in a $150 webinar, lasting 1.5 hours, on Zoom, with 50% off for pre-registrants and 60% off for ✨Premium subscribers?

Tell me via this poll:

Would you be interested in a Nov. webinar on using AI for feedback and assessment?


Note: Respondents may receive additional emails — related to the webinar, obviously — if they answer “Yes.”

🧰 3 High-Impact Tips for Your AI Toolbox

1. If you are developing a prompt for an LLM like ChatGPT in the text submission field for the LLM itself, remember to press Shift-Enter (or Shift-Return) to create line breaks.

Often it’s best to have lengthy prompts, especially at the start of a conversation, but as you build them, pressing Enter (or Return) will submit the prompt prematurely. Shift-Enter (or Shift-Return) is your friend.

And if you accidentally submit your prompt before you are ready, immediately press the LLM’s stop-generation button and finish what you started (e.g., copy-paste the old prompt in, use Shift-Enter, and complete it) before submitting it again.

2. Ideally, you would structure your LLM prompts with the big-picture context and the LLM’s role in it, then supply any relevant background information (organized with section headings), and then supply a step-by-step query or directive that tells the AI what you want it to do. You are getting a generalist up to speed on your task, giving them the specialist information they need to do it, and telling them the steps you’d recommend they take to complete it, as in the skeleton below. (For a deep dive on prompting for heavy-duty tasks, see my ✨Premium Tutorial on long context prompting, with a focus on drafting grant applications with Gemini 1.5 Pro.)
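Here is a bare-bones skeleton of that structure. It is a made-up example, with the bracketed parts standing in for your specifics:

```
You are an experienced grant writer helping a [field] professor prepare
an application for [funder]. I want a strong draft of the project
significance section.

## Funder priorities
[paste the relevant text from the call for proposals]

## My project
[paste your project summary and any preliminary results]

Please proceed step by step: (1) summarize how my project matches each
funder priority; (2) draft the significance section in roughly 500 words;
(3) list any gaps where you need more information from me.
```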

3. You shouldn’t be using LLMs only to complete tasks you could do yourself; you should also be using them to brainstorm and analyze tasks that you need help conceptualizing in the first place. In such a case, your goal is to work towards a prompt with the structure described above by prompting from a position of ignorance: “Suppose the context is X and I want to ultimately achieve Y. What information would you need to help me achieve Y, and what steps would you recommend following to do it?” OpenAI’s o1 is especially good for this purpose.

✨Recent and Upcoming Premium Pieces

This Week - Tutorial on All Major Functionalities of Microsoft 365 Copilot (sorry, this keeps getting delayed)

What'd you think of today's newsletter?


Graham

Let's transform learning together.

If you would like to consult with me or have me present to your team, discussing options is the first step:

Feel free to connect on LinkedIn, too!