One of the things that’s given LaTeX a huge boost is LLMs. There is so much LaTeX in the training data of ChatGPT and other models that you can get perfectly formatted LaTeX with a simple prompt. Unfortunately, getting working Typst code is a lot more difficult due to the lack of training data and frequent breaking changes. Perhaps someone could look into fine-tuning an open-source model on examples and documentation? In the long run it might be worth contacting OpenAI or Anthropic to see if a deal can be worked out to provide training data.
As far as I know, Claude can write some Typst code, but it usually contains errors and requires minor adjustments. Still, it is generally usable.
Perplexity also often seems to generate relatively good results, at least for concrete questions, since it can search the web for examples on its own.
I agree this is a huge thing - drawing a diagram in TikZ is now very easy (since ChatGPT can do so much for you), whereas with CeTZ it is hard. I would love it if there were a way for LLMs to digest the documentation and examples around the web to help generate diagrams.
Had the same thought. Someone should feed an LLM the docs and examples. Perhaps even find a way to generate training examples - with LLMs themselves: just have a model create some examples and correct them until they compile; one could automate this.
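The generate-and-correct loop described above could be automated roughly like this. A minimal sketch, assuming the `typst` CLI is installed (its real `typst compile <file>` subcommand is used for the check); `ask_llm_to_fix` is a hypothetical callback you would wire up to whatever model API you use:

```python
import os
import subprocess
import tempfile

def compiles(source: str, compile_cmd=None) -> bool:
    """Return True if a Typst snippet compiles.

    By default this shells out to the real `typst compile` CLI; a callable
    can be passed as `compile_cmd` to stub the check out for testing.
    """
    if compile_cmd is not None:
        return compile_cmd(source)
    with tempfile.NamedTemporaryFile("w", suffix=".typ", delete=False) as f:
        f.write(source)
        path = f.name
    try:
        result = subprocess.run(
            ["typst", "compile", path],
            capture_output=True, text=True,
        )
        return result.returncode == 0
    finally:
        os.unlink(path)

def refine(snippet: str, ask_llm_to_fix, max_rounds: int = 3, compile_cmd=None):
    """Ask the model to fix the snippet until it compiles, then return it.

    `ask_llm_to_fix` is a placeholder for an LLM call that takes the broken
    snippet (ideally together with the compiler error) and returns a fix.
    Returns None if no compiling version is found within `max_rounds`.
    """
    for _ in range(max_rounds):
        if compiles(snippet, compile_cmd):
            return snippet
        snippet = ask_llm_to_fix(snippet)
    return None
```

Snippets that survive `refine` would then be collected as verified training examples; the ones that never compile get discarded.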
I have just heard from the Svelte community that there is a brand-new way to work around the knowledge cutoff: a document written specifically for LLMs, following the /llms.txt file proposal.
This is the official Svelte documentation for LLMs; you can also search YouTube for “Perfect Svelte 5 code completion for any LLM - Claude, ChatGPT and GitHub Copilot” to find an explanation video about it:
https://svelte.dev/docs/llms
I think we definitely can, and should, adopt this for Typst as well!
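For reference, a sketch of what such a file could look like for Typst. The format follows the llms.txt proposal (an H1 title, a blockquote summary, then sections of annotated links); the exact link targets and descriptions here are just illustrative:

```markdown
# Typst

> Typst is a markup-based typesetting system for the sciences.

## Docs

- [Tutorial](https://typst.app/docs/tutorial/): step-by-step introduction to Typst markup
- [Reference](https://typst.app/docs/reference/): full language and library reference
```

The proposal also suggests hosting expanded, LLM-friendly plain-text versions of the docs (e.g. a single concatenated file) that the links point to, which is what Svelte does.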