Abstracts
To rewrite textual source code, rewriting annotated parse trees using (concrete) pattern matching and substitution, and then unparsing the result, has been the state of the art for high-fidelity code transformations for almost two decades. Although elegant and easy to use, the tree rewriting mechanism is not accurate enough by itself for industrial application. Instead, to avoid accidentally losing source code comments and indentation, language engineers would pattern match on the trees and then collect edit actions (insert, delete, replace) in minute detail.
In this talk we present a new parse tree diffing algorithm which automates the task of identifying minimal text edits from a pair of parse trees: the original and the rewritten. These edits then change the program according to the logic of the applied rewrite rules, while keeping source code comments, indentation, and other details such as case-insensitive keywords intact. A second, inverted, diff algorithm infers edits from a formatted tree, where nothing has changed but the whitespace, for deriving high-fidelity code formatters without loss of comments or case-insensitive keywords.
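To illustrate the core idea of deriving minimal text edits from a before/after pair, here is a toy sketch in Python. It works on flat token sequences rather than annotated parse trees, and uses the standard library's `difflib` as a stand-in for the talk's algorithm; in a real implementation the untouched regions are where comments and indentation survive:

```python
from difflib import SequenceMatcher

def text_edits(original_tokens, rewritten_tokens):
    """Derive minimal (start, end, replacement) edits between two token
    sequences; regions reported as 'equal' are never touched, so any
    layout attached to them is preserved verbatim."""
    edits = []
    matcher = SequenceMatcher(a=original_tokens, b=rewritten_tokens, autojunk=False)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            edits.append((i1, i2, rewritten_tokens[j1:j2]))
    return edits

# Rewriting `x = x + 1` into `x += 1` touches only the middle tokens:
before = ["x", "=", "x", "+", "1"]
after  = ["x", "+=", "1"]
print(text_edits(before, after))  # [(1, 4, ['+='])]
```

The same machinery run "inverted", diffing a tree against its reformatted self, yields whitespace-only edits, which is the formatter case described above.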
The two new language-parametric algorithms are demonstrated on a simple and a complex language, with a simple and a complex transformation. We compare the old noisy result with the new pristine edits and formulate further challenges in generating idiomatic source code using rewrite rules.
ANTLR4 is one of the most popular parser generators. ANTLR-generated parsers are robust, performant, and relatively easy to maintain.
However, a parser is only the first step in a language engineering solution. Whether it’s a code migration, DSL, compiler, or another system or tool, we’ll need to resolve symbols, run analyses (such as data flow) and validations (such as type checking), interoperate with other systems, etc.
An abstract-syntax tree (AST) is the backbone of such advanced features. For that purpose, at Strumenta we have developed a collection of open-source AST libraries called Starlasu, integrating with LionWeb and ANTLR.
Currently, writing Starlasu ASTs means writing in one of the supported general-purpose languages, such as Kotlin, Python, TypeScript, and others. There’s a mismatch between the domain – AST construction, traversal, transformation, etc. – and the implementation language, which libraries in the Starlasu family can fill only partially.
That’s why we designed and built Protostar, a language specialized in AST design, based on LionWeb and integrated with ANTLR. From Protostar declarative code we generate a Starlasu AST, transformers from ANTLR parse trees to the AST, and LionWeb integration code.
Protostar comes with a Gradle plugin for integrating it into your build, and lightweight IDE support through the Language Server Protocol.
Imagine designing DSLs at a time when 4GL and Oracle Forms looked like the future. Now step into a time machine: 40 years later, there are many lines of code written in your language. A big success. Except that the time machine didn't exist, you're already retired, and a different group of people has been responsible for maintaining the language and its environment.
Swat.engineering was tasked with rejuvenating this language and environment. We're going to share our experience of this migration and how we managed to solve some of the new requirements using new DSLs.
SysML v2 is gaining strong momentum in aerospace and other engineering domains. At Metadev, we are developing Apricot, a web-based collaborative editor for SysML2 models (https://apricot.metadev.pro). Apricot is designed to support team-based modeling workflows, enabling engineers to build, share, and evolve SysML2 models collaboratively. This talk will present the challenges and solutions behind Apricot’s design from a language engineering perspective:
- Handling the breadth of the SysML2 specification, which defines hundreds of interrelated concepts.
- Organizing this complexity into an accessible and productive modeling environment.
- Addressing model persistence, versioning, and permission management in a collaborative setting.
- Applying DSL techniques and language workbench concepts to SysML2 tool support.
The presentation will include a live demo of Apricot, showcasing how these ideas translate into practice and how language engineering can help bring SysML2 adoption to real-world engineering teams.
In this industry case we present a tool for domain experts specifying the behavior of a complex printing system – interactively, incrementally and collectively. This should result in validated and verified models of product behavior that can be handed over to developers.
For the specification of product behavior, we have adopted two complementary languages: Gherkin and MuDForM. It is our aim to give these languages execution semantics such that we can connect them: using Gherkin scenarios as tests verifying the MuDForM models. We achieved this by adopting a functional state model. A functional state model has append-only semantics, meaning that facts are not updated in place, but new facts accumulate and may override previous facts.
To realize this state model, we chose Datomic, a database that has the desired append-only semantics built-in. Datomic comes with a rich API that is battle-tested. To leverage this API, we worked with the Clojure language, a modern JVM based LISP. Rather than re-implement concepts from JetBrains MPS (our platform for the past five years) we embraced a fresh, fully functional design approach – capitalizing on the knowledge of the Clojure/Datomic community. This has proven highly rewarding.
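The append-only semantics described above can be sketched in a few lines. The following is a toy stand-in written in Python for illustration only (the actual system uses Clojure on Datomic, whose API this does not attempt to mirror): facts are never updated in place, newer assertions accumulate in a log, and reads resolve to the most recent fact, optionally "as of" an earlier transaction:

```python
class FactStore:
    """Append-only fact store: assertions accumulate and override
    earlier facts on read, while the full history stays queryable."""

    def __init__(self):
        self._log = []  # history of (tx, entity, attribute, value)
        self._tx = 0

    def assert_fact(self, entity, attribute, value):
        """Append a new fact; nothing is ever overwritten in place."""
        self._tx += 1
        self._log.append((self._tx, entity, attribute, value))

    def value(self, entity, attribute, as_of=None):
        """Latest value for (entity, attribute), optionally as of an
        earlier transaction id."""
        latest = None
        for tx, e, a, v in self._log:
            if e == entity and a == attribute and (as_of is None or tx <= as_of):
                latest = v
        return latest

store = FactStore()
store.assert_fact("printer", "state", "idle")
store.assert_fact("printer", "state", "printing")
print(store.value("printer", "state"))           # printing
print(store.value("printer", "state", as_of=1))  # idle
```

Because the log is never mutated, a Gherkin scenario can be replayed as a sequence of assertions and checked against the resulting state at any point in time.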
In the first phase (now completed), we took an internal DSL approach, building the above languages, the models and the interpreter with pure Clojure functions acting on a Datomic database. We will demonstrate our prototype, which is being rolled out to our target audience as we speak. Also, we will share insights into how functional thinking has shaped our language engineering practices.
In the second phase (just initiated), we are transitioning to an external DSL approach by integrating Freon using LionWeb protocols. We will outline our strategy for bridging the gap between LionWeb and our functional approach to language engineering.
The rapidly increasing complexity of digital hardware, particularly in the domain of artificial intelligence (AI) accelerators, calls for design methodologies that enhance abstraction, productivity, reliability, and robustness-by-design. Domain-Specific Language (DSL) methodologies offer a promising solution by allowing designers to express hardware intent at a higher level of abstraction while retaining fine-grained control over low-level implementation details.
This talk presents a case study on applying DSL methodology to the design of a configurable AI accelerator. We describe the guiding principles of the DSL approach, illustrate its integration with conventional hardware description workflows, and highlight the resulting benefits in terms of design clarity, reusability, and verification. The study demonstrates the potential of DSL methodologies as a practical and effective tool for next-generation digital hardware design, with particular emphasis on AI-oriented architectures.
Formal verification is a technique for increasing the reliability of software by mathematically proving its correctness, rather than testing it. It is particularly useful for software with large numbers of potential paths through the code; in such systems formal verification is often the only practical choice for achieving reliable software.
This talk will introduce the Coco language and explore some of the unique design decisions that were made when developing it. Coco is a particularly unusual language: it is designed to enable formal verification by non-formal-verification experts, and is also designed to generate code in regular programming languages (such as C++) to enable deployment of Coco programs. This talk will also explore how Coco has evolved over time, starting from a primitive DSL to becoming a more general programming language, and will also provide an overview of the implementation of the Coco language frontend. Lastly, we will explain how other companies have made novel uses of Coco as a backend language in order to produce accessible graphical tooling.
Coco is a programming language that is specifically designed for developing event-driven software that will be validated using formal verification. It has been used extensively in industry, with large companies such as ASML writing large amounts of software in Coco in order to improve their reliability.
Collaborating on complex models in desktop-based MPS applications often relies on cumbersome, Git-based workflows, creating significant overhead for subject matter experts, especially during review cycles. This talk presents a hybrid approach that bridges the gap between powerful, feature-rich MPS clients and modern, collaborative web applications.
Leveraging Modelix and its dedicated MPS plugin, we enable real-time, bi-directional synchronization between a standalone MPS instance and a web-based editor. We will showcase this solution with a case study of an itemis product, demonstrating how subject matter experts using the established desktop application can seamlessly interact with reviewers on its new web-based counterpart. The presentation will feature a live demonstration of a cross-platform review scenario, highlighting how this architecture eliminates setup friction and streamlines the collaborative modeling workflow.
Propagating model changes incrementally is an important capability to have: it improves performance by reducing network bandwidth usage and lag, it’s essential for implementing collaborative editing à la Google Docs, and it can avoid re-computations, given some assumptions about the nature of the computation.
Much of last year’s work by the LionWeb initiative has gone into specifying and implementing the LionWeb delta protocol which enables support for propagating changes in models incrementally. In this talk, we’ll update you on the state of this specification, explain the architectural ideas behind it, and demo the current implementations of LionWeb delta protocol-compliant clients and repositories on various platforms, including C# and TypeScript.
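The shape of such incremental propagation can be sketched as follows. This is a deliberately tiny Python illustration of the general idea; the event kinds and field names are invented stand-ins, not the actual message types defined by the LionWeb delta protocol specification:

```python
def apply_delta(model, event):
    """Apply one change event to an in-memory model (nodes keyed by id,
    each with properties and a 'children' list). Only small events
    travel between clients and the repository, never the whole model."""
    if event["kind"] == "property-changed":
        model[event["node"]][event["property"]] = event["new_value"]
    elif event["kind"] == "child-added":
        model[event["child"]] = event["content"]
        model[event["parent"]]["children"].append(event["child"])
    return model

model = {"n1": {"name": "Book", "children": []}}
apply_delta(model, {"kind": "property-changed", "node": "n1",
                    "property": "name", "new_value": "Publication"})
apply_delta(model, {"kind": "child-added", "parent": "n1", "child": "n2",
                    "content": {"name": "Chapter", "children": []}})
print(model["n1"])  # {'name': 'Publication', 'children': ['n2']}
```

A repository that broadcasts such events to every connected client keeps all replicas converging on the same model state, which is the basis for the collaborative-editing scenario mentioned above.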
DSLs allow experts to express complex business logic in a precise and formal way. One of their major limitations is accessibility: they require users to learn a specific tool (and syntax), which creates a barrier for non-technical stakeholders. Our approach addresses this by using an LLM as a controlled parser for unstructured input, not as a general reasoner. The framework relies on a set of artifacts that formalize the domain: a grammar, an interpreter for semantic validation, and a 'linguistic connector' that maps formal DSL constructs to natural language prompts. This architecture decouples the domain's semantics from the conversational mechanics, allowing the underlying DSL and its validation rules to serve as the verifiable source of truth, while the LLM's role is restricted to that of a natural-language-to-DSL translator.
In our demonstration, we first test the system's depth against a DSL encompassing a detailed set of validation rules. We will probe the system with vague or incomplete queries to show how it leverages the DSL's validation rules to guide the user with clarifying questions and prevent invalid operations. Then, to illustrate the framework's adaptability, we will configure it live for a second, lightweight DSL. This will demonstrate how the linguistic connector reduces the need for manual prompt engineering while extending the system to a new domain.
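The control loop behind this division of labor can be sketched compactly. Everything here is hypothetical for illustration: a made-up mini-DSL, a regex standing in for a real grammar, and a stubbed translator standing in for the LLM; only the validation-as-source-of-truth pattern is the point:

```python
import re

# Hypothetical mini-DSL: "move <qty> units from <src> to <dst>".
GRAMMAR = re.compile(r"move (\d+) units from (\w+) to (\w+)$")

def validate(dsl_line):
    """Grammar check plus one semantic rule, standing in for a full
    interpreter. The DSL artifacts, not the LLM, decide validity."""
    m = GRAMMAR.match(dsl_line)
    if m is None:
        return "I need a quantity, a source and a destination. Can you rephrase?"
    qty, src, dst = m.groups()
    if src == dst:
        return "Source and destination must differ. Which did you mean?"
    return None  # valid

def handle(user_text, llm_translate):
    """llm_translate is any natural-language-to-DSL function (here a stub).
    Its output is only accepted once it passes validation; otherwise the
    validation message is returned as a clarifying question."""
    candidate = llm_translate(user_text)
    problem = validate(candidate)
    return candidate if problem is None else problem

# With a stubbed 'LLM', an invalid translation triggers a clarifying question:
print(handle("shift some stock around", lambda text: "move 5 units from A to A"))
```

Swapping in a different grammar and validator reconfigures the whole pipeline for a new domain, which is exactly the adaptability shown live in the demonstration.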
Many domain-specific modelling languages (DSMLs) combine graphical and textual elements. While it is often possible to get away with simple textual labels, many situations require more complex textual sub-languages. For example, consider the use of condition expressions in flowchart-like languages or action specifications in statechart-like languages.
Developing tool support for DSMLs that integrate expressive graphical and textual elements requires combining two different paradigms for language processing: projectional editing for the graphical elements and a parsing-based approach for the textual elements. A purely projectional approach quickly becomes inconvenient (necessitating parser integrations like grammar cells available in the MPS technology space), and parsing-based approaches currently do not handle graphical languages well. For modern web-based language workbenches, such integrations are currently not well supported.
We present a reusable framework for integrating text-based grammar support using Langium into the graphical model of GLSP-based languages. The backend maintains a complete abstract-syntax graph, irrespective of whether a particular part of the model is edited graphically or textually. Langium-managed text elements automatically manage scoping across the entire model contents, enabling scenarios where the position of a text node in the graphical model can influence the elements available during linking. The framework is available open source, and we invite the community to integrate it into their own languages.
Many companies keep moving their software to the cloud to improve scalability, availability, and reduce costs. However, rapid migrations without the necessary expertise often result in suboptimal deployments and weak security configurations. Such misconfigurations pose significant risks, including potential attack vectors when applications are exposed in the cloud. In this short demo, we will present LZA-Editor (Landing Zone Acceleration), a web-based, model-driven tool developed by Metadev for Ingram Micro and AWS Spain. The tool accelerates cloud deployments while ensuring security and compliance with ENS (the Spanish National Security Scheme). LZA-Editor is designed for executives and managers (non-developer roles) to blueprint cloud deployment strategies that meet security compliance requirements. It simplifies complexity by integrating policy enforcement, network design, and backup strategies, while embedding industry best practices by default. https://lza-editor.metadev.pro
Domain-Specific Languages (DSLs) have long been tied to specialized desktop environments. Yet, as software increasingly moves to the web, so too does the demand for DSL tooling that feels at home in the browser.
Simply running existing DSL tools in the browser is not enough. It often misses the smooth, interactive experience users expect from modern web apps.
Freon takes a browser-first approach, offering a rich set of native UI components, like checkboxes, radio buttons, sliders, and more, straight out of the box for an instantly familiar user experience. The variety of potential UI components in the web ecosystem is vast and ever-growing.
Rather than trying to pre-build them all, Freon introduces a flexible plugin mechanism that allows seamless integration of external components into the DSL editor. Even better, built-in Freon components can be embedded recursively inside external ones, enabling deep customization and a consistent user experience. This approach makes it possible to tailor DSL editors to end-users while preserving the look, feel, and responsiveness of a true web application.
In fact, the entire Freon DSL editor is itself a reusable component, ready to be embedded directly into any web app, allowing DSL capabilities to blend into existing workflows rather than forcing users into a separate tool.
In this session, we’ll showcase how multiple external components can be integrated into Freon, demonstrating how a DSL editor can be both powerful and natively web-friendly.
Short version for program overview: Freon brings DSL editing into the browser with a native web experience. Instead of just porting desktop tools, it offers built-in UI components and a plugin system to seamlessly integrate external ones, while keeping everything consistent and customizable. The entire editor is itself a component, ready to embed in any web app. In this talk, we’ll show how Freon makes DSLs feel like they truly belong on the web.
Daga is a versatile graphical diagramming tool that can be added to web applications to let users view, create, and edit a wide variety of customizable graphical models consisting of nodes and connections. It enables the creation of no-code or low-code solutions.
Among its many use cases, we have created applications that use Daga to work with a variety of established diagram notations such as UML diagrams, BPMN 2.0, and SysML 2.0, as well as custom diagrams to model CMDBs (Configuration Management Databases) and container deployment models (Docker and Kubernetes).
In this talk we will show not only the versatility of Daga for a variety of diagrams, but also features that can be implemented with it, such as real-time collaboration using CRDTs and importing/exporting the diagram with custom data formats.
Jjodel is a cloud-native reflective workbench that lowers barriers to language engineering by offering modular viewpoints for syntax, validation, and semantics in a low-code environment. While diagrammatic notations in software engineering are typically topological, capturing structure through connectivity, many engineering domains, such as railway interlocking, rely on layout-sensitive notations where meaning is conveyed by spatial configuration. These notations pose challenges not addressed by mainstream meta-editors like GMF or Sirius, since layout is traditionally treated as a rendering concern rather than a semantic one.
In layout-sensitive languages, semantics is often defined directly on the concrete syntax, which works well for human interpretation but undermines correctness when models are subject to automated transformations. Models that differ only in layout may share the same abstract syntax, jeopardizing the uniqueness of semantic interpretation. To address this, we present a formal framework that distinguishes explicit and implicit layout semantics and derives a set of requirements for unambiguous mapping from concrete to abstract syntax. These requirements cover layout extraction, semantic integration, abstraction, co-evolution, and automation readiness, ensuring that positional distinctions are preserved without polluting metamodels.
The talk introduces these principles through Jjodel, illustrating how its reflective architecture supports layout-sensitive notations while maintaining semantic integrity. A live demonstration using a simple algebraic notation highlights the trade-offs: while minimal, the case captures how spatial order affects semantics and how Jjodel enforces correctness under layout changes. This positions Jjodel as a testbed for advancing meta-language support for layout-sensitive domains, bridging the gap between human-centric diagrammatic practices and automation-driven MDE.
Programming environments typically separate the world of static code from the dynamic execution of programs. Developers must switch between writing code and observing its execution, often with limited tools to understand the relationship between code changes and runtime behavior. While several approaches exist to bridge this gap—exploratory programming for comparing code variants, live programming for instant feedback, and omniscient debugging for exploring execution history—existing solutions tend to focus on specific aspects rather than providing a fully integrated environment.
In this talk, we introduce the new concept of SpaceTime Programming, a novel approach that unifies these paradigms to create a seamless environment for exploring both code modifications and execution flow. At the core of our approach is a trace mechanism that captures not only execution state but also the corresponding code changes, enabling developers to explore programs in both space (code variants) and time (execution flow). We demonstrate this concept through two case studies: a live omniscient debugger, and a Flappy Bird game instrumented to support live and omniscient exploration.
Omniscient debugging lets developers test hypotheses and explore what-if scenarios without requiring restarts. Instead of only stepping forward through the execution, developers can also look back and inspect run-time histories. However, debuggers typically record histories as snapshots of program states, omitting the changes and causal relationships needed to answer why-questions [2].
Cascade has introduced cause-and-effect chains as a generic engine that powers change-based Domain-Specific Languages (DSLs) for live programming [3]. We aim to create a reusable omniscient debugger for change-based DSLs by integrating debugging mechanisms with cause-and-effect chains.
After a brief introduction [2], we give a live demo of a reusable omniscient debugger that provides a select-and-filter mechanism for answering why-questions [1]. We illustrate how to isolate issues and identify root causes for the Live State Machine Language, showing both executions from recorded histories and live sessions. The new interaction model supports practical and reusable omniscient debugging for change-based DSLs.
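The difference between snapshot histories and change-based histories can be made concrete with a small sketch. The following Python toy (illustrative only, not Cascade's actual engine or its API) records each change together with the event that caused it, so a why-question becomes a walk back along the cause chain:

```python
class Trace:
    """Records changes with their causes, so 'why' questions can be
    answered; plain state snapshots would lose these links."""

    def __init__(self):
        self.events = []  # index -> {"target", "value", "cause"}

    def record(self, target, value, cause=None):
        """Append a change; cause is the index of the triggering event."""
        self.events.append({"target": target, "value": value, "cause": cause})
        return len(self.events) - 1

    def why(self, index):
        """Walk the cause chain back from one change to its root cause."""
        chain = []
        while index is not None:
            event = self.events[index]
            chain.append((event["target"], event["value"]))
            index = event["cause"]
        return chain

# A state-machine-flavored example: why is the lamp lit?
t = Trace()
press = t.record("button", "pressed")
trans = t.record("machine.state", "On", cause=press)
light = t.record("lamp", "lit", cause=trans)
print(t.why(light))
# [('lamp', 'lit'), ('machine.state', 'On'), ('button', 'pressed')]
```

Selecting an effect and filtering its cause chain, as in this sketch, is the interaction the demo scales up to full language histories.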
Keywords. Domain-Specific Languages, Omniscient Debugging, Execution Traces, Live Programming, Cause-and-Effect Chains, Language Workbenches
- [1] Jakub Kaşıkcı. 2025. Omniscient Debugging for Change-Based Domain-Specific Languages. Master’s thesis. University of Amsterdam. Ongoing project.
- [2] Jakub Kaşıkcı, Riemer van Rozen, and Tijs van der Storm. 2025. Omniscient Debugging: A Systematic Mapping Study. (2025). Ongoing work.
- [3] Riemer van Rozen. 2023. Cascade: A Meta-Language for Change, Cause and Effect. In Software Language Engineering. ACM.
This presentation introduces the collaborative architecture of jjodel, a cloud-based modeling platform that allows real-time, multi-user collaborative modeling. jjodel draws inspiration from the gaming domain, particularly from multiplayer games where actions are executed and broadcast instantly to all participants. It employs a room-based collaboration model, where every modeling action is immediately shared with all connected users, ensuring everyone has a consistent and up-to-date view of the artifacts being worked on.
The backend is implemented using socket-based communication, which supports low-latency synchronization. Additionally, jjodel features an Action Language that represents change operations in a uniform manner. This DSL ensures that edits can be consistently propagated, replayed, and analyzed across clients. To store collaboration data, jjodel utilizes a persistence layer that maintains both the current state and the history of collaborative sessions.
Similar to online games where conflicts are minimized because all players can instantly see and react to the same actions, jjodel reduces conflicts by synchronizing updates immediately across clients. This results in a smooth and reliable collaborative modeling experience.
We will demonstrate jjodel’s collaborative features through modeling scenarios, showcasing concurrent editing, presence awareness, and the effectiveness of this gaming-inspired approach to real-time collaborative language engineering.
An update of my 2023 LangDevCon talk on Projectional Forms: using (extended) HTML forms to define a model, meta-model, and generator online. The starting point is that everything that can be done with code can be done with forms. Some points I want to show:
- LionWeb compatibility.
- From grammar to forms. Patterns. Handling recursive definitions.
- Building generators with forms.
The interoperability of modeling frameworks is an important condition for creating an ecosystem of modeling tools, where one can mix and match components from various technologies and benefit from the best capabilities each tool has to offer. The LionWeb protocol was started with the mission to achieve such interoperability between modeling frameworks and language workbenches.
To test the power of LionWeb, we apply it to SysML v2, a general-purpose modeling language that is highly anticipated in the world of Model-Based Systems Engineering (MBSE). The SysML v2 language has been developed by the Object Management Group (OMG) and is prototyped in an open-source pilot implementation. The pilot implementation of SysML v2 uses the Eclipse Modeling Framework (EMF) for capturing the structure of the language and Xtext for providing its textual notation.
As EMF is already connected to LionWeb, we can use the pilot implementation of SysML v2 to export the metamodel of the SysML v2 language into LionWeb and then reuse it in the other modeling frameworks from the ecosystem, such as JetBrains MPS and Freon. In this talk we demonstrate what it takes to export an EMF-based large-scale language, such as SysML v2, to the LionWeb format; how SysML v2 is imported into the Freon and MPS language workbenches; and what kind of opportunities are created by this ecosystem of interoperable language engineering tools.
Raw datasets are often too large and unstructured to work with directly, necessitating a data preparation process. The domain of industrial Cyber-Physical Systems (CPSs) is no exception, as raw data typically consists of large time-series datasets logging the system’s status at regular time intervals. We introduce CPSLint, a Domain-Specific Language (DSL) designed to support the data preparation process for industrial CPSs. We leverage the fact that many raw datasets in the CPS domain require similar actions to render them suitable for Machine-Learning (ML) solution workflows, e.g., Fault Detection and Identification (FDI) workflows.
CPSLint’s main features include enforcing constraints through validation and remediation for data columns, e.g., imputing missing data from surrounding rows, as well as statistical insights. These insights provide user-friendly analytic output such as plots. More advanced features cover inference of extra CPS-specific data structures, both column-wise and row-wise. For instance, descriptive execution phases, an effective means of data compartmentalisation, are extracted and prepared for ML-assisted FDI workflows.
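As a concrete example of the kind of remediation mentioned above, imputing a missing sensor reading from surrounding rows can be sketched as follows. This is an illustrative Python sketch of one plausible strategy (nearest-neighbour averaging), not CPSLint's actual implementation:

```python
def impute_from_neighbours(column):
    """Fill missing readings (None) with the mean of the nearest
    non-missing neighbours in a time-series column; a trailing or
    leading gap falls back to its single available neighbour."""
    filled = list(column)
    for i, v in enumerate(filled):
        if v is None:
            prev = next((filled[j] for j in range(i - 1, -1, -1)
                         if filled[j] is not None), None)
            nxt = next((filled[j] for j in range(i + 1, len(filled))
                        if filled[j] is not None), None)
            candidates = [c for c in (prev, nxt) if c is not None]
            filled[i] = sum(candidates) / len(candidates) if candidates else None
    return filled

# A gap between 20.0 and 22.0 becomes their mean; the trailing gap
# copies its only neighbour:
print(impute_from_neighbours([20.0, None, 22.0, None]))
# [20.0, 21.0, 22.0, 22.0]
```

In a DSL like CPSLint, such a strategy would be one declaratively selectable remediation per column, rather than hand-written per dataset.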
Freon is a lightweight, web-native language workbench for rapidly building domain-specific languages (DSLs) with projectional editors and modular tooling. In its latest version, Freon introduces a dedicated scoping meta-language that allows language designers to declaratively define name-binding and resolution rules—removing the need for boilerplate or low-level implementation code. This talk will introduce the scoping meta-language and show how it:
- Captures common scoping patterns—nested scopes, imports, cross-references—in a concise, reusable notation.
- Supports extension and customization to match the binding semantics of complex DSLs.
- Integrates seamlessly with Freon’s meta-toolchain to provide immediate feedback in the editor.
Using a working DSL example, I will demonstrate how these declarative rules are encoded, tested, and applied in practice, and how they interact with other Freon modules such as typing and validation. Although scoping is a fundamental aspect of virtually every DSL, it has often been treated as a secondary concern in language workbenches—receiving less attention than syntax or type systems. This presentation puts scoping center stage, showing how Freon’s meta-language turns what is usually an overlooked, intricate problem into a clear, concise, and highly maintainable specification.
Short program version: Scoping is essential to every domain-specific language, yet often receives little attention in language workbenches. This talk presents Freon’s new scoping meta-language—a declarative, extensible way to define name-binding rules that makes scope management both simpler and more powerful. Through a live DSL example, we’ll show how this approach streamlines language design while reducing implementation effort.
In JetBrains MPS all models are instances of MPS languages, including the models that specify an MPS language itself. This makes it possible to programmatically generate models for all aspects of an MPS language. As a proof of concept, we have set up a generator that creates the generator aspect of an MPS language. We show that this is even possible at a high level of abstraction with generator templates using the MPS generator language. In the end, we created an MPS meta-generator: an MPS generator that generates MPS generators.
An interesting question in the context of meta-generators is how to control the evaluation time of macros in the generator templates. Furthermore, we introduced a mechanism to weave the generated generator code with hand-crafted code parts for tailoring the generated generators. We also identified limitations where MPS seems to not allow for perfectly nice solutions for some meta-generator details. As an application example, we use our meta-generator approach to export arbitrary MPS model data in XML format. A meta-generator takes the structure aspect of an MPS language as input and generates a respective MPS generator which transforms models of this language into models of the standard MPS XML language. The output format can easily be changed by switching to another meta-generator.
With great power comes great complexity. Modern code completion systems have evolved to offer increasingly sophisticated suggestions. At the same time, as they grow more capable, their results also become busier and noisier. Avalanches of suggestions can overwhelm rather than assist, leading to cognitive fatigue and reduced productivity.
In this talk, we will introduce CoCoCoLa (Code Completion Control Language) — an alternative approach that puts the users back in control by enabling them to specify and filter for desired properties in the presented suggestions. Instead of relying solely on implicit context and heuristics, CoCoCoLa allows developers to articulate their needs explicitly. In addition, we will share the design choices behind CoCoCoLa and explore a working proof-of-concept prototype for the IntelliJ platform. Through this talk, we aim to offer a first taste of a more controllable and transparent code completion experience.
Nelumbo is a new language designed for DSL development and logic programming. With Nelumbo, you can define the syntax and semantics of your language, and immediately parse, test, and execute your definitions. Nelumbo offers fully declarative semantics. Nelumbo is open source and you are invited to participate in its development. Wim will give a live demo and discuss the rationale behind Nelumbo's creation. https://github.com/ModelingValueGroup/nelumbo