Program

Latest updates to the program

Schedule

Thursday, October 17
8:30 Registration
Track 1. Salon de Grados
9:00 Welcome
9:30 OPENING KEYNOTE
Domain-specific languages: towards increased adoption of model-based system engineering
Marcel Verhoef — ESA
10:00 Transitioning to MBSE in Space Projects at GMV: Insights, Challenges and Lessons Learned Elena Alaña & Sergi Company — GMV
10:30 Coffee break
11:00 Application of modeling techniques for on-board satellite applications Óscar R. Polo & Pablo Parra — UAH
11:30 Feature Models with dimensions in Space Missions Pedro J. Molina — Metadev
12:00 A SysML-based Framework for Analyzing Security and Safety Properties Applied on an Aerospace Data Link Uplink Feed System Deni Raco — RWTH Aachen
12:30 Cinco Cloud — Live Metamodeling of Graphical DSLs Daniel S. Mitwalli & Daniel Busch
13:00 Lunch
14:30 Round table Elevator Pitch.
Goal: foster collaboration & networking.
Briefly present yourself, your company, or your research group in 60 seconds: stress your area of expertise, your goals or the kind of problems you focus on solving, and whether you are looking for any particular skills for collaboration.
15:00 Building AI-based assistants for DSLs with ModelMate Jesús Sánchez — U. of Murcia
15:30 Enriching Models with Extra-Functional Information Sebastian Stüber — RWTH Aachen
16:00 Introducing Rustemo: A LR/GLR Parser Generator for Rust Igor Dejanović — U. of Novi Sad
16:30 Coffee break
17:00 Introducing Typir for Type Checking in the Web Johannes Meier — Typefox
17:30 Common modeling mistakes and the necessity of declarative modeling Wim Bast — Modeling Value Group
18:00 Language Engineering for Language Migrations Federico Tomassetti — Strumenta
Thursday, October 17
8:30 Registration
Track 2. Room A2.12
10:00 Mutation testing for Domain-Specific Languages: the case of task-oriented chatbots Pablo Gómez-Abajo — UAM
10:30 Coffee break
11:00 Langium + AI: Building AI Applications for DSLs Benjamin F. Wilson — Typefox
11:30 Legacy Code Transformation with AI: Translating PL/I to Kotlin Using Seq2Seq Transformers Gian Cunningham — Heirloom Computing
12:00 Enhancing hardware design through domain-specific language tools for higher abstraction Luca De Santis — Micron Technology Italia
12:30 Idea Flowing Language — a graphical domain specific language for web games Sebastian Blume — U. of Dresden
13:00 Lunch
14:30 Round table Elevator Pitch (joint session; see Track 1)
15:00 Jabuti CE: Writing Executable Smart Contracts for Enterprise Application Integration Mailson T. Borges — Unijui U.
15:30 Open Science principles in software product lines: The case of the UVL ecosystem David Romero — U. of Seville
16:00 Improving the Observability & Security of Kubernetes with MDE Javier Centeno & Jesús Rodríguez — Metadev
16:30 Coffee break
17:00 Building applications with spreadsheets Jesús Sánchez — U. of Murcia
17:30 arKItect graphical DSL workbench Andrei Samokish — Knowledge Inside
18:00 Let’s make a Pact. Don’t break my API! Frank Kilcommins
Friday, October 18
8:30 Registration
Track 1. Salon de Grados
9:00 The Arazzo Specification: Unlocking Value for Humans and Machines Frank Kilcommins — Smartbear
9:30 Collaborative Modeling capabilities in Orca with CRDTs Pedro J. Molina & Javier Centeno — Metadev
10:00 GenFPL: DSL-embeddable functional programming languages Meinte Boersma — DSL Consulting
10:30 Coffee break
11:00 LionWeb: Status Update Jos Warmer — Independent & Nico Stotz — F1RE
11:30 LionWeb and Kolasu: an integration story Alessio Stalla — Strumenta
12:00 Implementing LionWeb in Rascal Ulyana Tikhonova — F1RE
12:30 A Case Study: Execution of LionWeb nodes in Truffle Language Framework Erkan Diken — F1RE
13:00 Lunch
14:30 Bringing Notebook Experiences to modern IDEs Mark Sujew — Typefox
15:00 Seamless Interpreter Integration for Compositional Modeling Languages Nico Jansen & Bernhard Rumpe — RWTH Aachen
15:30 Augmenting graphical modeling workbenches with semantic-aware interactive features Théo Giraudet — Obeo
16:00 Coffee break
16:30 Current Developments in Tooling and Editor Support in textX Milan Sovic — U. of Novi Sad
17:00 A build tool for modular MPS projects Arjan Oortgiese
17:30 Closing session
Saturday, October 19

Session Details

Marcel Verhoef — European Space Agency

One of the main challenges of model-based systems engineering (MBSE) is to provide adequate levels of abstraction and notations for different stakeholders and domains, while ensuring consistency and traceability across the system lifecycle. Domain-specific languages (DSLs) are a promising technology to address this challenge, as they allow the definition of tailored syntax and semantics for specific system aspects, such as requirements, architecture, behaviour or verification. DSLs can also facilitate the integration of MBSE with other engineering disciplines, such as software, hardware or cyber-physical systems, by enabling the interoperability and transformation of models across different domains. Furthermore, DSLs can leverage the benefits of MBSE, such as increased productivity, quality and reuse, by providing automated visualisation and analysis, simulation and code generation capabilities for domain-specific models. Therefore, DSLs and MBSE can mutually enhance each other and contribute to the adoption of model-based approaches in mainstream development. Both DSLs and MBSE have been around for quite a while and, despite their promise, we still see many obstacles to their adoption in practice. In my talk, I will reflect on our past experiences, the current challenges in the European space context, and look ahead at the near future for opportunities to collaborate.

Elena Alaña Salazar & Sergi Company Aguilar — GMV

Model-Based Systems Engineering (MBSE) has significantly impacted the space industry, with the European Space Agency (ESA) being a key actor in the adoption of these technologies. MBSE is a practice digitalising systems engineering, and the goal is to apply it in all phases of a project, ranging from Phase 0 / pre-Phase A (concept development) to Phase F (closeout). The transition from a traditional approach to MBSE presents several challenges, especially for large corporations trying to change their well-established procedures.

GMV's practical case is studied in this presentation, looking back at the goals set when MBSE was initially incorporated, where the company is at the current time, and what the future prospects are. A timeline outlining key activities performed in the past and next steps in the near future is provided. Moreover, an overview of current projects where MBSE is at the centre is presented, prompting discussion on the challenges specific to the application of MBSE in space projects involving large teams. Topics including requirements management, document generation automation and reusability between projects are put in the spotlight of the discussion.

Óscar R. Polo & Pablo Parra — Universidad de Alcalá

The development of on-board satellite software faces numerous challenges and demanding deadlines due to the complexity and critical nature of space missions. Validation and verification (V&V) are crucial to the development process, as they provide evidence for non-functional requirements and ensure compliance with quality standards such as the European Space Agency's ECSS-E-40. Another major challenge is the use of various configured deployment platforms and the need for adaptation to hardware evolutions during the engineering process.

The Space Research Group of the University of Alcalá has successfully participated in multiple space missions, such as the development of the on-board software for the instrument control unit of the Energetic Particle Detector of Solar Orbiter and the INTA's Nanosat missions. To carry out these activities, we have used an approach that incorporates solutions based on domain-specific languages and component-based software engineering, which have significantly streamlined the development process, enhancing efficiency and ensuring that our developments are compliant with established standards.

In this presentation, we will share case studies of our model-based approach, which include the integration of tools for the verification of temporal constraints and configuration management, automatic code generation, and the integration of low-code tools for validation tests.

Pedro J. Molina — Metadev

Feature Models are powerful abstractions used to decompose Space Missions into components, configure them, and double-check that constraints are enforced at design and validation time.

In this talk we will describe our vision for tracing requirements to Feature Models enhanced with physical dimensions (constraints), and how each component or feature can be used as the source abstraction to derive the software concepts for embarked components and, at the same time, the virtual representation in a Digital Twin for the Ground Segment, which can be used both for simulation and for operation with real telemetry during the mission.

This approach provides a cohesive way to introduce models as the key driver of a Space Mission, with impact on design, prototyping, validation, embarked components, and the ground segment for tracking and operation.
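As a rough illustration of the idea (a hypothetical Python sketch, not Metadev's notation or tooling), the following snippet attaches physical dimensions such as mass and power to features and checks a selected configuration against mission-level budgets at design time:

    from dataclasses import dataclass

    @dataclass
    class Feature:
        name: str
        mass_kg: float = 0.0
        power_w: float = 0.0

    # hypothetical feature catalogue and mission budgets
    CATALOG = {
        "obc":       Feature("obc", mass_kg=0.8, power_w=4.0),
        "camera":    Feature("camera", mass_kg=1.2, power_w=6.5),
        "uhf_radio": Feature("uhf_radio", mass_kg=0.3, power_w=1.5),
        "s_band":    Feature("s_band", mass_kg=0.5, power_w=5.0),
    }
    MISSION_LIMITS = {"mass_kg": 2.5, "power_w": 12.0}

    def check_configuration(selected):
        """Return the dimensional constraints violated by a feature selection."""
        total_mass = sum(CATALOG[f].mass_kg for f in selected)
        total_power = sum(CATALOG[f].power_w for f in selected)
        violations = []
        if total_mass > MISSION_LIMITS["mass_kg"]:
            violations.append(f"mass budget exceeded: {total_mass} kg")
        if total_power > MISSION_LIMITS["power_w"]:
            violations.append(f"power budget exceeded: {total_power} W")
        return violations

    print(check_configuration(["obc", "camera", "uhf_radio"]))  # [] (within budget)
    print(check_configuration(["obc", "camera", "s_band"]))     # power budget exceeded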

Hendrik Kausch, Mathias Pfeiffer, Deni Raco & Bernhard Rumpe — RWTH Aachen University

The MontiBelle Framework consists of:

  1. a SysMLv2 profile and a parser,
  2. the mathematical formalism FOCUS as a semantic backend for distributed systems, which provides compositionality of refinement,
  3. a modular development methodology based on the SPES methodology,
  4. an encoding of FOCUS in the theorem prover Isabelle,
  5. a code generator from SysMLv2 into the FOCUS encodings in Isabelle, generating specifications and proofs,
  6. a formal IDE, enabling modeling, navigation, visualization, interactions with the formal backend, and highly automated formal verification.

Demo Outline: An aerospace case study about a data link uplink feed system, used e.g. for the communication between a satellite and a ground station, is formally developed and accompanied by verification. During the decomposition and refinement steps, 26 formal properties were proven by the MontiBelle tool. The formal IDE, the SysML v2 models, the Isabelle specifications, and the refinement proofs are featured in the demo.

Benoit Combemale — University of Rennes, IRISA & ESIR

Identifying the best abstractions to integrate into a Domain-Specific Language (DSL) is typically a continuous and iterative process. In practice, it is often the result of usage of a more generic DSL and the identification over time of recurring patterns to be abstracted and integrated in a more specific DSL. While collaboration between domain experts and language engineers is essential to initiate the development of a DSL and its associated tools, in many contexts it is impractical to keep the language engineers continuously involved for ongoing iterations. Therefore, it becomes crucial for domain experts to be able to independently and incrementally refine their DSL and its tooling. However, current Software Language Engineering (SLE) tools and methods (e.g., language workbenches) demand extensive expertise in language engineering, which domain experts often lack.

In this talk, we present a novel approach enabling domain experts to iteratively specialise their own DSLs and associated tooling. Our approach integrates several techniques, including language slicing for immediate and opportunistic reuse, as well as the promotion of language constructs into the metalanguage to facilitate the definition of new constructs. We demonstrate the relevance of our approach through a case study in the security domain, that we use to identify and to discuss the challenges associated with achieving this vision.

Daniel S. Mitwalli, Daniel Busch, Alexander Bainczyk, Marco Krumrey, & Jonas Schürmann

Cinco Cloud introduces a holistic web-based environment that enables language engineers to metamodel graphical domain-specific languages and domain experts to use these languages within corresponding integrated modeling environments. We now present the advanced approach of live metamodeling, which seamlessly blends the transition between language engineering and the use of a graphical DSL within a single environment.

Changes to the metamodel of a graphical DSL are immediately reflected in the graphical editor of the language being used, including the syntax and semantics of a language. The presentation illustrates the functionality of live metamodeling within Cinco Cloud with a concrete application example.

Jesús Sánchez — University of Murcia

The availability of AI features to enhance developer productivity is now the norm in programming IDEs. However, DSL environments cannot keep pace and by default do not have such smart editing facilities. A key reason for this is that the scope of a DSL is typically small, which means that only a few training examples are expected to be available, and that the amount of time available to develop an AI assistant is also limited.

This talk presents ModelMate, a system to build AI assistants for DSLs based on fine-tuning pre-trained language models. In this talk we will discuss which editing and assistance tasks are supported by ModelMate, the steps involved in building an assistant for a specific DSL depending on whether it is textual or graphical, and how ModelMate works in practice. We will also perform a short demonstration with some examples of assistants that we have built (e.g., for Emfatic and Xtext) and report some performance figures. ModelMate is available on GitHub: models-lab/model-mate.
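To make the fine-tuning idea concrete, here is a minimal, hypothetical sketch (plain Python, not ModelMate's actual pipeline) of turning a small corpus of DSL files into prompt/completion pairs on which a pre-trained language model could be fine-tuned:

    import glob, json, re

    def tokenize(text):
        # naive tokenizer; a real setup would reuse the pre-trained model's tokenizer
        return re.findall(r"\w+|[^\w\s]", text)

    def completion_examples(source, context_size=16):
        tokens = tokenize(source)
        for i in range(1, len(tokens)):
            prefix = tokens[max(0, i - context_size):i]
            yield {"prompt": " ".join(prefix), "completion": tokens[i]}

    with open("train.jsonl", "w") as out:
        for path in glob.glob("dsl_corpus/*.emf"):  # e.g. a folder of Emfatic files
            with open(path) as f:
                for example in completion_examples(f.read()):
                    out.write(json.dumps(example) + "\n")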

Sebastian Stüber — RWTH Aachen University

We explore systematic techniques to enrich models with additional information while preserving their standard and functionality. This opens up possibilities for new functionalities, such as sophisticated analysis and generator configurations. Employing a systematic approach, our methods are universally applicable, regardless of the specific modeling language. Three approaches will be compared:

  1. Stereotypes to add information within the model,
  2. Tagging, as a generic approach to add information in a separate file, and
  3. A use-case-specific language, which references the original model.

We will evaluate and compare the approaches from both the domain-expert and the tool-developer point of view.

Multiple examples demonstrate the applicability of the approaches. These examples consist of an effect-analysis for component and connector models, adding sustainability information to models, and a fine-grained code-generator configuration. The demonstrations will use the MontiCore language workbench.
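As a minimal sketch of the tagging idea (approach 2), illustrative only and not the MontiCore tagging infrastructure: the extra-functional information lives in a separate artifact that references model elements by name, so the original model stays untouched.

    # base model, e.g. a component and connector architecture (simplified)
    model = {
        "Controller": {"ports": ["cmd_in", "status_out"]},
        "Sensor":     {"ports": ["raw_out"]},
    }

    # separate "tag file": sustainability/extra-functional data keyed by element name
    tags = {
        "Controller": {"energy_budget_mW": 120, "safety_level": "high"},
        "Sensor":     {"energy_budget_mW": 35},
    }

    def enriched(name):
        """Combine the untouched base element with its externally stored tags."""
        return {**model[name], **tags.get(name, {})}

    print(enriched("Controller"))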

Igor Dejanović — University of Novi Sad

In this talk, we introduce Rustemo, a new open-source and free parser generator for Rust that supports deterministic LR (with various flavors of LALR) and generalized LR based on the Right-Nulled GLR algorithm (RNGLR).

Rustemo features a clean separation between its core components: lexers, parsers, and builders. Lexers break an input sequence into tokens. Parsers perform syntax analysis on the tokens recognized by the lexer, while builders are responsible for producing the final output. By selecting or writing an appropriate lexer, different inputs can be parsed. Additionally, by choosing or writing an appropriate builder, one can obtain an auto-generated and manually tuned abstract syntax tree, a concrete syntax tree, or another structure at the end of the parsing process. For the default builder, AST types and actions are deduced from the grammar, allowing users to change them while preserving the modifications in subsequent round-trip generations.

Although still a relatively young project, Rustemo implements a notable set of features, is fully documented, and has good test coverage.

Following a short introduction, the typical workflow for using Rustemo will be demonstrated through a live demo of building a parser for a simple language. The code and instructions for the demo will be available on the project site and Git repository.

Johannes Meier — Typefox

Type systems are crucial parts of developing languages. Among others, type systems are needed for checking static semantics like validating assignability, resolving cross-references to type-dependent targets like overloaded functions, and generating source code for statically typed languages like translating DSLs to TypeScript.

Developing type systems for languages faces several recurring challenges, including similar kinds of types like primitives, classes, and functions, recursive type definitions, performance optimizations, and providing meaningful error messages for users of the language.
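To make these recurring ingredients concrete, here is a generic illustration in plain Python (deliberately not Typir's TypeScript API): a few type kinds plus an assignability check of the sort needed to validate assignments and call arguments.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Primitive:
        name: str                  # e.g. "int", "float", "string"

    @dataclass(frozen=True)
    class Function:
        params: tuple              # parameter types
        result: object             # result type

    INT, FLOAT, STRING = Primitive("int"), Primitive("float"), Primitive("string")

    def assignable(source, target):
        """Can a value of type `source` be used where `target` is expected?"""
        if source == target:
            return True
        if source == INT and target == FLOAT:        # an implicit widening rule
            return True
        if isinstance(source, Function) and isinstance(target, Function):
            return (len(source.params) == len(target.params)
                    and all(assignable(t, s)          # contravariant parameters
                            for s, t in zip(source.params, target.params))
                    and assignable(source.result, target.result))
        return False

    print(assignable(INT, FLOAT))    # True
    print(assignable(STRING, INT))   # False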

With the rise of software running in the web, including programming languages and DSLs, Typir is introduced as a new open-source framework for type checking in the web. Implemented in TypeScript, Typir supports textual languages developed in the web stack, and can be applied to other model sources as well. Typir is designed to be more pragmatic than formal, e.g. no rule engine is used, but it provides solutions for recurring type checking challenges.

After motivating the challenges of developing type systems from scratch, this talk shows the design of Typir and applies it to concrete example languages. This presentation aims to discuss with the language engineering community the needs for web-based type checking, to collect early feedback, and to identify additional integration scenarios for Typir.

Federico Tomassetti — Strumenta

Migrating from one programming language to another is a complex, expensive, and valuable challenge. Techniques from the Language Engineering community can address this issue.

We present the current version of our solution, drawing on our experience with large-scale migrations across various domains.

We will discuss the numerous challenges we have encountered and those that lie ahead, including performing symbol resolution at scale, identifying code clones, eliminating dead code, and recognizing recurring patterns with the intent of supporting idiomatic translations. Finally, we will demonstrate how LionWeb has enabled us to develop a multi-platform solution for operating on models.

Wim Bast — Modeling Value Group

The author will speak about his observations from his extensive journey of realizing DSLs in different domains. He will share his thoughts about common mistakes made in modeling of, and thinking about, the domain, most notably the pitfalls of imperative or procedural thinking. He will address the notions of 'minimal complexity' and 'avoiding over-specification'. From there he will argue for the necessity of completely declarative modeling. At the end he will address the requirements for declarative modeling languages and some open challenges. Positive and negative examples will be shown and demonstrated. A dialogue with the audience is welcome during the presentation.

Pablo Gómez-Abajo — Universidad Autónoma de Madrid

Mutation testing is a well-known technique for assessing the quality of software test suites. It involves introducing artificial faults into the source code, which generates a set of source variations called mutants. Next, we can apply the test suites to such mutants to account for how many of these mutants are detected, i.e., how many of them are killed. This measure yields a metric of the quality of the test suites called mutation score.
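As a tiny illustration of the metric (hypothetical toy data, not Wodel-Test itself), a mutant counts as killed if at least one test that passes on the original program fails on it:

    def mutation_score(mutants, test_suite):
        killed = sum(1 for mutant in mutants
                     if any(not test(mutant) for test in test_suite))
        return killed / len(mutants)

    # toy example: "programs" are plain functions, mutants are faulty variants
    original = lambda x: x + 1
    mutants = [lambda x: x - 1,   # killed by the tests below
               lambda x: x + 1]   # equivalent mutant, never killed

    tests = [lambda p: p(0) == 1,
             lambda p: p(41) == 42]

    print(mutation_score(mutants, tests))  # 0.5, i.e. half of the mutants were killed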

Creating ad-hoc mutation testing tools from scratch is costly and error-prone, which makes their application to Domain-Specific Languages (DSLs) difficult. To alleviate these inconveniences, the Miso group built Wodel-Test, a domain-agnostic tool to engineer language-specific mutation testing tools. Using Wodel-Test, we have successfully generated mutation testing tools for both DSLs and general-purpose languages, such as finite automata, Java, ATL, and task-oriented chatbots.

In this presentation, we will showcase Wodel-Test for chatbots. The latter are defined using a DSL called Conga, which can generate code for chatbot platforms like Dialogflow and Rasa. We will demonstrate all steps in developing the testing solution and its use in analysing chatbots and their test suites.

Benjamin F. Wilson — Typefox

Delivering AI applications that understand custom DSLs is a challenge. Even state-of-the-art models have trouble recognizing and generating syntax for novel DSLs. Good prompting, retrieval augmented generation, and fine-tuning can help to correct this, but it's a time-intensive task, and you may not get the results you want in the end. For Langium-based DSLs, we've had the same problem, and so we would like to present Langium-AI as a solution.

Langium-AI is a proposed extension to the Langium engineering framework to support developing AI applications for DSLs in parallel with your DSL itself. We'll demonstrate how this approach works by deeply integrating the design process with the existing Langium workflow, resulting in consistent & testable results throughout the development process. We'll also walk through a concrete example of such an AI application being developed for a fresh DSL.

Gian Cunningham — Heirloom Computing

In my talk, I will explore the application of language engineering by showcasing an article I have written, based on a Strumenta article, in which I develop a sequence-to-sequence (Seq2Seq) Transformer model with PyTorch to translate and transpile legacy PL/I code into modern Kotlin.

The notebook involves constructing a custom Transformer model that uses advanced techniques to facilitate sequence-to-sequence tasks like language translation. A dataset is prepared by tokenizing (converting text into smaller units called tokens, which can be words, subwords, or characters) the PL/I and Kotlin code, building vocabularies, and creating training and test splits. The training process then employs various techniques to optimize model performance. The translation process is enhanced by integrating ANTLR4 for parsing PL/I code and using Jinja2 for code generation, ensuring a smooth transition from legacy to modern codebases.
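A simplified sketch of that data-preparation step (illustrative only, not the notebook's actual code) could look as follows: tokenize aligned PL/I and Kotlin snippets, build one vocabulary per language, and split the pairs for training and testing.

    import random, re

    pairs = [
        ("DCL X FIXED BIN(31); X = X + 1;", "var x: Int = 0; x = x + 1"),
        ("PUT SKIP LIST('HELLO');",         "println(\"HELLO\")"),
        # ... in practice, thousands of aligned PL/I -> Kotlin examples
    ]

    def tokenize(code):
        # naive tokenizer splitting identifiers, numbers and punctuation
        return re.findall(r"\w+|[^\w\s]", code)

    def build_vocab(snippets):
        vocab = {"<pad>": 0, "<sos>": 1, "<eos>": 2, "<unk>": 3}
        for snippet in snippets:
            for token in tokenize(snippet):
                vocab.setdefault(token, len(vocab))
        return vocab

    src_vocab = build_vocab(src for src, _ in pairs)
    tgt_vocab = build_vocab(tgt for _, tgt in pairs)

    random.shuffle(pairs)
    split = int(0.8 * len(pairs))
    train, test = pairs[:split], pairs[split:]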

The results demonstrate the practical applicability of AI in language engineering, showcasing how neural networks can transform outdated code into contemporary solutions.

Sebastian Blume — University of Dresden

Game experts are not necessarily programmers, but they are able to design games. The problem is that most design languages only simulate games, while languages for developing games not only require programming skills but also fail to support design processes.

The solution is the Idea Flowing Language (IFL), a graphical domain-specific language that allows designing and developing games at the same time. In other words, when the expert designs a game as an IFL model, there is no need to design a controller or a view for it right away; even without these, it is already possible to test and play the game at the model level. Because the game logic and the game layout are encapsulated separately, MVC can be realized in the context of domain-specific languages.

Basically, all of this is possible because IFL is a mash-up of high-level Petri nets, behavior trees, and Harel state machines. IFL is currently designed for small games such as dice games. In the future, apps with a lot of interaction should be possible; with heavier data processing and many instances, vector engines could be applied.

Benoit Combemale — University of Rennes, IRISA & ESIR

As software grows increasingly complex, the number and diversity of concerns to be addressed also rises. To answer this diversity of concerns, developers may end up using multiple, possibly domain-specific, languages in a single software project, a practice known as polyglot programming. This practice has gained momentum with the rise of execution platforms capable of supporting polyglot systems (e.g., GraalVM, possibly with Truffle for DSLs). However, despite this momentum, there is a notable lack of design-time support for developers working on polyglot programs, such as in debugging facilities.

This talk addresses this gap by introducing a novel debugger architecture that is language-agnostic yet leverages existing language-specific debuggers. The proposed architecture is dynamically extensible to accommodate the evolving combination of languages used in polyglot software development. It utilizes the Debug Adapter Protocol (DAP) to integrate and coordinate existing debuggers within a debugging session. The effectiveness of this approach is demoed with a prototype, PolyDebug, and its application to use cases involving C, JavaScript, and Python.

This session includes a prototype & demo.

Mailson Teles-Borges, Eldair F. Dornelles, Sandro Sawicki, Fabricia Roos-Frantz, Rafael Z. Frantz — Unijui University,
Jose Bocanegra — Universidad de los Andes,
Antonia M. Reina-Quintero — University of Seville, &
Carlos Molina-Jimenez — University of Cambridge

Usually, the integration of two or more enterprise applications involves business or technical constraints, which can be represented by smart contracts. Nevertheless, writing smart contracts is an error-prone task due to the knowledge required and the variety of programming languages and blockchains available. Jabuti DSL, a domain-specific language (DSL) for Enterprise Application Integration, tries to solve this problem through a simplified syntax and business-level semantic constructs.

As Jabuti DSL smart contracts are non-executable, we have developed Jabuti CE, a tool for specifying and transforming Jabuti DSL smart contracts into executable code. It consists of three main components: an ANTLR-based Jabuti DSL grammar, a VSCode plugin, and a transformation engine. The grammar contains the Jabuti DSL semantic rules. The VSCode plugin is an extension for the VSCode editor and provides features such as code highlighting, autocompletion, code navigation, and syntax and semantic validation, among others. The transformation engine allows the implementation of template transformations for any blockchain, and currently provides implementations for Ethereum (through the Solidity language) and Hyperledger Fabric (through Golang).

Jabuti CE code is available at https://github.com/gca-research-group/jabuti-ce-transformation-engine.

David Romero, José Ángel Galindo, Bhushan Megha, José Miguel Horcas, & David F. Benavides — University of Seville

Feature models are essential for modelling variabilities in software product lines, which often reside in private or dispersed repositories. The Universal Variability Language (UVL) addresses this issue by standardising the serialisation of feature models and promoting community-driven sharing.

Open science aims to make scientific research and data accessible to all. Its principles are based on transparency, accessibility and collaboration. This approach can be applied to the software product line community using a specific set of tools:

  1. uvlhub to share datasets of feature models,
  2. flamapy to enrich and extract metrics from these models, and
  3. FM Fact Label to visualise features from feature model data.

We identify the missing characteristics that feature modelling practitioners need in current open science tools.

Through hands-on sessions, participants will gain practical experience in using uvlhub to host datasets, flamapy to analyse them, and FM Fact Label to visualise them. This will not only demonstrate the integration of these tools into their research workflows but also show how their work can align with open science principles.

Javier Centeno, Jesús Rodríguez & Pedro J. Molina — Metadev

Enterprise computing is moving steadily to containers, using orchestration tools that are different flavors of the same foundation: Kubernetes.

Kubernetes is capable of managing many machines (physical or virtual) to deploy containers in a distributed manner, providing high availability, flexible load balancing, and fast provisioning. Mid and large-sized enterprises run hundreds of containers inside such Kubernetes clusters. Therefore, Observability and Security checking over the graph of nodes, applications, services, instances, and dependencies is a real need at scale.

Traditional tools in the Kubernetes landscape are text-based and governed by YAML. There is space for building better graphical tools to represent and visualize the current state of a cluster (linked with real-time telemetry), showing dependencies, failing components, security risks, etc. The model discovery process performs a reverse-engineering task to discover services, pods, containers, networks, and their topology. At the model level, a visualization is provided (a Digital Twin) as a virtual representation of the physical assets to be observed and monitored.
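As a rough illustration of what model discovery yields (a hypothetical Python sketch, not Metadev's tooling; a real implementation would query the Kubernetes API), the discovered topology can be treated as a small typed graph from which the Digital Twin flags failing components and the services they affect:

    cluster = {
        "services": {"checkout": ["checkout-7f9c", "checkout-8d2a"],
                     "payments": ["payments-5b1e"]},
        "pods": {
            "checkout-7f9c": {"node": "node-1", "status": "Running"},
            "checkout-8d2a": {"node": "node-2", "status": "CrashLoopBackOff"},
            "payments-5b1e": {"node": "node-1", "status": "Running"},
        },
    }

    def impacted_services(cluster):
        """Services with at least one pod in a non-running state."""
        return sorted(
            service
            for service, pods in cluster["services"].items()
            if any(cluster["pods"][p]["status"] != "Running" for p in pods)
        )

    print(impacted_services(cluster))  # ['checkout']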

Jesús Sánchez — University of Murcia

Spreadsheets are the most widely used programming system, notably used by non-technical users who are not familiar with general-purpose languages (GPLs). From the point of view of the “programming experience”, the main characteristic of a spreadsheet is that the user writes formulas (i.e., business logic) which directly manipulate values, in contrast to the use of abstract expressions in GPLs. In this respect, some low-code platforms (e.g., AppSheets) have attempted to use the idea of spreadsheets to facilitate adoption by end-users, but they do not fully exploit the flexibility of spreadsheets. In this context, we are developing a low-code platform designed around the spreadsheet metaphor, based on the direct manipulation of concrete values, plus an application-building environment based on live-programming techniques.
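A minimal sketch of that metaphor (illustrative only, not LowSheets) helps to see the difference: cells hold either concrete values or formulas over other cells, and changing a value immediately changes every dependent result, which is the "live" behaviour described above.

    cells = {
        "A1": 10,
        "A2": 32,
        "A3": lambda c: c["A1"] + c["A2"],   # =A1+A2
        "B1": lambda c: c["A3"] * 2,         # =A3*2
    }

    class Resolver:
        """Lets formulas read other cells by name, resolving nested formulas."""
        def __init__(self, cells):
            self.cells = cells
        def __getitem__(self, name):
            return value(name, self.cells)

    def value(name, cells):
        cell = cells[name]
        return cell(Resolver(cells)) if callable(cell) else cell

    print(value("B1", cells))  # 84
    cells["A1"] = 20           # direct manipulation of a concrete value
    print(value("B1", cells))  # 104: dependent cells reflect the change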

The goal of this platform is to enable end-users to construct simple applications tailored to their needs, simply by using well-known concepts from spreadsheets. In this talk we will present the goals of the project, named LowSheets, some motivating examples, and the architecture of the platform, and showcase the platform through a demo.

Andrei Samokish & Samuel Boutin — Knowledge Inside

The arKItect workbench, edited by Knowledge Inside, has a consistent track record of customer applications in the automotive, railway signaling, energy, space, finance, and construction domains. For example, Hitachi Rail STS has been using it for SADT modeling in support of safety analysis of SIL4 systems since 2013, and Renault's electric-vehicle MBSE started to be modeled in arKItect in 2011.

arKItect's key features that make it unique on the market are:

  • A simplified and much more intuitive metamodel, compared to the Eclipse EMF/GMF metamodel or similar object-oriented workbenches.
  • Interpreted; your changes in the data model are available immediately without compiling.
  • Low-code.
  • Intuitive: only the concept of sets is needed to understand a data model.
  • Usable by non-developers.
  • Fully scriptable in Python.
  • Multi-user and collaborative.
  • Generative views. A change anywhere is replicated automatically in every viewpoint where it is filtered.
  • 2D views, providing the potential of including 2D/3D design in combination with functional modeling.
  • Support for big models (100 000 objects).
  • Import / Export capabilities through Python.
  • Providing advanced features for user administration and configuration management, including Options, Variants, and Phases.

A demo will be delivered, building a data model and a model from scratch and showing most of the features.

Luca De Santis — Micron Technology Italia

Today's digital hardware systems are a blend of heterogeneous subsystems, including general-purpose processors, customized processors, complex state-machines and dedicated hardware accelerators. This complexity challenges the description, synthesis, and programming of these architectures, highlighting the need for increased levels of abstraction. The current trend involves integrating higher-level software concepts into hardware design, which can be uncomfortable for hardware engineers.

This proposal suggests a language that aims to elevate the abstraction level in digital system representation while preserving critical aspects of hardware engineering, such as synchronization, communication, customized programming, and optimization for power, area, and speed. It introduces a hardware description language based on domain-specific language methodology (ANTLR), demonstrating how hardware and software domains can coexist by creating appropriate syntactical structures aligned with the underlying hardware model.

Frank Kilcommins — Smartbear

Introducing the Arazzo Specification, which enables the definition and documentation of workflows: series of API calls that, when woven together, accomplish a specific business objective. The new specification was developed under the OpenAPI Initiative, and complements current specifications, including OpenAPI and AsyncAPI.

Arazzo provides a deterministic recipe for using APIs and enables code generation tooling for a given API based on use-cases. Additionally, Arazzo improves regulatory checks and bridges gaps where use-case flows span multiple API descriptions. The new specification provides a sufficient level of predictable determinism to AI models, allowing them to offer a natural language abstraction on top of the business use cases, while in parallel giving interoperability benefits to the actual API providers. The result is more value, with less vendor lock-in, and consistent API offerings for both humans and the new wave of AI consumers.

In general, there's enormous potential to enhance the developer experience (DX) and API documentation by enabling graphical rendering of API workflows. The specification also improves human understanding of how to consume API endpoints to achieve a specific goal. Arazzo descriptions can stay up-to-date and be assertable against the underlying APIs. This reduces the need for out-of-band documentation sharing, reduces the time and effort required to implement complex workflows, and automates testing and other repetitive tasks.

Overall, the Arazzo Specification improves the capability of API specifications to tell the story of the API in a manner that improves interoperability across industries.

Matthew Weidner, Javier Centeno & Pedro J. Molina — Metadev

Concurrent Collaboration is a feature gaining momentum for SaaS tools. Tools like Google Suite, Office 365 or Figma have popularized it in the last few years. Remote work has been a key driver for the adoption of Concurrent Collaboration. On the other hand, CRDTs (Conflict-Free Replicated Data Types) are a powerful abstraction able to allow concurrent and offline editing and simpler reconciliation (deferred merges at a later time).
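A minimal sketch of a state-based CRDT (a grow-only counter in Python, illustrative only; Orca and Daga rely on richer CRDTs for documents) shows why replicas can edit concurrently or offline and still converge: the merge is an element-wise maximum, so it is commutative, associative, and idempotent.

    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.counts = {}                  # replica_id -> local increments

        def increment(self, n=1):
            self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

        def merge(self, other):
            for rid, n in other.counts.items():
                self.counts[rid] = max(self.counts.get(rid, 0), n)

        def value(self):
            return sum(self.counts.values())

    a, b = GCounter("alice"), GCounter("bob")
    a.increment(3)            # edits made while offline
    b.increment(2)
    a.merge(b); b.merge(a)    # reconciliation, in either order
    print(a.value(), b.value())  # 5 5: both replicas converge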

In this session, we will present how we enhanced Orca (a no-code tool for container orchestration) and Daga (a diagramming library for models) to support Concurrent Collaboration using CRDTs in the modeling tool. The strategies followed, considerations about usability, pros & cons and lessons learned will be shared.

Meinte Boersma — DSL Consulting

Many DSLs end up having expressions, ranging from relatively simple logic and/or arithmetic, to more intricate, query-like collection and object traversals. More often than not, such a “funclarative” expression sublanguage is built from scratch — which is fun but also costly.

GenFPL is a new tool to generate a ready-to-run Functional Programming-style Language (FPL) to easily embed in existing DSLs — provided they can be defined using LionWeb. It's inspired by KernelF for JetBrains MPS, while aiming to be portable and as technology-independent as possible. It does so by generating a coherent family of extensible languages from a simple configuration.

In this talk, Meinte will demonstrate how GenFPL is used to implement a FPL and integrate it in an existing DSL. He will also touch on implementing and running an interpreter in TypeScript, defining a concrete syntax using Freon, integrating on the type (system)-level, and introducing a standard library. Finally, the author will discuss some design trade-offs around (not) providing functional abstractions.

Jos Warmer — Independent & Nico Stotz — F1RE

At last year's LangDev we introduced LionWeb: Language Interfaces on the Web. Since then LionWeb has developed quite a lot. We expect to release a new version 2024.1 including a number of new features. The existing parts of the specification have been sharpened, but most importantly this LionWeb release includes the first part of the server protocol. This part is the "bulk" protocol, as it focuses on retrieving and storing (sub)trees of nodes. This protocol is not defined simply as dumb storage for model trees, but also defines validation requirements to ensure that the model tree always remains correct in relation to the LionWeb specification. As always, we don't just write specifications, but also implement them in parallel. To validate the bulk specification we therefore developed a lionweb-repository, acting as a reference implementation of the bulk protocol. Also new are several implementations in more programming languages and test suites to ensure that different implementations conform to the specification.

At this moment we are working on the next aspect of the server: the "delta" protocol. This will enable software components to subscribe to model changes in the server and allow sending and receiving changes to nodes. We will end with naming several development projects using LionWeb in real life.

You can follow the work on github.

Alessio Stalla — Strumenta

LionWeb is an open effort for language-oriented modeling tools on the web, including specifications, reference implementations in multiple languages, and accompanying software. Kolasu is Strumenta's Kotlin/JVM library for defining, traversing, translating, exporting ASTs (abstract syntax trees) and more. It’s part of our family of libraries for different platforms supporting our StarLasu methodology for building language-oriented tools. We present our motivations, the design decisions we took, the lessons that we learned and the challenges we faced extending Kolasu with support for LionWeb, and using it to persist and retrieve models in the LionWeb Model Repository.

Ulyana Tikhonova — F1RE

The LionWeb initiative was started two years ago, to facilitate the interoperability between various language engineering tools and to allow for the reuse of language engineering and modeling tools. In this talk, we'll demonstrate our implementation of the LionWeb protocol in Rascal. Rascal is a meta-programming language that is used for the implementation of textual DSLs and for more generic use cases of syntax-based analysis and transformation of programming code.

Compared to the tools already integrated with LionWeb, Rascal belongs to the traditional grammarware space. In Rascal, a DSL is defined in the form of a concrete syntax grammar. Natively, models are represented as abstract syntax trees and new types are defined using algebraic data types. Furthermore, Rascal doesn't support traditional OO concepts: there are no objects, only immutable values; no user-defined sub-typing (inheritance); and all models are pure trees, without explicit cross-referencing between their nodes.

We will demonstrate how we close this expressiveness gap and implement LionWeb constructs in Rascal, allowing for importing LionWeb languages (M2) and models (M1) in Rascal. This work is inspired by and partially reproduces the work from 2017 by Tijs van der Storm on the conversion between Ecore and Rascal.

Erkan Diken — F1RE

The current implementation of the LionWeb specification enables the exchange of models between different technologies and programming languages. In this talk, we will present a case study of converting LionWeb-representable models into the Truffle intermediate representation (AST) and eventually executing the models in the Truffle Language Framework.

Truffle is a language implementation framework that enables custom language implementations in Java. When compiled by GraalVM, languages implemented in Truffle are translated into efficient machine code. Enabled by this technology stack, specific instances of a class can be optimized in order to generate efficient machine code, using a technique called partial evaluation. Moreover, GraalVM provides a framework for creating language-agnostic tools like debuggers and profilers.

In the context of this work, we are able to execute the models outside a language workbench (e.g. MPS) in order to benefit from the high-performance execution framework and language-agnostic tools.

This work prioritizes the usage of the existing implementations of LionWeb libraries and tools. A demo of the work, consisting of the following steps, will also be presented to the audience:

  • Exporting MPS nodes (language and instance models) to LionWeb nodes
  • Importing LionWeb nodes and converting them into Truffle nodes
  • Compiling the Truffle nodes and executing them in GraalVM.

Mark Sujew — Typefox

With the increasing popularity of Jupyter Notebooks, contemporary IDEs, most notably VS Code, have started natively supporting a new interaction model for programming: instead of files consisting purely of code, we can deliver documentation, images, renderings, and data analysis within a single file. This support goes beyond just rendering and editing notebooks; it brings the full feature set, including kernel execution, to your IDE.

In this talk, we'll present how we implemented a native notebook experience in the Eclipse Theia IDE framework. We'll take a look at an architectural overview and some interesting implementation details. Finally a short demo shows the feature in action.

Nico Jansen & Bernhard Rumpe — RWTH Aachen University

In software language engineering, interpreters enable the direct execution of program code or models, providing instant feedback at system runtime. As reuse is key in developing sophisticated modeling languages, their generated and hand-written artifacts must be seamlessly integrated. This challenge also applies when combining partial interpreters of different language components into a composed infrastructure.

While there are solutions tailored for specific technical spaces, there is still a research gap for generally building and incorporating partial interpreters into the context of larger languages. In this talk, we present the architectural design for integrating model interpreters of compositional languages. The concept is based on an extended visitor pattern for compositional data structures and is demonstrated in the context of the MontiCore language workbench.
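As a rough, language-workbench-agnostic illustration of the composition idea (plain Python, not the MontiCore infrastructure): each language component contributes a partial interpreter for its own node types, and a composed interpreter dispatches over their union.

    # component 1: arithmetic expressions
    class Num:
        def __init__(self, value): self.value = value
    class Add:
        def __init__(self, left, right): self.left, self.right = left, right

    def interpret_arith(node, interp):
        if isinstance(node, Num): return node.value
        if isinstance(node, Add): return interp(node.left) + interp(node.right)

    # component 2: comparisons, reusing sub-expressions from other components
    class Less:
        def __init__(self, left, right): self.left, self.right = left, right

    def interpret_compare(node, interp):
        if isinstance(node, Less): return interp(node.left) < interp(node.right)

    def compose(*partials):
        def interp(node):
            for partial in partials:
                result = partial(node, interp)
                if result is not None:
                    return result
            raise TypeError(f"no interpreter for {type(node).__name__}")
        return interp

    run = compose(interpret_arith, interpret_compare)
    print(run(Less(Add(Num(1), Num(2)), Num(4))))  # True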

Ultimately, we present how an automatically integrable interpreter infrastructure can be generated, requiring minimal manual implementation effort. The talk will be accompanied by a tutorial in which we compose interpreters of different languages, further integrate them with dedicated runtime artifacts, and seamlessly deploy them in the context of a model-driven low-code platform for web applications.

Théo Giraudet — Obeo

Domain-Specific Modeling Languages (DSMLs) usually come with a dedicated integrated environment called a modeling workbench. In the context of graphical DSMLs, such environments provide modelers with dedicated interactive features that help them perform navigation and editing tasks. Many of these features are generic and can be used by graphical DSMLs without any specialization (e.g. a physical zoom). Others require specializations in accordance with the involved DSML. For instance, a semantic zoom requires specifying which elements of the model must be graphically modified at the different zoom levels. However, current language workbenches do not propose facilities to help language designers in developing such semantic-aware features for their graphical modeling environments. So language designers must develop these features by hand, which is a complex and time-consuming task.

In this talk, I will present a novel approach we propose to help language designers with this task. In addition to the existing DSML concerns, such as the syntaxes, we propose to capture the interactive features of the targeted modeling workbench in the form of DSML pragmatics. We will finish with a demo of this new approach, implemented as a prototype in Sirius Web, a language workbench for graphical DSMLs.

Milan Sovic, Daniel Elero, & Igor Dejanovic — University of Novi Sad

textX is a meta-language for building Domain-Specific Languages in Python. Inspired by Xtext, it leverages Python's dynamic nature to provide a more lightweight approach to building DSLs and parsers.
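To give a flavour of textX (an illustrative mini-grammar, not the drone DSL from the talk), a grammar is a plain Python string from which textX derives both the metamodel and the parser:

    from textx import metamodel_from_str

    GRAMMAR = """
    Model: commands+=Command;
    Command: 'move' direction=Direction distance=INT;
    Direction: 'up' | 'down' | 'left' | 'right';
    """

    mm = metamodel_from_str(GRAMMAR)
    model = mm.model_from_str("move up 10  move right 5")

    for cmd in model.commands:
        print(cmd.direction, cmd.distance)  # up 10 / right 5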

In this talk, after a brief introduction to textX, we will focus on recent developments in tooling and editor support. The new textX playground will be presented, allowing users to interactively create grammars and experiment with programs/models. Additionally, we will showcase developments in support for the Language Server Protocol and VS Code integration for textX languages.

The presentation of new features will be conducted through a live demo of building a small DSL for drone navigation. The code and instructions for the demo will be available on the project site and Git repository.

Arjan Oortgiese & Johan Blok

In this talk, we want to demonstrate a tool for building and managing dependencies for JetBrains MPS projects and invite the community to contribute to this project.

MPS is great for developing languages and extensions for languages written by other MPS developers. Extending a language that is not in the same repository becomes more complicated when the other language is under active development. Due to this, we observe that projects are often developed in a single repository. These monolithic-style MPS plugins make reusing parts of a project harder for the community.

The challenges that this tool aims to overcome are:

  1. New developers who want to build our extension language need the language we are extending to be installed in MPS to build successfully. MPS can help when the language is in the MPS plugin marketplace. If the language is not in the marketplace or the build job is a CI/CD pipeline, the developer must ensure that the language is installed in MPS.
  2. Solving transitive dependencies.
  3. The new EU Digital Markets Act (DMA) requires a project to supply a Software Bill of Materials (SBOM) that lists all the components used by the software.

Frank Kilcommins — SmartBear

60+% of organizations cite microservices as a leading driver for API growth in the next 2 years. The benefits of decoupled capabilities are hard to maintain at scale. Extensibility techniques and API management tools by themselves are not preventing breaking changes or sub-par developer experiences.

In this session, we’ll introduce Bidirectional Contract Testing — and dive into how contract testing capabilities can empower teams to evolve APIs safely while still leveraging their investments in microservices. Teams can quickly scale by utilizing existing mocking and testing tool investments to get visibility on API consumers. Understand what’s breaking, what’s not, and when a break is a good choice!

Walk away with:

  • Understanding of challenges in scaling microservices
  • How to design APIs for future needs
  • Why extensibility and API mgmt. are not enough
  • Overview of contract testing and how it can change your thought process on the testing pyramid
  • Scenarios in action — don’t break your promise!

Questions

Generally, talks will be arranged in blocks of 25 minutes plus 5 minutes for Q&A.

To allow more accessible Q&A sessions, we will be using ProxyHands.

ProxyHands is a mobile app that allows people with disabilities, or people who are shy, to ask questions for the sessions directly from their phone. Also, if questions exceed the time provided, they can be answered offline by the speakers and shared later.

Feel free to install the app and try it during the event. A unique code for each session will be shared at the beginning of the talk.