Defense of Lisp macros: an automotive tragedy

Replacing Lisp's beautiful parentheses with dozens of special tools and languages, none powerful enough to conquer the whole software landscape, leads to fragmentation and extra effort from everyone, vendors and developers alike. The automotive field is a case in point.

Intro

The fearsome Lisp macros and other aliens brought to life by the Lisp parentheses often get the blame for breeding a style of software development rich in abstractions and domain-specific languages, in contrast to the easily recognizable C-like creatures. What is not officially admitted is that in any project or industry of sufficient size, an overabundance of better, improved, simple, easy-to-use tools, programming languages, standards and best practices is employed and brought to life in an ad-hoc manner. A confirmation of Greenspun's Tenth Rule, if you will. Though the stated motivation behind this proliferation may be that this is just the way things are, or that the project is just too complex, part of the reason lies in the use of programming languages that lack power, extensibility and interactivity.

I'll look at the automotive industry, which relies solely on the C programming language, at least on the surface, and bring to light these hidden creatures. When a people is not powerful enough to rule the whole continent, every beggar together with their pack of hideous horses vies for their neighbor's riches. The landscape is fractured and tens of languages and dialects are spoken. All journeys take much longer than usual. Pacts, alliances and agreements for safe passage with all tribe leaders need to be signed and constantly updated. One sees in their voyage countless rituals, none better or more widespread than the other. Without a powerful language to hold it all together, the automotive world is also split into pieces, with each wannabe king claiming to have found the path to the happy life and to deserve the throne. Watch for this sign in what follows, and for the three main problems all these tools try to solve once and for all.

The three basic problems all tools try to solve

Firstly, how to find out what your application is doing in real time once it is up and running. How to check your assumptions, backtrack, readjust and try again, how to have access to as many parts of the running system as possible and how to modify anything without slowing down your pace, like a road cyclist who doesn't need to stop at every curb to change gears and lose momentum. How to achieve all this seamlessly and concentrate on the project itself and not the tooling and the irrelevant problems around it. In short, how to be one with the system at all stages of development with as few interruptions as possible.

Secondly, how to come up with more intuitive and powerful means to develop, express and even comprehend whole software systems above the minutest details of algorithms and if's and else's. The classic can't see the forest for the trees idiom. What does the system do at a higher level? How is it organized? How can I easily and meaningfully change it without going into too many pointer arithmetic details? How do I speak about it, how do I cram it all up inside my brain to aid my understanding and explore it in my mind or with my colleagues and clients, to better get the message across and align on goals and purposes? Ideally, it would be in a language closer to English, or a language that everybody speaks fluently, or even something visual I can slurp up all at once, like a painting that I can glance at instantly and not a book that takes weeks to read, though of course, now I need some training or experience in the arts to understand anything, and there are myriads of interpretations of those brushes before my eyes. But still, a language that is formally verifiable and able to be evaluated by a computer without much intervention from my side. This is the heart of abstraction towers and of domain-specific languages. Many tools try to solve this problem without even knowing it.

Lastly, how to organize everything around the main programming language. What tools are best for building or packaging your software, what tools for writing code, and how to settle on a definition of best. How to organize teams, agree on standards and writing style, where to keep requirements and how to keep them in sync with the implementation. How to make sure everyone is on the same page regarding the language features, how to use those features in a standard way and what to avoid, either because one wants to prevent different ways of doing things or because some of those things are really dangerous. How to solve the shortage-of-qualified-people problem, which branches into how to make things more beginner friendly, easier and intuitive by avoiding C pointers, C macros and recursion, for example, and by employing GUI tools and code generators that avoid these pitfalls. Only to fall into different mousetraps, of course.

All solutions, the problems they beget and again the solutions to fix them in this never-ending circle all spring from these initial needs and wants.

Automotive has special tools and languages for everything

A car's software is split into dedicated ECUs, each controlling the seats, windows or engine, among others. These microcontrollers, each with its own memory and peripherals and developed independently by different teams, are all connected by a single communication network and exchange thousands of uniquely identified CAN messages each second. To simplify, each ECU behaves like a server, and the list of messages it sends and receives represents its API. These are the microservices of the automotive world, if you will.

At a minimum, an ECU must receive wake-up and periodic CAN messages from other ECUs to function properly. In the development phase, when these other ECUs are physically missing from one's desk, an engineer needs to simulate them in software, together with the ability to sniff out CAN messages from their physical ECU, similar to what a Wireshark user sniffing network packets does. In a first sign of wannabe sole rulers across vast landscapes, Vector's CANoe presents itself as the only tool for all development and test tasks, a versatile tool for the development, testing and analysis of entire ECU networks as well as individual ECUs.

In my experience, at the start of any new project it is best to first discover the project at the fastest speed possible, with the full knowledge that my initial solution will not be correct and most of my code will not even survive the first few iterations. The classic build one to throw one away idiom. These iterations will help me understand and even rewrite some of my initial assumptions. In extreme cases they even confirm that what I'm trying to develop is useless or impossible to build. This, then, is the first step in developing a software product, or probably any handiwork. CANoe, like plenty of other tools in this field, confirms this need, and confirms that C is not the right tool to be going out exploring dangerous paths in the middle of the night,

At the beginning of the development process, CANoe is used to create simulation models which simulate the behavior of the ECUs. Over the further course of ECU development, these models serve as the basis for analysis, testing and the integration of bus systems and ECUs. This makes it possible to detect problems early and correct them. Graphic and text based analysis windows are provided for evaluating the results.

CANoe: Product Information

To aid this purpose, CANoe's simulation panel contains a visual representation of a car's network and the attached ECUs, with the intention of simulating the whole car before writing any C code. Each ECU can be clicked around and programmed in CAPL, an event-driven programming language developed by Vector. It has C-like syntax, intentionally chosen to make C developers feel right at home, as Vector notes, while also acknowledging it doesn't introduce any groundbreaking ideas or abstractions. Instead, it tries to make life easier for engineers: the goal of CAPL has always been to solve specific tasks as simply as possible. Typical tasks are reacting to received messages, checking and setting signal values and sending messages. A program should restrict itself to precisely these things and not require any additional overhead. The CAPL know-how page has all the details.

This software modeling and the introduction of new, easier, intuitive, even graphical languages and interfaces similar in style to what one is already familiar with is a recurring theme throughout this journey. Watch out for it.

CANoe's trace panel displays all the messages on a CAN network, the sniffing requirement I've mentioned above. All messages have unique ids, and each message's payload contains one or more signals that carry a predefined meaning, like ignition_on or vehicle_speed. Thus, on receiving a CAN message, a hypothetical ECU implementation in C would first check its id in a big switch case, let's say. If the id is on the approved list of messages, the code goes on and reads the payload. We can imagine getters and setters for such signals, like getIgnitionStatus, or a periodically called function like sendVehicleSpeed that sends this info out on the CAN network for all the other ECUs to see. All this data about ids, messages and signals can be hard coded, but if it were, CANoe and similar tools would have a hard time deciphering what are essentially just bytes on a wire, and the user would, in the best scenario, only see hexadecimal values in their graphical interfaces. They would have to either check the source code to see what they mean, or keep some kind of table, like an excel sheet shared by all team members, where all message ids and signals are kept together and assigned human-readable tags and extra info. So, to make life “easier” for everyone, the list of all CAN messages available on a network, together with the ECUs that send or receive them, is kept in a dbc database, a proprietary file format also developed by Vector, that is then used by both the C implementation and the visual tools. I've put the “easier” part into quotes. Every time I see something advertised as easy and intuitive I'm seeing green meadows and leafy forests, blood-sucking ticks waiting in ambush.
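
To make this concrete, here is a minimal sketch of that hypothetical ECU code, with the message ids, signal layout and function names all invented for illustration; a real project would generate most of this from the dbc file described below,

#include <stdint.h>

#define MSG_ID_IGNITION      0x101u   /* hypothetical CAN ids */
#define MSG_ID_VEHICLE_SPEED 0x1F4u

static uint8_t  ignition_on;
static uint16_t vehicle_speed;        /* raw value, e.g. 0.01 km/h per bit */

uint8_t getIgnitionStatus(void) { return ignition_on; }

/* Called for every received CAN frame: the big switch on the message id. */
void on_can_message(uint32_t id, const uint8_t payload[8])
{
    switch (id) {
    case MSG_ID_IGNITION:
        ignition_on = payload[0] & 0x01u;   /* signal: ignition_on */
        break;
    case MSG_ID_VEHICLE_SPEED:
        vehicle_speed = (uint16_t)(payload[0] | ((uint16_t)payload[1] << 8));
        break;
    default:
        break;   /* id not on the approved list: ignore the frame */
    }
}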

Take a look at a dbc file example. Not human readable. The handling of these databases, which, by the way, are now officially part of the project's requirements and managed by the client, is done with yet another tool: CANdb. Interestingly, dbc files contain neither the default values nor the interval at which signals must be sent, so this extra info is kept in some other medium, like excel files, as I've seen in practice.

But see what problem this need of ours to express CAN messages in a human-readable format introduces? Now the list of CAN messages is extracted away from the implementation language and expressed in a format the C language does not understand. As a result, every time the client updates the database, the developer has to manually translate each and every message from the dbc and excel sheets back to source code. A developer's job, after all. Only this code is quite repetitive, huge, prone to fat-finger errors and really boring. All messages have the same structure, only different names and byte-orderings. The sane way is to let code generators do this job. But code generators are already a new language, the language they're implemented in and the language of how to use them. A rift has suddenly appeared out of the blue.
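
For a taste of what such a generator bridges, here is a made-up dbc entry, shown as a C comment (the format is line-oriented: BO_ declares a message, SG_ a signal with its bit position, size, scaling and unit), followed by the kind of accessor a generator might emit for it; all names and numbers are invented,

/*
   BO_ 500 VehicleData: 8 ABS_ECU
    SG_ vehicle_speed : 0|16@1+ (0.01,0) [0|655.35] "km/h" Dashboard
*/
#include <stdint.h>

/* Generated: extract the vehicle_speed signal (16 bit, little-endian,
   factor 0.01, offset 0) from the 8-byte payload of message 500. */
static inline double VehicleData_get_vehicle_speed(const uint8_t p[8])
{
    uint16_t raw = (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
    return 0.01 * (double)raw;
}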

Climb up that high mountain and observe the king's country. C is the implementation language for an ECU that handles well-structured CAN messages. Nothing fancy. But the need to display and send those messages from external tools, possibly through scripting languages, the need to handle these messages with tools that don't require programming skills, the need to gather them into a single file easy to peruse, talk about and exchange between teams, all these needs force the list of such messages outside of the implementation language, outside of C. If C's syntax, with no outside help, were able to express such a list and to generate its own functions, interfaces and header files from it, that is, if C were able to write C, then these extra tools, dbc files, excel sheets and extra build steps would not be needed. CANoe or any other tools, on their part, could use this well-formatted list for their own needs. Even better, if that format were human-readable, as C code should be, then even creating, editing, searching, diff-ing and reading these files in one's text editor would be possible without extra tools. But that is not the case, and so we have a fine display of foot soldiers instead of a beautiful white horse.

To finish this discussion in glory, and to confirm in the reader's eyes that in this particular case the introduction of a new language brings nothing really new, just a slightly different syntax, check out the CAPL example from Vector's Tips and tricks for the use of CAPL.

As a side note, Vector considers C macros too powerful, though they don't shy away from their own macro flavors to aid in writing generic programs,

The preprocessor is a powerful tool in the C language, but it can also lead to confusion and consequently to errors. Therefore, only a subset of the well-known preprocessor directives in C is offered in CAPL with comparable semantics [...] In CAPL, there are a number of predefined macros that are available to users for use in the code or for conditional compiling. Macros for use in the code can be used anywhere in the code without restriction. In contrast to C, macros may be used freely within string constants, identifiers for variables, and function names. They always begin and end with a % character, and they are primarily used to write generic programs.

Tips and tricks for the use of CAPL

I've simplified things a bit for the sake of discussion and considered only CAN networks thus far, but it is worth mentioning that when other networks are present, as they usually are, dbc files are not enough. They can't express LIN, FlexRay, MOST or Ethernet networks. A new, xml-based file format is needed for that: FIBEX. A first hint that xml is quite popular among parentheses naysayers, and a confirmation that the landscape is truly split into little kingdoms. I will quote a big chunk from the standard since it reiterates some of the things I've already covered and as such might aid the reader's understanding,

A single cluster (ECU) can transfer several hundred or even thousands of shared signals. A network can contain several clusters that are interconnected via so called gateways that transfer signals between them. Many protocols used within the different clusters have been developed to support the different needs of a wide range of applications. Among them are CAN, LIN, FlexRay, and MOST. Obviously, the importance of communication technologies has been increased dramatically.

Databases are necessary to store the information of all signals and their parameters in order to manage efficiently these networks. That data is used for various design and verification steps. Many dedicated tools have shown up on the market to support that. However, their application focus may differ. Unfortunately, no common format for the exchange of data between different tools is presently available. The growing number of signals in a reasonable network yields an increasing demand for a straightforward way to exchange data to avoid an error prone manual handling of redundant data. Furthermore, the fast-growing communication requirements of the implemented functions result into an increasing number of new, extended, and more dedicated tools. This increases the need for an exchange format that supports better data handling.

FIBEX is an XML data structure that can describe a complete network within one file. This data generally includes definitions of signals, frames, clusters, and ECUs sending and receiving the signals. FIBEX can be used to transfer network information between different tools. It is not intended to replace but to supplement established standards that are often used to store data locally. The coexistence with existing formats is regarded as an advantage since available tools and standards can still be used. FIBEX provides a bridge between them.

FIBEX’ strength is seen in the field of tool integration, data exchange, and data integrity. It is “more powerful” than most of the more dedicated formats. For example a single instance can contain the description of clusters with different protocols as well as the transfer function of the gateway between them. FIBEX should be used as an extension whenever the specific format is not sufficient. Since many tool vendors have announced to support FIBEX, a conversion between FIBEX and specific formats should be well supported.

FIBEX – An Exchange Format for Networks Based on Field Busses
ASAM MCD-2 NET

The need to interact with a running system births more tools

In a perfect world I can play with a running system directly from my text editor. I can change variables and check their values, I can call any function from anywhere and display, save and modify the return result of such calls instantly, I can scribble wild ideas directly in code with as little ceremony as possible, I can modify, update and define new functions on the go, all this without the need of extra tools, rebuild and restart steps. All the code is available at my fingertips, all features before my eyes, no hidden parts, no forbidden areas, perfect freedom, all cards on the table, both hands on the keyboard. What a joy!

In the real world, the hints of such joys start with a debugger, but these joys are soon wrecked. The debugger people understand these time-consuming, focus-assassinating and debilitating code-change, rebuild, restart cycles,

The classic debugging technique is to simply add trace code to the program to print out values of variables as the program executes [...] First of all, this approach requires a constant cycle of strategically adding trace code, recompiling the program, running the program and analyzing the output of the trace code, removing the trace code after the bug is fixed, and repeating these steps for each new bug that is discovered. This is highly time consuming and fatigue making. Most importantly, these actions distract you from the real task and reduce your ability to focus on the reasoning process necessary to find the bug.

The Art of Debugging with GDB, DDD, and Eclipse

But debuggers only temporarily avoid the cycle, as any code change still involves a rebuild. And they introduce new tools able to examine the state of a running system, tools with their own special terminology, user interfaces and scripting languages. On this last point, haven't you noticed that every intuitive, simple-to-understand, button-clicking tool is heaven in its birthing phases and can act as the true savior for a while? Until things get serious, at which point a real programming language is always produced for those repetitive or error-prone tasks. Windows people have figured that out and turned this problem into fortunes. Figma people as well. Visual programming to make life easier?! Regardless, the automotive world doesn't lack such scripting languages, as each vendor promotes a different debugger for their own special microcontrollers. I'll mention Lauterbach's PRACTICE scripting language, with its ancient GOTO statements and macros of three different kinds (local, global or private) and even recursive macros. The macro crusaders surely like the idea of macros. Check the documentation for the details.

There are alternatives to the use of debuggers in the automotive field, none better nor powerful enough to conquer all the other tribes, as I've already hinted in the introduction, not even the debugger itself. I'll look at two of them.

Firstly, Unified Diagnostic Services (UDS), or Diag for short. It's a feature all ECUs must implement by law, and it allows a car shop to configure, update and check the ECU's status. Diag is still about sending CAN messages, or routines in Diag-speak. These are standardized messages used to read, as well as write, certain memory addresses and predefined identifiers which are mapped to variables in the C implementation. In short, poke your ECU as if it were a server, with all kinds of GET or POST requests, and check its response in CANoe's or another vendor's intuitive Diag interface. The 400-page ISO standard has all the details,

Modern vehicles have a diagnostic interface for off-board diagnostics, which makes it possible to connect a computer (client) or diagnostics tool, which is referred to as tester, to the communication system of the vehicle. Thus, UDS requests can be sent to the controllers which must provide a response (this may be positive or negative). This makes it possible to interrogate the fault memory of the individual control units, to update them with new firmware, have low-level interaction with their hardware (e.g. to turn a specific output on or off), or to make use of special functions (referred to as routines) to attempt to understand the environment and operating conditions of an ECU to be able to diagnose faulty or otherwise undesirable behavior.

Wikipedia: Unified Diagnostic Services
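
On the ECU side, the mapping from a standardized identifier to a C variable can be pictured with a minimal sketch like the following; the variable and the DID value are invented, while the service id 0x22 (ReadDataByIdentifier) and the +0x40 positive-response offset come from the UDS standard,

#include <stdint.h>

static uint16_t vehicle_speed;        /* internal variable exposed via Diag */

#define DID_VEHICLE_SPEED 0xF40Du     /* hypothetical data identifier */

/* Build the positive response to a ReadDataByIdentifier request.
   Returns the response length, or 0 if the DID is unknown, in which
   case the caller sends a negative response instead. */
uint8_t uds_read_data_by_id(uint16_t did, uint8_t *resp)
{
    if (did == DID_VEHICLE_SPEED) {
        resp[0] = 0x22u + 0x40u;            /* 0x62, positive response */
        resp[1] = (uint8_t)(did >> 8);      /* echo the identifier back */
        resp[2] = (uint8_t)did;
        resp[3] = (uint8_t)(vehicle_speed >> 8);
        resp[4] = (uint8_t)vehicle_speed;
        return 5u;
    }
    return 0u;
}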

In a shocking move, the dbc file format we've already seen is not designed to hold Diag messages, though Diag messages are still just CAN messages. At 500 pages long, the ODX standard introduces a new xml-based file format to fill this gap,

The ODX specification contains the data model to describe all diagnostic data of a vehicle and physical ECU, e.g. diagnostic trouble codes, data parameters, identification data, input/output parameters, ECU configuration (variant coding) data and communication parameters. ODX is described in Unified Modeling Language (UML) diagrams and the data exchange format uses XML.

ISO 22901-1: Road vehicles - Open diagnostic data exchange (ODX)

In a rare admission of a war campaign gone wrong, or to smear its adversaries and push its own solutions, Vector recognizes this is a complex beast one dares not tame with bare hands,

Until now, the process of creating ODX data has been restricted to just a limited circle of experts, due to its complexity. The current specification encompasses almost 400 pages. Users of the ODX data would rather concentrate on their actual task, namely the development of diagnostic applications, without having to deal with the specification or the data format and its dialects. With suitable tool support this is possible.

ODX in Practice: Experiences, challenges and potential

The suitable tool is Vector's CANdelaStudio. In a surprise move, the Diag data is not saved in the ODX format, as expected, but in its own proprietary CDD (CANdelaStudio Diagnostic Description) file format. It's hunting season for file formats! The advantages section mentions that CDD files can be exported to other formats like CSV, HTML, ODX, RTF, XML, a2l, DEXT or CDI (I couldn't find what this is). I'll quote Autosar's DEXT, and come back to Autosar itself later in this article, since it gives some reasons why even ODX is not sufficient for all needs and why yet more standards and formats are needed,

The market shows a high demand for transferring diagnostic demands to [...] suppliers. In the past, due to the absence of integral options, many file formats like ODX or EcuC were often used. But neither ODX nor EcuC is well suited to transfer this information. For example, ODX lacks in fault memory details and EcuC has a very generic nature that renders the enforcement of a strict model formalization very difficult. Therefore, the obvious solution approach has been to define a new standardized AUTOSAR exchange format on diagnostic functionality that can be used similar to a System Description, formalized as an ARXML file.

AUTOSAR: Diagnostic Extract Template

I've seen examples of the rigidity of these tools. Sometimes people want to highlight the features to be implemented in the next release or make some notes around them. Other times the team would ask the client if this or that Diag service can be added, or this or that CAN message added, removed or modified, either because it was really wrong or because it was an internal message the team relies on. Since these visual tools do not provide the features to annotate, copy/paste, modify or comment out things, the team relies on Word documents, excel sheets, emails or just video calls and drawings on screen. With these, the number of places where one finds the project requirements increases as well, only because these languages, developed to extend the original stiff language, are themselves not extensible by their users.

Diag still doesn't solve the development speed problem. It has access to some predefined variables for reading and writing. It can call functions through these Diag routines, and all this helps. But when one of these variables, or some part of the code you're currently developing, is not reachable through any Diag routine, the solution is to implement such a routine, which requires knowledge of how the Diag module works, something that might be outside the actual problem you're trying to solve. This still brings us back to a rebuild and reflash cycle.

Secondly, the other alternative to a debugger is Calibration. A new protocol and a new standard. A way to modify configuration options, or memory addresses, in a running ECU without touching the code. The absence of any feature to do so without recompiling the whole project is clearly stated as the motivation behind the standard,

The calibration of parameters is an essential part of ECU software development. Once a new set of parameters has been determined, the next development step is to run tests in order to evaluate the effectiveness of the calibration. For this purpose, internal variables are read from memory and transferred to a system that displays the data in a human-readable format.

In the early days of ECU development, the values of calibration parameters were directly modified in the source code. Variables had to be made available for data logging in the source code as well. Every change to parameters or the list of measurable variables required modifications in the source code, re-compilation and flashing of the ECU.

As the control software grew in complexity [...] this process became too cumbersome and slow. Additionally, the process of measurement & calibration needed to be separated from the process of software development, because a calibration engineer would need to change a parameter value or would want to record the values from a measurement variable, he had to ask the software developer to compile a new software version for him. This is the fundamental motivation for the group of ASAM MCD standards. The MCD standard provided the way to abstract the calibration from the physical memory locations.

ASAM MCD-2 MC

CANoe's AMD/XCP plugin offers another perspective on this issue,

Option AMD/XCP extends CANoe by adding the ability to access ECU memory. This access is done via the ASAM-standardized XCP or CCP protocol and is convenient to configure with files in A2L format.

CANoe offers access to internal ECU values via XCP/CCP for testing and analysis tasks. In contrast to the pure blackbox test, in which only the external ECU signals are stimulated and measured, internal values can also be calibrated and evaluated over XCP/CCP. Changes to these parameters lead to specific error states, and the resulting ECU behavior can be tested directly. It is also possible to test different variants of ECU software – switching is performed directly over XCP. Missing sensor values can also be simulated by writing values to the relevant memory locations via XCP/CCP.

CANoe: Product Information

An a2l file contains all the memory addresses we want to be able to read and write, together with a meaning attached to each address and the size and type of data one expects to find there (here is an a2l example file, and ASAM's downloads section offers additional examples). Functions that transform from one type of data to another are also defined in such files, as there is a conversion of types between the target (the ECU, the C implementation) and the tools that use that data in a human-readable format. For complex transformations where a real programming language is needed, the standard mentions a way to specify Windows DLL files in the a2l files. No mention of Linux, for example. Additionally, the transfer protocol (XCP or CCP) between the PC and the ECU must also be specified by a metalanguage (AML). This is the same rift we've seen before. Simplistically, one side needs to know what to ask for and how to interpret the response once it receives it; the other side needs to know where to look for the requested information. In the middle, someone must take care that the message gets across the valley through some medium (XCP or CCP),

To convert the ECU internal characteristic and measurement implementation values into physical values, ASAM MCD-2 MC describes computation methods for their conversion between both representations. Calibration engineers can work with the ECU data in a familiar format without having to understand ECU-internal data formats. Software engineers can provide this data to them or even get the description files automatically generated from code generators. An include mechanism ensures that description files can originate from different sources.

The ECU normally stores the measurement and calibration quantities internally in an implementation optimized format. This format is very often a fixed-point format. Outside the ECU physical models are used. The standard describes by so-called record layouts how data are stored inside the ECU and which computation methods are needed to transform the ECU internal data representation into the physical one and vice versa.

The standard also allows to describe and configure the ECU interfaces or vendor specific extensions by a meta description language (AML). For ASAM standardized ECU interfaces, such as CCP and XCP the content of these AML parts are also standardized. But there are also a lot of vendor specific instantiations in the market which use this mechanism.

Measurement and calibration systems are normally only used during development phase of ECUs. They allow a direct, address-oriented write and read-access but also a synchronous, continuous measurement access to ECU internal variables.

ASAM MCD-2 MC (ASAP2 / A2L) - Data Model for ECU Measurement and Calibration (Version 1.7.1)
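
The fixed-point story from the quote can be pictured with a minimal sketch; the variable, factor and offset below are invented, but the idea is the one described: the ECU stores a raw, implementation-optimized value, and the a2l file tells the tool which linear conversion (phys = factor * raw + offset) turns it into a physical one,

#include <stdint.h>
#include <stdio.h>

static uint8_t engine_temp_raw;   /* what lives at the address listed in the a2l */

int main(void)
{
    engine_temp_raw = 130u;
    /* hypothetical conversion: 0.75 degC per bit, -48 degC offset */
    double engine_temp_phys = 0.75 * (double)engine_temp_raw - 48.0;
    printf("raw=%u phys=%.2f degC\n", engine_temp_raw, engine_temp_phys);
    return 0;
}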

This is another non-human-readable file format to be handled only with tools. Since the memory addresses in a2l files are hard coded, any C code change that results in a change of allocated addresses after a build must lead to a change in the a2l file as well, or we'll read and write a different location than the one we expect. This step is done with code generator tools, as confirmed even by the official documentation.

Tools and standards, a protocol (XCP, not CAN) and a file format (a2l) are needed to access what are practically variables from a running application,

With the XCP protocol standardized by ASAM, the user can read individual values directly from the ECU as needed. Once the A2L file has been configured and the necessary values selected, CANoe automatically acquires them and maps them as system variables. The user can then use these variables in any of the testing tasks. Besides offering access to ECU inputs and outputs, they also provide an in-depth look into the ECU’s memory.

Vector: A Look Behind the Scenes; ECU testing with XCP support

The testing phase itself is done with CAPL or with CANape as we'll soon see. One extracts the values of variables from a running ECU, maps them through an a2l file, makes them available to one's system and uses them for testing purposes using a different language than the language used to implement the system in the first place. Does that sound simple? No need to stress it, but any bugs or missing features found in this way will need to be fixed by modifying the C code, the language the actual system is implemented in. So back again to the rebuild cycle.

The a2l files specify the addresses and what goes where, a sort of API, if you will. But the actual data to be written to or read from an ECU is stored in a different format. ASAM CDF specifies an XML file format for this purpose, while ASAM MDF specifies a very efficient binary one. Besides the a2l example mentioned above, ASAM's downloads section also has cdf and mdf file examples.

Vector has found clever ways to generate the a2l files, either through ASAP2 Studio's intuitive interface, among others, or directly from source code via C comments. This, of course, needs outside parsers and code generators, since C can do nothing with its own code, which in this case is not even code but a predefined way to tag comments, similar to what some tools use to generate documentation. Here is an example from the ASAP2 Tool-Set User Manual on how to annotate C code with comments for the automatic generation of a2l files at build time (check the manual for more examples),

/*
@@ SYMBOL = sample1
@@ A2L_TYPE = MEASURE
@@ DATA_TYPE = UBYTE
@@ END
*/

results in...

/begin MEASUREMENT sample1 ""
    UBYTE NO_COMPU_METHOD 0 0 0 255
    ECU_ADDRESS 0
    /begin IF_DATA CANAPE_EXT
        100
            LINK_MAP "sample1" 0 0 0 0 0 0 0
    /end IF_DATA
/end MEASUREMENT

This rift, this mismatch at the boundary between two systems expressed in different languages, is another recurring theme. It always creates a need to conceive extra formats to patch that gap and make the two understand each other and use each other's data. That's a big problem when tools proliferate due to there not being a powerful one in the first place. It is why our traveler takes ten days for a journey that should take just one. Kings of old knew that. If everybody does not speak the same language and enjoy the same culture, willingly or forcefully through the kiss of the sword, every journey through the king's land involves talking to every tribesman to secure safe passage. Some ask exorbitant prices, some refuse outright. A dirty business and a bad climate for business.

It is the same story with prose writing, I find. If English hadn't given me the words and metaphors and the technical means to wrap up things in rifts and kings, I would either have to use another language (and offer the reader a dictionary and a way of translating from one expression to another, similar to what a2l does) or repeat the same things over and over. But as things stand I can wrap up concepts already talked about and understood by the reader in a few words or images and refer to them instead, shifting the discussion to a higher level. Or my language can be visual, like in arts, or theater or music. In that case the medium of transmission is different which is what all these file formats do. Eventually in life one must pass beyond the everyday language of he did this, then he did that to express some different concepts and feelings. Maybe that's the meaning of the well-known saying that Lisp has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts. Additionally, the quest for such metaphors is another technical problem. Similar to iteration speed in programming, I want to quickly find new words, write them down, reread, edit, delete, write again. As fast as possible. I can't imagine what writing without a computer was like in the past.

I've hinted earlier at a tool to handle all this calibration business in one place, a special tool only for this purpose: CANape (CAN Application Programming Environment), or The Universal Tool in the Vehicle, Test Bench and Laboratory (see CANape: Product Information).

The Wikipedia article on CANape stresses the part where one does not modify the source code but updates the configuration variables to change an ECU's behavior, which avoids exactly the rebuild-restart cycle I've already underlined.

Again, because buttons and clicks help a user only up to a certain point, to make sense of the possibly thousands of data points per second CANape uses a scripting language. Can you guess it? Is it CAPL? No! It's CASL (Calculation and Scripting Language). Don't stress, though, the syntax of CASL is very similar to the C programming language. The official manual even assures you that only a general programming knowledge of C is needed and nothing else,

Note: Do not confuse CASL with the programming language CAPL, which is used in the CANoe and CANalyzer environments. CAPL is an event-oriented programming language. So-called CAPL program nodes are used to specify when an event will be executed and the nature of the reaction. CASL, on the other hand, is a signal-oriented language.

CANape CASL: User Manual

The fear of pointers is still with us, plus extra trivialities,

The CANape scripting language CASL is very similar to the C programming language. However, it differs in the following aspects:

CANape CASL: User Manual

CASL is still not strong enough. It can be extended through DLLs for cases where you might still want to go back and actually use C functions directly from CASL, a language developed to interact with systems developed in C, or, I dunno, it all starts to get really confusing at this point, so I'll leave it to you to explore further.

A language is not just its syntax but the tools and environment around it

Embedded developers are a special kind of software people. They use countless IDEs, probably a legacy of each microprocessor vendor mixing up their own IDE to be sold as opium for all the known software afflictions. At least in that other fantasy land the rivalry is between two venerable kings, Emacs and VIM, but here the holy wars look different: some use plain Notepad, some have advanced to Notepad++, MPLAB, CubeIDE or QtCreator, and some to CodeWarrior, the best kind of warrior for our wannabe emperor. Even the DSLs (yes, let's call them that and stop pretending) like CAPL and CASL and the other programming languages we've seen until now come with their own IDEs.

Here is what Green Hills, one of these poison-mixers, is saying,

By using a common set of development tools across projects, software engineers can more easily share code or move between projects without compromising productivity. The Project Builder’s intuitive GUI automates and simplifies the configuration of complex programming projects. With its automatic dependency determination, the Builder also helps cut time-to-market by eliminating the need to write and debug makefiles.

MULTI - Integrated Development Environment for Device Software Optimization

God forbid the maker and his makefiles. They're too old and well established to keep using them. QtCreator employs qmake to help simplify the build process for development projects across different platforms. It automates the generation of makefiles so that only a few lines of information are needed to create each makefile. The MULTI IDE uses a proprietary, xml-based configuration file (gpj) instead of makefiles, but it also generates makefiles behind our backs. Switching to other tools to edit configuration options now becomes a near impossibility, as everything that can be configured must be configured through that vendor's random interface, buttons and menus, an interface different from the ones already familiar to you, an interface bound to change. Some IDEs even have config files that are incompatible from one version to the next,

[...] my main complaint with other tools is proprietary config file formats which leave me stuck using their low-effort interface for absolutely every change before compiling. If I can't use my favorite text-based tool to diff/compare and make meaningful configuration changes, I will curse every extra click or scroll that I'm forced to perform in their proprietary software just to change an option for which I already know the name.

A reddit user

As an example of how complicated and how non-standard such a setup can become, see the official Getting Started with Green Hills Tools guide from Renesas, a semiconductor manufacturer, as there is no freely available online documentation or tutorial from Green Hills itself on how all this works.

Speaking of Renesas, they proudly boast their own accomplishments in the IDE field,

CS+ is an IDE integrating the necessary tools for the development phase of software (e.g. design, implementation, and debugging) into a single platform. By providing an integrated environment, it is possible to perform all development using just this product, without the need to use many different tools separately.

CS+ V8.05.00 Integrated Development Environment: User’s Manual

But Renesas goes one step further into unexplored territory. They have four different IDEs in their shop, and even different, incompatible proprietary config file formats at that! Even inside the same company it proves quite impossible to decide on a single file format. There is ample documentation on how to migrate between IDEs (or here), and even from their IDEs to Green Hills. That's just a hint of how complicated even the business of writing code in a text editor can become.

Rules and processes are enforced with non-formal methods

Even in a dead language like C that offers no means to create incomprehensible DSLs and rich abstractions one can still bring to life ugly monsters,

While C programs can be laid out in a structured and comprehensible manner, C makes it easy for programmers to write obscure code that is difficult to understand.

There are areas of the language that are commonly misunderstood by programmers. For example, C has more operators than some other languages and consequently has a high number of different operator precedence levels, some of which are not intuitive. The type rules provided by C can also be confusing to programmers who are familiar with strongly-typed languages. For example, operands may be promoted to wider types, meaning that the type resulting from an operation is not necessarily the same as that of the operands.

C programs can be compiled into small and efficient machine code, but the trade-off is that there is a very limited degree of run-time checking. C programs generally do not provide run-time checking for common problems such as arithmetic exceptions (e.g. divide by zero), overflow, validity of pointers, or array bound errors. The C philosophy is that the programmer is responsible for making such checks explicitly.

MISRA C:2012 Guidelines for the use of the C language in critical systems

The solution in automotive is to define standards that enforce certain coding practices: ban features like multiple returns from a single function, ban recursive functions or the dynamic allocation of memory (malloc) at runtime. But since one needs global constants to at least keep ECU configuration options (see Calibration, for example), this last rule forces a static pre-allocation of all memory at build time. The usual approach is to keep tables with variable names, addresses and memory sizes in excel sheets, but since excel sheets are outside of C syntax, guess what we need to bring in again? Code generators! I won't dwell on this issue again, as it's already familiar from the dbc files discussion.
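
To picture the result, here is a minimal sketch of what such a generator might emit, with all names, offsets and sizes invented: every buffer is carved out of a pool allocated statically at build time, no malloc anywhere, and the excel-sheet table becomes a generated C table,

#include <stddef.h>
#include <stdint.h>

#define CFG_POOL_SIZE 64u

/* Statically pre-allocated storage for all configuration options. */
static uint8_t cfg_pool[CFG_POOL_SIZE];

/* Generated from the (hypothetical) excel sheet: name, offset, size. */
typedef struct {
    const char *name;
    size_t      offset;
    size_t      size;
} cfg_entry_t;

static const cfg_entry_t cfg_table[] = {
    { "max_vehicle_speed", 0u, 2u },
    { "wiper_interval_ms", 2u, 4u },
};

/* Resolve an entry to its place in the pool. */
static uint8_t *cfg_data(const cfg_entry_t *entry)
{
    return &cfg_pool[entry->offset];
}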

The somewhat good news is that some of the MISRA rules can be enforced with static checkers. These can generate reports about which rules are followed and which are not, similar to a compiler. The bad news is that some rules can be checked by manual inspection alone, so you put your faith in the good will and patience of the developer,

It is possible to check that C source code complies with MISRA C by means of inspection alone. However, this is likely to be extremely time-consuming and error prone. Any realistic process for checking code against MISRA C will therefore involve the use of at least one static analysis tool.

All of the factors that apply to compiler selection apply also to the selection of static analysis tools, although the validation of analysis tools is a little different from that of compilers. An ideal static analysis tool would:

It is not, and never will be, possible to produce a static analysis tool that meets this ideal behavior. The ability to detect the maximum number of violations possible, while minimizing the number of false positive messages, is therefore an important factor in choosing a tool.

There is a wide range of tools available with execution times ranging from seconds to days. Broadly speaking, tools that consume less time are more likely to produce false positives than those that consume large amounts of time. Consideration should also be given to the balance between analysis time and analysis precision during tool selection.

Analysis tools vary in their emphasis. Some might be general purpose, whereas others might focus on performing a thorough analysis of a subset of potential issues. Thus it might be necessary to use more than one tool in order to maximize the coverage of issues.

Each mandatory, required and advisory rule is classified as decidable or undecidable. This classification describes the theoretical ability of a static analyzer to answer the question “Does this code comply with this rule?” The directives are not classified in this way because it is impossible, given only the source code, to devise an algorithm that could guarantee to check for compliance.

MISRA C:2012 Guidelines for the use of the C language in critical systems

One feature, or a limitation, is that the committee admits not all use cases can be foreseen by the standard, so it offers a deviation feature. A deviation is another non-formal piece of text saying this or that rule can be skipped in this or that circumstance: It is important that such deviations are properly recorded and authorized. Individual programmers can be prevented from deviating guidelines by means of a formal authorization procedure. Formal record keeping is good practice and supports any safety argument that might be made for the software.

So MISRA rules save some headaches and errors when applied, but there are places where they can't be applied, and that is necessary for the project, and good. After a while you kinda start ignoring MISRA with we need this here anyway, MISRA doesn't know what it's doing, I'll just check this in and be merry. It is a standard that requires extra tooling to be enforced, and it generates additional documents, be they Word documents, code comments, or what have you.

I don't think this is truly realized, but all this information written down in Word documents and requirements, knowledge passed on in meetings, little tricks and short hacks solved in video calls, information passed in emails and instant messaging, is information that is part of the project, that is put down in some kind of language, a non-formal one like English or tables or boxes or scribbles or laughs and jokes. The more a project has of these, the harder it is to talk about and verify the actual state of the project, as it is obviously impossible to evaluate a non-formal language like you would a formal programming language, and thus it becomes that much harder to explore, play with and test the project.

Domain specific languages are everywhere

Some colleagues openly admitted to not knowing C at all, even though their CVs and their positions assumed such knowledge. But how? They've developed visual models in Simulink, a block diagram environment used to design systems with multidomain models, simulate before moving to hardware, and deploy without writing code. Unlike the CANoe ECU models, which are meant to aid in the understanding of the project but need a software developer to write the ECU code, this graphical programming language also generates the C code, efficient and MISRA compliant at that. See the Automotive Code Generation page for details. C is here treated as a kind of bytecode, never to be seen, never to be touched. Now don't go buzzing me with that we-don't-need-nor-like-DSLs-here silly story. I won't believe you anymore. I can't!

The graphical programming language, similar to LabVIEW if you're more familiar with that, is used to move blocks around and connect them to pass the data and the computational results from one block to another, as one would call functions and return their values in a classic text-based programming language. Behind each block is Matlab, a high-level programming language designed for engineers and scientists that expresses matrix and array mathematics directly. Some user success stories for warm-up follow,

Simulink is particularly helpful in two stages of our development process. Early on, it helps us try new ideas and visualize how they will work. After generating code and conducting in-vehicle tests, we can run multiple simulations, refine the design, and regenerate code for the next iteration.

Jonny Andersson, Scania

Whether you’re developing controls for [...] an autonomous vehicle, an excavator, [...], if your team is manually writing code and using document-based requirements capture, the only way to answer these questions will be through trial and error or testing on a physical prototype. And if a single requirement changes, the entire system will have to be recoded and rebuilt, delaying the project by days, or even weeks.

Using Model-Based Design with MATLAB® and Simulink®, instead of handwritten code and documents, you create a system model [...]. You can simulate the model at any point to get an instant view of system behavior and to test out multiple what-if scenarios without risk, without delay, and without reliance on costly hardware.

Model-Based Design for Embedded Control Systems

Exactly the issue I've touched upon in the introduction. C's abstractions are not powerful enough to rise above dumb low-level details and get a clear picture of what you're trying to achieve, talk about it, share those bigger ideas with colleagues and refine them. It also touches on that other point, namely iteration speed: how fast can you go from one version to the next, how fast can you change the implementation, test it, rewrite it, change your assumptions, see something tangible instead of just imagining things and talking about them in meetings and on informal communication channels, be they video or texting.

But, my apologies for the interruption, I'll let these people talk for themselves, since they touch on all the pain points so beautifully,

In a traditional workflow, where requirements are captured in documents, hand off can lead to errors and delay. Often, the engineers creating the design documents or requirements are different from those who design the system. Requirements may be “thrown over a wall,” meaning there’s no clear or consistent communication between the two teams.

In Model-Based Design, you author, analyze, and manage requirements within your Simulink model. You can create rich text requirements with custom attributes and link them to designs, code, and tests. Requirements can also be imported and synchronized from external sources such as requirements management tools. When a requirement linked to the design changes, you receive automatic notification. As a result, you can identify the part of the design or test directly affected by the change and take appropriate action to address it.

In a traditional workflow, embedded code must be handwritten from system models or from scratch. Software engineers write control algorithms based on specifications written by control systems engineers. Each step in this process—writing the specification, manually coding the algorithms, and debugging the handwritten code can be both time-consuming and error-prone. With Model-Based Design, instead of writing thousands of lines of code by hand, you generate code directly from your model, and the model acts as a bridge between the software engineers and the control systems engineers. The generated code can be used for rapid prototyping or production. Rapid prototyping provides a fast and inexpensive way to test algorithms on hardware in real time and perform design iterations in minutes rather than weeks. You can use prototype hardware or your production ECU. With the same rapid prototyping hardware and design models, you can conduct hardware-in-the-loop testing and other test and verification activities to validate hardware and software designs before production. Production code generation converts your model into the actual code that will be implemented on the production embedded system. The generated code can be optimized for specific processor architectures and integrated with handwritten legacy code.

Model-Based Design for Embedded Control Systems

Check out an example of generating C code from a Simulink model with the Simulink Coder. Since the generated code must fit in with the rest of the project, sometimes adjustments are necessary, like in this example on how to configure the model for C code generation.
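
The generated code typically revolves around a handful of entry points that the hand-written part of the project calls at a fixed rate. Here is a minimal sketch of that shape, with the model name, signals and types invented as stand-ins for what the coder would actually emit,

#include <stdint.h>

/* Stand-ins for the generated interface (all names invented). */
typedef struct { double vehicle_speed; }    ExtU_cruise_T;  /* model inputs  */
typedef struct { double throttle_request; } ExtY_cruise_T;  /* model outputs */

static ExtU_cruise_T cruise_U;
static ExtY_cruise_T cruise_Y;

static void cruise_initialize(void) { /* generated initialization */ }
static void cruise_step(void)       { /* generated controller logic */ }

/* The hand-written scheduler only wires inputs, runs one model step
   at a fixed rate and forwards the outputs. */
int main(void)
{
    cruise_initialize();
    for (;;) {
        cruise_U.vehicle_speed = 50.0;    /* would come from a sensor */
        cruise_step();
        (void)cruise_Y.throttle_request;  /* would go to the actuator */
    }
}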

Needless to say, this is a whole new and complex language, the we-don't-need-no-special-languages-besides-C, we-need-only-if's-and-else's-and-everything-as-concrete-as-possible people be damned. No wonder engineers working with Simulink are not necessarily C developers. Simulink is a brand new universe that takes years to swim safely through its waters, regardless of what the marketing white paper above says about it being easy to use and avoiding the time-consuming and error-prone method of hand-writing your own code. HAND-WRITING YOUR CODE?! That's an activity for future-to-be software anarchists! Regardless, the suits of the present seem to have developed a fondness for it, and for code that generates code. Code that generates code?! Wait, I've heard that idea before, and it usually gets smashed to pieces as being irrelevant for us here. Regardless, I'll pass the microphone again,

Compact, efficient code: the code automatically generated with Embedded Coder and Simulink Coder required about 16% less RAM than the handwritten code used on a previous version of the Cruise Controller; the code met all project requirements for efficiency and structure.

High test efficiency: debugging the control software on the desktop instead of in the vehicle enabled the Daimler team to reduce the time and cost associated with resolving software problems.

Fast development: the entire project, including analysis, restructuring, modeling, and testing, took just 18 months. It would have been nearly impossible to achieve this project deadline without the use of simulation, production code generation, and processor-in-the-loop capabilities offered by MathWorks.

Daimler Designs Cruise Controller for Mercedes-Benz Trucks

Code development costs cut by two-thirds: by modeling the control application software in Simulink and using the model as an executable specification, we have eliminated misunderstandings that can occur between OEMs and suppliers. Further, by generating code from our models, we have eliminated the bugs and human errors that come with hand-coding. These improvements enabled us to cut code development costs by roughly two-thirds and shorten development times.

Nissan Accelerates Development and Testing of Engine Control Software

We also anticipated many design iterations, so we wanted an easy way to visualize results and debug our designs. In addition, we wanted to save time by generating code, but the code had to be efficient, as the CPU load on our electronics control unit (ECU) was already about 60% when we started the sensor fusion project.

Developing Advanced Emergency Braking Systems at Scania

According to my experience with Toyota, pretty much all of the code is written in C. But there are some several functions mainly about bit manipulation which are written in assembly. However, they are shifting towards model-based development using higher level language like MATLAB, and convert back to C to shorten the development cycle and improve in readability, re usability and code maintenance.

A Quora user on programming languages for ECUs

C is truly dead at this point. But why not stab it a little longer? What would be the harm in it, anyway?

ASCET provides an innovative solution for the functional and software development of modern embedded software systems. ASCET supports every step of the development process with a new approach to modeling, code generation and simulation, thus making higher quality, shorter innovation cycles and cost reductions a reality.

The ASCET tools support model-based software development. In model-based development, you construct an executable specification – the model – of your system and establish its properties through simulation and testing in early stages of development. When you are satisfied that the model behaves as required, it can be converted automatically to production quality code. The key advantage of model-based development is that the software system can be designed by domain experts, using domain-specific notions, independently from knowing any details how it will be realized by an implementation.

ETAS ASCET Developer 7.9.0: Getting Started
ASCET: Getting Started

Die C, die, even though you're already dead!

But C has a dark side. It is too easy for errors to creep into the code that can be extremely difficult to find. Problems start with the syntax because it makes writing code vulnerable to error. For example, optional braces, assignment in expressions, and automatic switch/case fall through, etc. Then there are semantically dubious or complex features that are difficult to use correctly and encourage “programming on the edge of safety.” For example, goto statements, pointers, and integral promotion. These aspects can also interact in dangerous ways.

ETAS is rising to all these challenges with a new language to engineer safe and secure software effectively: Embedded Software Development Language (ESDL). ESDL eliminates typical C pitfalls and, in addition, enables software reuse, simplifies maintenance, and supports product-line variant engineering. ESDL enables developers to spend time solving problems instead of programming around the inadequacies of C.

Using code generation to create C: Efficient use of ESDL in development is enabled with ETAS ASCET DEVELOPER 7, an Eclipse-based Integrated Development Environment (IDE) and a C code generator.

The IDE provides modern editing features like language templates, content assistance proposals and quick fixes for problems. This makes ESDL easy to learn for beginners. ASCET-DEVELOPER 7 also continually checks for ESDL programming violations, calculates quality metrics, and offers best-practice recommendations. Feedback is provided to developers “on-the-fly” during edit time, therefore reducing the time between making a coding error and its detection to zero.

ESDL: Safety and Security in Code

Same story again from ETAS, a subsidiary of Bosch. It is interesting to note that the model and the code are seen as different entities. In the Lisp world, the model that generates the Lisp code is still Lisp. That's the big difference. In the automotive world, the model that generates C code is something entirely different: it could be a visual language, it could make use of XML files, it could implement a scripting language. All of these combined define the modeling language.
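
A minimal sketch of what that means in practice, with invented names: the "model" below is an ordinary Common Lisp macro, and what it generates is more Lisp, living in the same image and inspectable with macroexpand like any other code,

;; A first-order low-pass filter "modeled" as a macro.
(defmacro define-low-pass (name alpha)
  `(let ((state 0.0))
     (defun ,name (sample)
       (incf state (* ,alpha (- sample state)))
       state)))

;; The model and its expansion are both Lisp:
;; (macroexpand-1 '(define-low-pass smooth 0.2))
(define-low-pass smooth 0.2)
(smooth 10.0) ; => 2.0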

Sure, it is easy to fall prey to such hype, and most of it is maybe just that: hype and marketing and people wanting to make money and gain market share. Nothing wrong with that. But having never seen more powerful systems, having never played with the parentheses, for example, makes one that much more gullible and easily impressed by such promises.

Do you want to explore Siemens' Simcenter Embedded Software Designer on your own? Go on!

Move away from the language completely and just click buttons

Your mistake is in assuming that you'll write code when using Autosar. Autosar exists so that the company doesn't need to hire developers to write code. The ideal is to buy everything from suppliers who also take on the liability in case there are bugs and recalls need to be made. If you work with Autosar, you need to be VERY clear with your future employer about what EXACTLY your duties will be. It is very likely that your role as an Autosar developer will mainly consist of integrating Autosar components and configuring them.

Lol, you are joking right? Any person who values their time and career future and wants to actually develop something fun would stay away from AUTOSAR.

I like cars and tried the Autosar route. It kind of sucked the fun out of development for me.

Reddit answers to a user asking if they should go with Autosar in their career

Autosar's goal is to aid in code reuse through standard modules and interfaces. The modules are implemented and sold by third-party vendors, either independently or as a whole stack, while their configuration and the connections between them (the function calls, after all) are done through visual tools and code generators.

There are two main Autosar schools of thought: the Classic Platform, where the ECU's software is split into three layers, the Layered Software Architecture document having all the details and the pretty pictures, and the Adaptive Platform, which implements the AUTOSAR Runtime for Adaptive Applications (ARA). Two types of interfaces are available, services and APIs, and the platform consists of functional clusters grouped into services and the AUTOSAR Adaptive Basis. Yes, I really don't know what that means or why they are called platforms.

The reason for this split, aside from the increased complexity and high-performance computing needs mentioned in the Explanation of Adaptive Platform Software Architecture and a really surprising move on the part of the more than 350 Autosar legislators, is that these platforms specifically target C and C++. Autosar is not language-agnostic, though it boasts itself as a rich and modern programming environment. As such, the standard is quite low-level: it specifies the details of what goes where but does not build new abstractions to make it easy to talk about the project. It builds visual tools instead, as we shall see. A change to a different programming language, like Rust, would again force a change to yet another new standard, as the change from C to C++ did, if that future ever comes before Autosar anarchists storm the castle and burn it to the ground,

If you want a dead end career, autosar is a good place to start. I could barely sustain for a month working with that messed up software.

Reddit: comment to the question Is automotive embedded Autosar only?

It also brings in new types that have to be mapped back to C++, similar to a2l. Remember the dictionaries and the transport medium across that rift? Either way, the gist is that modules developed by one vendor, if they follow the correct Autosar platform and platform version, can be used like Lego bricks. Plug them into a project using the same Autosar platform and version alongside other modules developed by the same or even a different vendor and it should all work; their interfaces will have the structure and semantics defined by Autosar. Hardly groundbreaking! Though it needs over 2300 official documents spanning more than 13000 pages to specify in the minutest of details the architecture of the entire software stack of the ECU itself, what goes into each module, what goes out, the types of data, the function signatures, all that.

The AUTOSAR application communication interface shall allow AUTOSAR applications to use the same interface definition independently of whether they are located on the same or on different ECUs. A standardized interface definition for applications is a prerequisite for the reuse of software and hardware independent deployment.

Autosar: Main Requirements

Things are a bit messy when it comes to file formats. Unlike the safe lands we've seen before, the XML file is again the choice for such module specification, though Autosar also requires UML diagrams, Excel files, GUI tools and splashy visuals,

In a development process, many different tools with different representation of AUTOSAR models are used (Excel Sheets, Modeling Tools, UML, XML, etc.). Each tool and its underlying representation of data have their advantages and disadvantages. These tools and representations can be grouped into technological spaces.

Using XML and UML within AUTOSAR combines the strength of both technological spaces: AUTOSAR defined templates for data that is exchanged in AUTOSAR. Since XML is widely accepted as a standard for representation and exchange of structured data it was chosen to be the basis for the exchange of AUTOSAR models. Due to the complexity of the data and its interrelationships a manual creation of a consistent AUTOSAR XML schema turned out to be time-consuming and error prone. In addition the expressive power of XML schema is not sufficient for expressing content related constraints between data entities.

Therefore a meta-model based approach was chosen to graphically describe the templates by means of UML2.0 class diagrams. Constraints that cannot be formulated graphically are described textually in the template specifications respectively as OCL (Object constraint language). The UML model which defines all data entities and interrelationships that can be used for describing AUTOSAR systems and related artifacts is called AUTOSAR meta-model. An instance of the meta-model, i.e. a concrete description of software components, etc., is called an AUTOSAR model.

Autosar here even mentions the reason behind needing a new format: commonly readable by both parties across the rift. Commonly readable again, not human readable. Here are some arxml examples: PortInterfaces.arxml and EcuM_swc.arxml.
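
Purely to illustrate the thesis, not anything Autosar actually offers: if the description language were the implementation language, the same interface could be plain Lisp data, expanded in place by a macro, with no generator tool and no exchange format to carry across the rift. A hypothetical sketch, all names mine,

;; A hypothetical sender-receiver interface as s-expressions;
;; arxml encodes facts of this kind in dozens of XML elements.
(defmacro define-sender-receiver-interface (name &rest elements)
  `(progn
     ,@(mapcar (lambda (el)
                 (destructuring-bind (el-name &key type (initial 0)) el
                   (declare (ignore type)) ; a real version would check it
                   `(let ((,el-name ,initial))
                      (defun ,(intern (format nil "READ-~a-~a" name el-name)) ()
                        ,el-name))))
               elements)))

(define-sender-receiver-interface engine-speed
  (rpm :type uint16 :initial 0))
;; expands into (let ((rpm 0)) (defun read-engine-speed-rpm () rpm))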

If in CANoe one would bring in help to model a single ECU, in Autosar one models the entire car by way of these Components, which then get assigned either to the same ECU or to different ECUs connected to the same CAN network, for example. This is somewhat similar to what Kubernetes does when you want another instance of your Docker app up and running without having to worry about where it will be deployed.
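
In the same hypothetical vein, assigning components to ECUs is, at bottom, a function from one list to another; a sketch with invented names,

;; Hypothetical: the "entire car" as data, deployment as a function.
(defparameter *components* '(wipers cruise-control brake-assist))
(defparameter *ecus* '(ecu-front ecu-cabin))

(defun deploy (components ecus)
  "Round-robin components onto ECUs; real constraints would go here."
  (loop for c in components
        for i from 0
        collect (cons c (nth (mod i (length ecus)) ecus))))

;; (deploy *components* *ecus*)
;; => ((WIPERS . ECU-FRONT) (CRUISE-CONTROL . ECU-CABIN)
;;     (BRAKE-ASSIST . ECU-FRONT))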

Autosar comes with its own terminology (runnables, software components, PDUs, Ports, splittables, etc.; check the 140-page-long glossary for more). The UML diagrams even have their own original icons (see Autosar: Virtual Functional Bus, Chapter 3: Overall mechanisms and concepts).

GUI tooling is the way to develop software with Autosar, from configuring, splitting, merging and modifying the arxml files to generating the C or C++ code. There are some old friends in this space, with Vector offering a complete solution for your projects: tools, basic software, engineering services, on-site support and training classes. Additionally, numerous other tools for ECU testing as well as measurement and calibration are available. Similar to Matlab's Simulink, there is an emphasis on visual programming, user-friendliness and easy-to-ride wild horses. Everything easy and intuitive. Has nobody here read Teach Yourself Programming in Ten Years? Or Teach Yourself Programming, for that matter? Or Programming? Was it replaced with Model, Build & Execute App in Only 10 Minutes, instead?

DaVinci Developer Classic is a tool for designing the architecture of software components (SWCs) for AUTOSAR Classic ECUs. This tool lets you create a graphic design of the interfaces, define the internal behavior with runnable entities and link the SWCs to one another.

This function lets you generate the header file and the implementation template file for C-based applications easily and quickly from the DaVinci Developer Classic tool.

Advantages: User-friendly and easy design of AUTOSAR SWCs; Numerous graphic editing functions; Check the SWCs for AUTOSAR conformity; Link model-based development tools via ARXML

Vector: DaVinci Developer Fact Sheet
Vector: DaVinci Developer Classic

DaVinci's Modeling Language (DML) is a kind of visual language that all tools in this Autosar sphere employ, the goal being a simplified representation of the ARXML models. Check the DaVinci Developer Adaptive for extra info, video sessions included.

Dassault Systèmes offers the AUTOSAR Builder™, another user-friendly wizard (check page 6 for a beautiful model),

AUTOSAR Builder is 100% AUTOSAR standard-compliant. To ease the creation of complex AUTOSAR models, it offers user friendly wizards and advanced graphical and table-based editors, guiding the user thru typical AUTOSAR design steps. Therefore, it hides the complexity of AUTOSAR design activity through features which not only prevent the user from creating erroneous designs but also by offering automated completion/creation of AUTOSAR design elements.

Dassault Systèmes: CATIA Autosar Builder

Tresos Studio from Elektrobit is another wizard. Check how one would go about building a new project with EB Tresos. Buttons, clicks, new terminology. Easy and fun!

ECU basic software configuration, validation, and generation in one single environment. Users benefit from one tool environment for configuration, validation, and code generation instead of juggling multiple tools. Multithreading mechanisms are used to save time for code generation. Various assistant functions and wizards ease day-to-day work.

Elektrobit: EB Tresos Studio

Engineers leave companies, and the field completely, because of Autosar, but the business suits seem to like it, since buttons one can click are something they can understand. While kings dream of impossibly expensive grand palaces, the soldiers in the field know how crazy all this is; they feel and see that they've become button clickers instead of software engineers. But I'll let others share the pain and misery of having to live with a DSL invented not by one person but by an association of over 350 automotive partners worldwide,

If you want to increase the chances of being miserable in your professional career start working with autosar. I worked 3 months with it, and just bailed, I spend more time dealing with the tooling than actually implementing a feature.

The only way I learned any of that was to screw around with it and very deliberately ask questions when updates were made so that I could learn. By the end of it, I was able to make simple changes but honestly I still never understood what was going on behind the scenes. If you get good at it, it’s decent job security. But it’s gonna be quite a ride.

What you do do all day long is use Autosar GUIs. For those, no public tutorials exist as far as I'm aware because not only do these tools cost serious money, the seminars taught on how to use them aren't free either. This exactly is why you can't meaningfully learn Autosar without having an Autosar-related job. You just don't have access to the tools and 3rd party libraries.

Users' feelings on Reddit about their own experience with Autosar

AUTOSAR takes a long time to get a hold of. I got 1.5 years in it and I still haven't understood that thing properly. You can spend 10 years working on just Com stack or diag stack that would still be about 15-20% of autosar architecture. In fact you can spend your entire career with just that one stack. I know folks who have worked 5 years in one stack and struggle when they change over to a new stack. It's a painful software to deal with, at least on the vector daVinci software, you make one drop-down menu selection and your configuration in 2 other places would have been changed. The documentation for the architecture is published on the autosar website and it's free to read. The software is a whole other story. You can't buy the software because it's really expensive. You can't pirate download the software because the automobile companies usually send a list of features needed and the companies like vector which provide the software just release the software with the requested features for that specific project. You gotta have to bang your head a lot and struggle for a few initially to get a hold of the thing.

Basically a tool that is supposed to make a bunch of automotive makers happy but in reality it leaves everyone equally unhappy, but because it is standardized it gets used anyway.

I actually LOVE automotive. But I would never touch any AUTOSAR project. AUTOSAR is basically just a convoluted way to deflect blame (and liability) when some weird software bugs happen. I would go as far as saying that AUTOSAR makes software quality worse, because nobody understands how this complex monster of XML, headers, configs, #ifdevs etc works. I prefer to spend my time to write solid architecture, easy understandable code, and test the shit out of my code. Complexity is the enemy of quality.

Now I just sit back with a margarita in my hand and enjoy watching the European car brands being out-innovated by more nimble, flexible, maintainable, adaptable, robust, fitter Asian brands. Once the last German automotive supplier is gone kaputt, AUTOSAR will be forgotten by history. Darwin always wins in the end....

Autosar is bad. It is bad as specification(nonono it is not a standard, it is a specification) for multiple reason. But that is not actually a problem. The problem is implementations. They are horrible, they are bad at any angle. Buggy tooling, ignoring actual autosar(they call it a deviation), slow, incompatible, ugly implemented bsp level, where you have to learn tens and hundreds of workarounds and tricks. Simplest operation takes weeks to implement.

It is the ecosystem around the autosar that is cancerous. The tools are buggy and atrocious. The code they produce is unreadable and in some instances buggy as well. The documentation is also non-existent. The complexity is bigger than developing by hand but I guess you gain the time of testing and complying with all the standards. And all these at a very high price point. The few months I worked with some of these tools were the most miserable months of my career. If you work above mcal you are not an embedded developer you are a systems engineer. Where I work the people that have only worked on automotive lack very fundamental knowledge of embedded. I wonder what will happen in the automotive industry when there will be very few that grasp the low level.

Vector daVinci is really a pain in the ... . The documentation is somehow vague , they do not offer too many details. And i found a lot of bugs in the cbd package that i received from them after being bought for more than a couple thousands. On the other hand , i had a project in which i had to rewrite manually the CanNm stack (non autosar project which used autosar type NM) and still has problems. The idea of standardization is not bad by itself, only the complexity of configuring correctly the project is driving me nuts.

Reddit: Hate for Autosar

The future: self-driving cars go all in on DSLs

The wannabe Caesars have done it. They have left no stone unturned; not a single place on the continent remains untouched by their grandiosity. And as such, they have legislated new languages with which one now speaks of driving conditions and of the interactions between cars, though I don't see what new things these introduce besides the human-readable Python-like syntax, one that provides for a lower learning curve for domain experts while supporting a domain model that is directly recognizable by domain experts (ASAM introduction), and a need to describe the behavior of the autonomous vehicle as well as other actors or entities in the environment.

This standard started out as an XML-based format but evolved into a full-fledged DSL, the ASAM OpenSCENARIO, which defines a declarative, constraint-based and aspect-oriented programming language. All in one. Here is a scenario that requires an initial speed of 40 to 60 kph; pieces of code are now called not functions but scenarios, much as Autosar reinvented the familiar modules terminology into components,

scenario my_scenario:
    s: speed with:
        keep (it in [40kph..60kph])

    do serial:
        init: car1.drive() with:
            # note that it is recommended to specify an initial range
            # for speed and not a concrete speed
            speed(speed: s, at: start)

ASAM: OpenSCENARIO DSL, Writing Reusable Scenarios

I won't dwell much on this one, mostly because I haven't used it, except to note that other companies are basing their own DSLs on OpenSCENARIO, like Foretellix's Measurable Scenario Description Language (M-SDL),

The Measurable Scenario Description Language (M-SDL) is a mostly declarative programming language. The only scenario that executes automatically is the top-level scenario, top.main. You control the execution flow of the program by adding scenarios to top.main. M-SDL is an aspect-oriented programming language. This means you can modify the behavior or aspects of some or all instances of an object to suit the purposes of a particular verification test, without disturbing the original description of the object.

Foretellix: Measurable Scenario Description Language Reference

Here is an example from the reference manual that shows how to define a new scenario called two_phases. It defines a single actor, car1, which is a green truck. It uses the serial operator to activate the car1.drive scenario, and it applies the speed() modifier,

# A two-phase scenario
scenario traffic.two_phases: # Scenario name
    # Define the cars with specific attributes
    car1: car with:
        keep(it.color == green)
        keep(it.category == truck)
    path: path # a route from the map; specify map in the test

    # Define the behavior
    do serial:
        phase1: car1.drive(path: path) with:
            speed(speed: 0kph, at: start)
            speed(speed: 10kph, at: end)
        phase2: car1.drive(path: path) with:
            speed(speed: [10..15]kph)

Foretellix: Measurable Scenario Description Language Reference

At first glance, and similar to the CAPL observation I've made above, it doesn't look to me like these introduce any really new or desperately needed language abstractions that aren't available in other languages.
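
For what it's worth, the shape of the two_phases example above is a couple of macros away in Common Lisp. A runnable sketch under invented names, not a claim about M-SDL's semantics (actors and constraint solving elided),

;; Scenarios as data built by two tiny macros.
(defstruct phase name speed-from speed-to)

(defmacro serial (&rest phases)
  "Collect phases in order; a real engine would execute them."
  `(list ,@phases))

(defmacro drive-phase (name &key (from 0) (to from))
  `(make-phase :name ',name :speed-from ,from :speed-to ,to))

(defparameter *two-phases*
  (serial (drive-phase phase1 :from 0 :to 10)
          (drive-phase phase2 :from 10 :to 15)))

;; *two-phases* is ordinary, inspectable Lisp data; the whole
;; language is available around it, no second compiler needed.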

There is even a specification for the road network, in XML of course,

ASAM OpenDRIVE was developed in response to demand for the specification of an exchange format to define static road networks that can be used in driving simulation applications.

The ASAM OpenDRIVE Specification specifies the file format for static road network descriptions. The Extensible Markup Language (XML) is used to represent these descriptions. The ASAM OpenDRIVE Specification specifies how to model static road networks. In more detail, it specifies the structure, the sequence, the elements, and values to represent static road networks.

ASAM OpenDRIVE

If this video presentation of ASAM OpenDRIVE doesn't convince you of its value, there is a different standard for road networks one can use, a standard developed by the same German car manufacturers, also based on XML,

The Navigation Data Standard (NDS) is a standardized format for automotive-grade navigation databases, jointly developed by automobile manufacturers and suppliers. NDS is an association registered in Germany. Members are automotive OEMs, map data providers, and navigation device/application providers.

Wikipedia: Navigation Data Standard
NDS Tools: Providing Our Members With Everything They Need

Well, the list of examples could go on and on, but we've reached the edge of the continent. What remains is to turn our horses around and explore the little kingdoms we've already encountered on our journey in more detail. Or, who knows, you can venture into the rough seas ahead and bring to light new DSLs, file formats, standards and intuitively easy-to-use tools, now that your appetite has been whetted.

Conclusion

The pristine C land has been torn to pieces. Every imaginable torture has been inflicted upon it. Its use restricted, its features forbidden, fresh blood brought in to replace its inadequacies, people banned for even seeing its face, its true self painted over with colorful interfaces. Nobody wants to see its true image, nobody to hear its voice. Yet, they say it is the thing they all need. Like health, they all praise it, then drink themselves to death. A plethora of tools to increase the speed of development, to inspect a running system, to exchange requirements and to reach agreement between teams and stakeholders as to what needs to be developed, together with extra programming languages, have taken its place.

If Lisp gives its practitioners the freedom to dream and play with their own wild abstractions, its absence gives companies the opportunity to flood the market with junk-food software and keep developers hooked on these cheap, calorie-intensive but nutritionally void substitutes. And to build fortunes, as a result.

On the other hand, I feel skeptical that one tool can solve all problems. I feel skeptical that Lisp can be used everywhere. The fact that it works so well in a couple of instances, like it does for Emacs or StumpWM among others, is no real argument. The cost, the energy, the developers' know-how must all be counted in as well. There is only so much talent lying around in any industry. Not everybody can live in designer houses; some must live in cramped apartment blocks. It is possible that the software industry has stretched itself thin, most probably driven by profits. Software is the promised land where investors multiply their fortunes. When more and more of these gold-seekers pour in, the human-resource needs increase as well. At some point there is a lack of sufficiently trained or naturally inclined practitioners able to do the job. So naturally the response is to lower the entry threshold. Naturally the response is to simplify, improve and expand on the tools these newcomers use. The more intuitive, the better. With the negative effects we see all around us. It is possible indeed that a small team of knowledgeable parentheses masters could build a car's software in Lisp, do it better and more efficiently than what is now available in the industry with all these tools we've seen. But add dozens of car companies into the mix, all wanting a market share in this gold rush, and the number of available teams riding wild horses and still being able to take a shot decreases dramatically. So what is one supposed to do? Bring in the tamed horses. The old ones, too. Even the lame ones! In times of war, even donkeys are good enough and can count as horses for a while.