The Mnemosyne project is led by GrammaTech in collaboration with UT
Austin and MIT, and is sponsored by DARPA and AFRL. Our work on
Mnemosyne started in May 2020. If you want to chat, you might find us
on the Mnemosyne/docs git repository, which holds all prior versions
of this site; the commit date of each version is concurrent with its
publish date.
The goal for Mnemosyne is to provide an automated software development
environment which is usable and enables developers to build better
software.
Our efforts will be guided by the following measurable goals. These
goals will be measured through our own use of Mnemosyne:
- Usability. How easily is Mnemosyne used? Is it difficult to
  identify, invoke, and then apply the results of the synthesis
  modules? We will attempt to continually evaluate usability in our
  own development using Mnemosyne. This is our most subjective
  metric.
- Quality. How good is the synthesized code returned by Mnemosyne?
  More generally, what is the quality of entire software projects
  developed using Mnemosyne? Evaluation of this metric will leverage
  the many readily accessible tools for automated quality assessment
  of software projects, from linters and static analyzers to dynamic
  fuzzers.
- Scalability. There are two aspects to this question, which we
  will attempt to measure independently. First, how do our
  individual synthesis modules scale with task size, from single
  expressions and statements up to functions, and perhaps eventually
  whole modules? At least initially we may use lines of code (LOC)
  of synthesized code as a proxy for complexity. Second, how well
  does Mnemosyne scale to the size of the overall project? Because
  Mnemosyne relies on the software developer to decompose the
  top-level requirements into pieces which are tractable to the
  available synthesis modules, the system should productively
  contribute to software projects of any scale (or to portions of a
  software project). However, it may still be the case that certain
  domains of software projects benefit more directly from the use of
  Mnemosyne for developer assistance.
- Automation. We will measure the overall impact of automation.
  We plan to augment our Argot-server to track the provenance of
  every character of code. We can then review this information to
  measure the degree to which specific synthesis modules and the
  developer contributed to the code base of a project over the
  course of its development.
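Per-character provenance tracking of this kind can be sketched with a text buffer that keeps a parallel record of which agent produced each character. The class and method names below are hypothetical illustrations, not the Argot-server's actual API.

```python
from collections import Counter

class ProvenanceBuffer:
    """Hypothetical sketch of per-character provenance: a text buffer
    that records, for every character, which agent produced it (the
    developer or a named synthesis module)."""

    def __init__(self):
        self.chars = []   # the characters of the buffer
        self.origin = []  # parallel list: the agent behind each char

    def insert(self, pos: int, text: str, agent: str) -> None:
        """Insert text at pos, attributing every character to agent."""
        self.chars[pos:pos] = list(text)
        self.origin[pos:pos] = [agent] * len(text)

    def contributions(self) -> dict:
        """Fraction of the buffer contributed by each agent."""
        counts = Counter(self.origin)
        total = len(self.origin) or 1
        return {agent: n / total for agent, n in counts.items()}

buf = ProvenanceBuffer()
buf.insert(0, "def f(x):\n", "developer")          # 10 chars
buf.insert(10, "    return x + 1\n", "synthesizer")  # 17 chars
print(buf.contributions())
```

Reviewing such contribution fractions over a project's history is one way to quantify how much of the code base came from automation versus the developer.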
Copyright and Acknowledgments
Copyright (C) 2020 GrammaTech, Inc.
This material is based upon work supported by the US Air Force,
AFRL/RIKE and DARPA under Contract No. FA8750-20-C-0208. Any
opinions, findings and conclusions or recommendations expressed in
this material are those of the author(s) and do not necessarily
reflect the views of the US Air Force, AFRL/RIKE or DARPA.