As far back as 2005, Nightmare has been the paradigm for editing data structures in GBAFE, dynamically generating an editor for any structure given only a .nmm module. Even as hacking has progressed, the structure of Nightmare modules has lived on: modules can be automatically generated from virtually any FEBuilderGBA editor window, and, more interestingly for our purposes, turned into CSV spreadsheets for use with Event Assembler buildfiles. Despite this, the best documentation on the structure of Nightmare modules dates to 2010, long before any modern hacking methods, and much of it is outdated when it comes to structuring modules for modern tools. This guide aims to give a concise description of the structure of Nightmare modules, how they are used in the modern day, and when it’s probably better to use something else.
Looking at a Nightmare module with no context, it’s not particularly clear what any of its contents mean. However, it is quite simple once you understand what’s going on.
Here is the beginning of a Nightmare module for FE8’s Chapter Data Table:
```
1
Chapter Table Editor by Sme
0x8B0890
79
148
ChapterList.txt
NULL

CP
0
4
NEHU
NULL
```
The first block is the file’s header. This contains various information defining the structure of the overall table:
The first line is the version number. This value does nothing for us in any modern context, but does need to exist.
The second line is the module name. This, similarly, does not contain any information we will use, but is a part of the header regardless.
The third line is the starting offset of the table in the ROM. When we later generate a CSV, this will be the value in the top-left cell, and is where the contents are written to by default.
The fourth line is the number of entries in the table. This will be used to determine the number of rows in a generated CSV.
The fifth line is the length of 1 table entry, in bytes. This value is integral to indexing the table properly.
The sixth line is the name of a file in the same directory that is read to determine the name of each table entry. This gets used for the initial values in the leftmost column of a generated CSV, but is not required; if not using one, put NULL on this line instead.
The seventh line is the name of a TBL file to be used when parsing and displaying values in text fields. You can always leave this as NULL.
We then have a line break: line breaks are not necessary, and are ignored when the file is parsed. However, for the purposes of readability, you should always put a line break after every section of the file.
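Put together, the header rules above can be sketched in a few lines of Python. This is an illustration only (function and key names are my own, not from any actual tool), showing how the seven header lines would be read while ignoring blank lines:

```python
def parse_nmm_header(lines):
    """Parse the 7-line header of a Nightmare module.

    `lines` is the file's contents split into lines; blank lines are
    skipped, matching how the format is parsed.
    """
    fields = [ln.strip() for ln in lines if ln.strip()]
    version, name, offset, count, size, names_file, tbl_file = fields[:7]
    return {
        "version": version,
        "name": name,
        "offset": int(offset, 16),   # e.g. 0x8B0890
        "entries": int(count),       # number of rows in a generated CSV
        "entry_size": int(size),     # length of one entry, in bytes
        "names_file": None if names_file == "NULL" else names_file,
        "tbl_file": None if tbl_file == "NULL" else tbl_file,
    }

# the example header from the Chapter Data Table module above
header = parse_nmm_header([
    "1", "Chapter Table Editor by Sme", "0x8B0890", "79", "148",
    "ChapterList.txt", "NULL",
])
```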
The rest of the Nightmare module is formatted exactly like the second block here, and repeated for however many fields there are in an entry. Each of these blocks will correspond to 1 column in a generated CSV.
- The first line is the name of the field. This is used as the value in the first row for the designated column in a generated CSV.
- The second line is the offset from the start of the entry that this field is located at, in bytes. Necessary for telling where a field is located.
- The third line is the length of the field, in bytes. Generally, no value in a table in ROM is going to be more than 4 bytes long. Note that because the smallest unit of measurement in offset and length is a byte, you cannot have fields that are less than a byte in size.
- The fourth line is the kind of value in this field. For Nightmare’s purposes, there are 7 values that can go here, but we only have to utilize 3 of them: NEHU, NEDU, and NEDS. These stand for Numeric Editbox Hex Unsigned, Numeric Editbox Decimal Unsigned, and Numeric Editbox Decimal Signed, respectively. (There is no NEHS.) In Nightmare, these define the kind of editor to display for this field, but for us, they only define the kind of value contained within the field.
- The fifth line is the name of the file used to get names corresponding to values for this field. NMM2CSV does not reference this file even if defined, so it should always be NULL.
For a NEHU field, the value in the generated CSV is written in hexadecimal. For the other two, the value is written in decimal. If you want negative values to generate properly, the field should be NEDS.
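To make the three type codes concrete, here is a rough Python sketch of how a raw field value would end up rendered in a CSV cell. The function name is hypothetical; this mirrors the behavior described above, not N2C’s actual source:

```python
def render_cell(raw, kind):
    """Render a raw little-endian field value into a CSV cell string.

    `kind` is one of the three codes we use: NEHU prints unsigned
    hexadecimal, NEDU unsigned decimal, NEDS signed (two's-complement)
    decimal.
    """
    value = int.from_bytes(raw, "little")
    if kind == "NEHU":
        return f"0x{value:X}"
    if kind == "NEDU":
        return str(value)
    if kind == "NEDS":
        bits = 8 * len(raw)
        if value >= 1 << (bits - 1):  # reinterpret as two's complement
            value -= 1 << bits
        return str(value)
    raise ValueError(f"unsupported field type: {kind}")
```

Note how the same byte 0xFF comes out as 255 under NEDU but -1 under NEDS, which is why signed fields must be marked NEDS.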
After we have constructed a Nightmare module, our next step is to generate an editable CSV from it. We do this using @circleseverywhere’s NMM2CSV. Given a clean ROM and a folder of Nightmare modules, this will generate a CSV from each module in the same location. You only have to run N2C once: running it again will replace all existing CSVs with fresh ones, overwriting all of your changes to them.
Once we have our output files, the next steps diverge based on how exactly you want to process CSVs for use in an EA buildfile. In this section, we will be using CSV2EA, from the same link as above. For other options, see 3: Further Reading.
Opening the generated CSV in any spreadsheet viewer, we can see the aforementioned data in specific locations: the top-left cell contains the base offset of the table, the top row contains column labels taken from the field names in the Nightmare module itself, and the leftmost column contains row labels taken from the file defined in the Nightmare module’s header. The value in every other cell is read from the clean ROM given to N2C, at the offset and of the length specified for its field.
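In other words, every cell maps to a fixed, computable ROM location. A small sketch of the indexing, with names assumed for illustration:

```python
def cell_offset(table_base, entry_size, row, field_offset):
    """ROM offset of the cell for entry `row`, field at `field_offset`."""
    return table_base + row * entry_size + field_offset

def read_cell(rom, table_base, entry_size, row, field_offset, field_size):
    """Read one little-endian field of one entry out of a ROM image."""
    start = cell_offset(table_base, entry_size, row, field_offset)
    return int.from_bytes(rom[start:start + field_size], "little")

# tiny fake ROM: two 4-byte entries; entry 1's first field is 0x1234
rom = bytes([0, 0, 0, 0, 0x34, 0x12, 0, 0])
value = read_cell(rom, table_base=0, entry_size=4, row=1,
                  field_offset=0, field_size=2)
```

For example, with FE8’s Chapter Data Table at 0x8B0890 and 148-byte entries, a field at offset 4 of entry 2 lives at 0x8B0890 + 2*148 + 4.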
One mechanic of Nightmare modules is that they do not need to have a field for every single byte within the range of an entry, but when inserting data we do need some value to write to every single byte of each entry. This leads to N2C creating UNKNOWN columns, which represent the space between defined fields in the Nightmare module. Ideally, you would cover every byte of an entry within the module when you write it, but should something be missed and these be generated, you’re best off either amending the Nightmare module to account for the space and regenerating the CSV or just leaving them be.
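The gaps that would surface as UNKNOWN columns can be computed directly from the module’s field list. A sketch with assumed names, not N2C’s actual code:

```python
def unknown_ranges(fields, entry_size):
    """Find byte ranges of an entry not covered by any (offset, length) field.

    Returns (offset, length) pairs for the gaps; these are the spans
    that would surface as UNKNOWN columns.
    """
    covered = [False] * entry_size
    for offset, length in fields:
        for i in range(offset, offset + length):
            covered[i] = True
    gaps, start = [], None
    for i, is_covered in enumerate(covered + [True]):  # sentinel closes a trailing gap
        if not is_covered and start is None:
            start = i
        elif is_covered and start is not None:
            gaps.append((start, i - start))
            start = None
    return gaps
```

A module defining fields at (0, 4) and (8, 2) in a 12-byte entry leaves two gaps, at offsets 4 and 10; a module covering all 12 bytes leaves none.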
The values in each cell get written verbatim to an .event installer as numeric values, so you can use definitions instead of raw numbers to improve the readability of your CSVs, and I would recommend doing so.
The top-left cell of the CSV contains the base offset of the table, as stated earlier. But what if you want to expand the size of the table, or move it somewhere else in the ROM? If you replace the contents of this cell with INLINE MyTableName, the generated installer will automatically repoint the table from the old location specified in the NMM to an EA label called MyTableName, which can then be referenced wherever else it is needed.
Once you’ve edited your CSV, the next step is to run C2EA. Similar to N2C, this will take every CSV it finds in a given folder and its subfolders and produce a .event file, as well as a master table installer .event file that automatically includes all of the individually generated ones. Note that for C2EA to work, you do need to keep the Nightmare modules in the same location as the CSVs. Once it’s finished generating the installers, all you have to do is #include the master table installer in your buildfile and it will install the contents of all of your tables.
There have been a few attempts at GUI editors to replace CSVs over the years, but to my knowledge none of them were ever completed or released. Something that does aim to improve on C2EA and has been completed and released is @Snakey1’s Table Manager, which allows you to define composited tables with fields that get written to entirely different data structures. It’s a bit more complicated to set up than C2EA, but if you want to give it a shot it’s always an option.
Despite just writing an entire guide on how to do so, I would actually recommend not using Nightmare modules and CSVs for your buildfile tables in the vast majority of cases. Setting up a Nightmare module, generating a CSV from it, and running C2EA every time you make a change is an involved and time-consuming process that is very much not worth it when the data structure you’re working with has only 1 or 2 fields or requires very little editing. Instead, I would recommend doing these sorts of data structures entirely within EA.
For tables, something like this:
```
PUSH
ORG $<location_of_table_pointer_1>
POIN NewTableLocation
ORG $<location_of_table_pointer_2>
POIN NewTableLocation
…
POP

NewTableLocation:
FILL (sizeOfEntry * numberOfEntries)

#define TableEntry(id,value1,value2,valueEtc) "PUSH; ORG NewTableLocation+(sizeOfEntry*id); BYTE value1 value2; SHORT valueEtc; POP"
```
For copying over vanilla entries, if you’re unlikely to change most or all of them, you can take the raw contents of the entire vanilla table and put them into a binary file that you #incbin at the new table location instead of FILLing it, or you can write the entries back using the macro. This can be set up in about a minute, effectively accomplishes the same thing as a CSV, and doesn’t require running C2EA before EA whenever you make changes.
For lists, something very similar:
```
PUSH
ORG $<location_of_table_pointer_1>
POIN NewTableLocation
ORG $<location_of_table_pointer_2>
POIN NewTableLocation
…
POP

#define TableEntry(value1,value2,valueEtc) "BYTE value1 value2; SHORT valueEtc"
#define TableTerminator "WORD 0"

NewTableLocation:
TableEntry(1,2,3)
TableEntry(4,5,6)
…
TableTerminator
```
Because lists are variable-length and terminated, rather than reserving space for the table then writing to it afterwards with a macro, we write directly following the label and end with the terminator. Same as before, very quick to set up and equally as useful while being more convenient than a CSV.
For both of these options, you’ll have to find the pointers yourself. Open the vanilla ROM in a hex editor and search for the base address of the table in vanilla, with reversed byte order and followed by 0x08, e.g. 0xC0FFEE -> EE FF C0 08. For every match you find, ORG to its location and POIN to your new label.
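If you would rather script the search, here is a quick Python sketch of the same process, assuming a raw ROM dump loaded as bytes (the 0x08000000 base is why ROM pointers end in 08):

```python
def find_pointers(rom, table_address):
    """Find every offset in `rom` holding a pointer to `table_address`.

    The GBA maps its ROM at 0x08000000, so a pointer to ROM offset
    0xC0FFEE is stored little-endian as EE FF C0 08.
    """
    needle = (0x08000000 + table_address).to_bytes(4, "little")
    hits, start = [], 0
    while (idx := rom.find(needle, start)) != -1:
        hits.append(idx)
        start = idx + 1
    return hits

# fake ROM containing two pointers to offset 0xC0FFEE
rom = bytes.fromhex("0011EEFFC00822EEFFC008")
hits = find_pointers(rom, 0xC0FFEE)
```

Each hit is an offset you would ORG to and replace with a POIN to your new label.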
The only time I would recommend using Nightmare modules is when each entry of a table has a lot of fields. Setting up a 50-argument macro for writing to a table is perhaps even more time-consuming than setting up a CSV, and errors are much easier to catch in a spreadsheet than in a macro invocation. In vanilla, these constraints are pretty much only met by the Chapter Data Table, Character Table, Class Table, Item Table, and Item Spell Association List, and as such, these 5 data structures are the sum total of what I would recommend using CSVs for.
As always, please let me know if you spot any errors!