Friday, December 2, 2016

Investigative Capabilities in a Digital World - My Responses

A thread on reddit pointed to Canada's consultation on what sort of legislative framework they should create to facilitate investigations in the digital world.

Many of the questions are leading and borderline deceptive in their phrasing, but if we're being charitable, this is a good opportunity to inform policy makers. To that end, these are my responses:

How can the Government address challenges to law enforcement and national security investigations posed by the evolving technological landscape in a manner that is consistent with Canadian values, including respect for privacy, provision of security and the protection of economic interests?

Evidence-based policy suggests you first demonstrate the actual need for any such legislative changes, and further, that the proposed changes actually solve the problems they are intended to solve. Impartial third-party studies should be required for any proposed measures. Speaking as a computer scientist with some expertise in computer security, I can say that legislators simply do not have the requisite understanding to judge the need for, or the effectiveness of, any proposed measures.

The greatest dangers to Canadians are automated software systems operating on behalf of criminals looking to compromise servers and personal devices. The most effective measures to protect Canadians would thus secure their devices against any such intrusions. We enjoy physical security due to law enforcement patrols to prevent crime, the justice system to punish, reform and deter criminals, and a good public education system that empowers people to provide for themselves such that they do not need to resort to crime.

No truly effective law enforcement or justice system is possible for online activities, given that criminal activity affecting Canadians can originate from anywhere on Earth. The Internet is truly the wild west, and like the wild west, safety for all Canadians can only be achieved by empowering and educating individuals in how to secure their own devices.

On a legislative level, a modicum of regulation requiring the software industry to use only secure software systems when dealing with personal information, and to develop software using safer, more secure techniques, such as safer programming languages, would significantly reduce all Canadians' vulnerability to online criminals.

In the physical world, if the police obtain a search warrant from a judge to enter your home to conduct an investigation, they are authorized to access your home. Should investigative agencies operate any differently in the digital world?

Why should they? In each case, our Charter guarantees the security of the person against unreasonable search and seizure. Law enforcement is *supposed* to be inconvenienced if it wants to violate someone's rights. That's the whole point of the Charter: to safeguard against abuse of power.

Currently, investigative agencies have tools in the digital world similar to those in the physical world. As this document shows, there is concern that these tools may not be as effective in the digital world as in the physical world. Should the Government update these tools to better support digital/online investigations?

Of course, authorized use of such tools ought to be as effective as possible, so long as "these tools" do not necessitate compromising citizens' everyday software in order to be effective. For instance, these tools cannot require backdoors into people's software. The tools used by law enforcement *will* leak into the criminal world, and you would place every citizen's bank accounts and private life in jeopardy if you require backdoors.

Is your expectation of privacy different in the digital world than in the physical world?

Of course. Physical security and digital security are necessarily different because physical objects and digital objects have completely different properties. For example, it's trivial to copy a digital item, but not so easy to copy a physical one. It's thus impossible to enforce the same sorts of property rights for digital items as we enjoy for physical items. You can securely share a private physical photo without too much concern that it will be disseminated globally, but as many people have learned, digital photos are easily leaked and spread online.

Since the Spencer decision, police and national security agencies have had difficulty obtaining BSI in a timely and efficient manner. This has limited their ability to carry out their mandates, including law enforcement's investigation of crimes. If the Government developed legislation to respond to this problem, under what circumstances should BSI (such as name, address, telephone number and email address) be available to these agencies? For example, some circumstances may include, but are not limited to: emergency circumstances, to help find a missing person, if there is suspicion of a crime, to further an investigative lead, etc…

In which *specific* cases has the absence of timely access to BSI hampered a time-sensitive investigation? I hear this "fact" repeated constantly, but no specifics are ever provided. If no compelling case exists, or if alternate investigative methods can and have yielded the same results, then rapid subscriber access is an unnecessary infringement on our rights and freedoms, and frankly, a waste of legislators' time.

If such specifics are too sensitive to disclose, then they can be analyzed by an impartial, third-party investigation consisting of computer experts to determine their validity. I know of no such cases, nor any such investigation. If such an investigation justifies rapid subscriber access, then rather than infringing our rights and eliminating all oversight, establish a court that's dedicated solely to rapid processing of such applications, and cycle the judges sitting on these courts every year to prevent overly favourable relationships with those requesting warrants. No blank cheques, and any such requests must still go through a court.

Finally, subscriber information does not identify a responsible party, so relying on it will generate many false leads and waste investigative resources in time-sensitive investigations. For instance, today's deplorable software security makes it fairly easy to hijack someone else's wifi signal, so the false belief that an IP address identifies who participated in an illegal activity will lead investigators to waste time pursuing the subscriber who was assigned that IP address. Law enforcement needs to be trained to understand these realities.

Do you consider your basic identifying information identified through BSI (such as name, home address, phone number and email address) to be as private as the contents of your emails? your personal diary? your financial records? your medical records? Why or why not?

Of course it's not as private. My kitchen is also less private than my bedroom, but law enforcement still requires a warrant to access it.

Do you see a difference between the police having access to your name, home address and phone number, and the police having access to your Internet address, such as your IP address or email address?

Internet address, home address, phone number and email address are all logical addresses via which I can be personally reached in some way, and behind which I hold significant personal and private information. My name is different from all of the above: it is not unique, and it does not provide a way to reach me personally.

The Government has made previous attempts to enact interception capability legislation. This legislation would have required domestic communications service providers to create and maintain networks that would be technically capable of intercepting communications if a court order authorized the interception. These legislative proposals were controversial with Canadians. Some were concerned about privacy intrusions. As well, the Canadian communications industry was concerned about how such laws might affect it.

As a computer scientist, I can tell you right now that reliable and complete intercept capability is impossible. If you legislate any such requirements, then those wishing to hide their activities have any number of mechanisms available to *undetectably* circumvent it. And not just via encryption; see steganography, for instance.

Given this impossibility, and the fact that IP addresses cannot identify the responsible party, but merely a means by which a criminal activity took place, interception capabilities are likely an unnecessary expense that would yield too many false positives; resources are probably better spent on more reliable and effective investigative techniques.

Furthermore, the existence of such networks makes them ripe targets for foreign governments and criminals. The potential for abuse, and the vulnerabilities created, are simply too great, however well-intentioned the system.

Should Canada's laws help to ensure that consistent interception capabilities are available through domestic communications service provider networks when a court order authorizing interception is granted by the courts?

Yes, but I still question the effectiveness of such measures, and the cost-benefit ratio of creating this infrastructure.

If the Government were to consider options to address the challenges encryption poses in law enforcement and national security investigations, in what circumstances, if any, should investigators have the ability to compel individuals or companies to assist with decryption?

Never. The most significant danger to Canadians is online criminal activity. Software that compromises people's devices to leak personal information, for identity theft among other criminal purposes, is already being traded online. Any attempt to weaken or circumvent encryption puts every Canadian in danger for very little gain considering messages exchanged between criminals can be hidden via other undetectable means (see discussion of Intercept Capability above).

Canadians as a whole are better served by *requiring* devices to implement proper encryption that cannot be circumvented. Any security researcher will tell you exactly the same thing: an encrypted device with any kind of backdoor cannot be secured. You cannot compromise every Canadian's entire life just to assist a small number of investigations.

How can law enforcement and national security agencies reduce the effectiveness of encryption for individuals and organizations involved in crime or threats to the security of Canada, yet not limit the beneficial uses of encryption by those not involved in illegal activities?

They literally cannot. It's impossible. Any computer scientist who has studied security will tell you the same. Even the best design I have seen to date, which holds decryption keys in hardware not accessible to software running on the device, and which looks pretty safe at first glance, is rife with vulnerabilities when you consider that all of our hardware is built overseas under the purview of foreign governments and in manufacturing plants with completely unknown on-premise security measures. Hardware-level security breaches have already occurred even though there's plenty of other lower-hanging fruit.

Should the law require Canadian service providers to keep telecommunications data for a certain period to ensure that it is available if law enforcement and national security agencies need it for their investigations and a court authorizes access?

Retaining information is fine, so long as that data is required to be appropriately secured from access without a warrant. In particular, telecommunication providers should not be permitted to sell this information, and should be required to conduct third-party audits of the security practices used to archive this data.

Even so, the utility of data retention will probably decrease in the coming years as more services move to the internet, and more information exchanged online becomes encrypted. Already, people can switch most of their telephony services to online voice chat services, thus reducing the effectiveness of telephony data retention.

If the Government of Canada were to enact a general data retention requirement, what type of data should be included or excluded? How long should this information be kept?

I can't imagine a use for data more than 5 years old. Retaining all Internet traffic is impossible, but this is what would be needed for truly effective analysis.

However, full retention is an unacceptable security risk. Even with a public key infrastructure, whereby all such data were stored in encrypted form and the decryption keys were held by the courts to be used only when authorized, securing those keys against any possible disclosure is probably impossibly difficult.

To balance the security risks against realizable data capture abilities, retaining only metadata for 5 years and encrypting it all using the above public key infrastructure would be acceptable.

Editorial Note: I have little expectation that such public key protections would be employed though, and without them, I'm not in favour of any data retention at all.

Sunday, November 27, 2016

Semantic Tab Panels via HTML Tables

There are plenty of tutorials online for creating tabbed panels in HTML documents, some using JavaScript, some using pure CSS tricks. Most of the approaches seem to assume that a list of some type is the appropriate element to use for a tabbed interface, but I'd argue that lists are not semantically descriptive of tabbed interfaces. They either include the main tab content along with the tab name, like so:

<ul class="tabs">
    <li class="tab-active"><a href="">Active</a> Active tab content goes here</li>
    <li><a href="">Second</a> Inactive second tab content goes here</li>
    <li><a href="">Third</a></li>
</ul>

Or the tab content has no semantic relationship with the tabs that control their display, like:

<ul class="tabs">
    <li class="tab-active"><a href="#first">Active</a></li>
    <li><a href="#second">Second</a></li>
    <li><a href="#third">Third</a></li>
</ul>
<div class="tab-body">
    <div id="first">Active tab content</div>
    <div id="second">Second tab content</div>
    <div id="third">Third tab content</div>
</div>

It's simply bizarre to mix the tab name with the main tab content as in the first case. The tab name controls interaction with the tab content; it's not part of the tab's content. They are semantically distinct, and so should be distinct in the markup.

In the second case, name and content are separated, but not in a semantically meaningful way, which makes the relationship invisible to screen readers unless you add extra annotations. The connection between tab name and tab content is completely indirect, since it depends upon interpreting the meaning of a link's target as referring to an element in the current document. This is obviously only one approach to separating tab name from content, but every approach I've seen so far makes a similar tradeoff: no technique expresses the relationship between tab name and tab content using markup that properly describes it in a semantically meaningful way.

So what is the semantic relationship between tab name and tab content? Well clearly the tab name is a heading of some kind that summarizes and/or meaningfully describes the content it references.

Secondly, a set of tabs is logically related in some larger context, and so markup that also expresses a grouping is the most semantically meaningful. For instance, expressing tab content as a set of divs doesn't express this association, nor does expressing tab names as standard headings H1..H6.

However, there is an HTML element that provides headings and semantically associates those headings with its contents: tables!

<table class="tabs">
    <thead><tr>
        <th class="active" id="first">Active</th>
        <th id="Second">Second</th>
        <th id="Third">Third</th></tr>
    </thead>
    <tbody>
        <tr>
            <td headers="first">Active tab content</td>
            <td headers="second">Second tab content</td>
            <td headers="third">Third tab content</td>
        </tr>
    </tbody>
</table>

The table headings are the tab names, and the table cells contain their contents. Each cell designates the header/tab name to which it belongs, which is meaningful semantic markup for tabbed interfaces. Now all that remains is providing the interactive nature of tabbed interfaces. I think most of the standard techniques will work just as well here, but I used a simple javascript function and some basic CSS you can see in this jsfiddle. Basically, the CSS sets the backgrounds, borders and mouseover behaviour needed to mimic the familiar tab layout:

body {background-color:white;}
table.tabs{border:none; border-collapse:collapse}
table.tabs>thead>tr>th{text-align: left;padding:15px;position:relative;bottom:-1px;background-color:white}
table.tabs>thead>tr>th:hover{background-color:lightgrey; cursor:pointer}
table.tabs>thead>tr>th.active:hover{background-color:white}
table.tabs>thead{border-bottom:1px solid}
table.tabs>thead>tr>th.active{border:1px solid;border-bottom:none}

Then a simple javascript function run at document load progressively enhances the table to act like a tabbed interface:

<script type="text/javascript">
    function regTabs() {
        [].forEach.call(document.querySelectorAll('table.tabs td[headers]'), function (x) {
            var t = x.parentElement.parentElement; // table or tbody tracks active tab
            var th = document.getElementById(x.attributes.headers.value);
            if (th != null) {
                x.th = th;
                th.style.float = 'left';
                th.addEventListener('click', function() {
                    t.activeTab.style.display = 'none';
                    t.activeTab.th.classList.remove('active');
                    x.style.display = '';
                    th.classList.add('active');
                    t.activeTab = x;
                });
            }
            if (th == null || !th.classList.contains('active'))
                x.style.display = 'none';
            else
                t.activeTab = x;
        });
    }
    regTabs();
</script>

Basically, the headers are all floated left and the active tab content is tracked as a property of the table. The headings capture the click event and modify the display styling property of the tab content and the active tab name/heading.

I'm no CSS or javascript wizard, so this can all probably be improved, but it suffices to make the point that mimicking tabbed interfaces using tables is a good semantic fit.

Given the use of Element.classList, I believe this requires IE10 or up, but it certainly could be ported to older browsers without too much difficulty.

Edit: a redditor pointed out that description lists, dl/dt/dd, are also appropriate semantic markup for tabbed panels. These would indeed work almost as well as tables, provided you don't try to use multiple dt elements for any given dd. I've created a jsfiddle demonstrating semantic tabs using description lists, but it requires strange, non-semantic arrangement of the markup to work easily.

Sunday, October 18, 2015

Versioning Domain Entities with Ordinary SQL

In any system with mutable entities saved in an object store, it's inevitable that you'll want to audit the changes made to a particular entity. Explicitly versioning your entities solves this problem, and it's often pretty useful to end users of a sophisticated application, say for undo purposes. However, it's not always so clear how to transform a mutable entity into a versioned entity.

There are also a number of complications to versioning mutable objects:

  1. Handling parent-child relations such that a child change does not propagate to every parent
  2. Supporting efficient queries on history
  3. Supporting efficient queries for the newest version
  4. Space complexity of data representation
  5. Concurrent changes

Some of the above properties are also in natural tension, ie. supporting efficient queries often requires storing more data, which worsens space complexity. I'm going to provide an overview here of a very simple schema for versioned entities with the following properties:

  1. Queries on history are just as efficient as queries on current data
  2. Queries on history have the same form as queries on current data, ie. no separate history table that must be handled specially -- in fact, you can use the exact same queries for both history and current data, thus eliminating all query duplication!
  3. Parents and children can evolve independently, ie. data update is "strictly disjoint access parallel" in transactional concurrency parlance: as long as two updates don't update the same object, they will both commit successfully

Schema

The schema changes required are quite straightforward for any entity with a primary key (which should be all of them!):

  1. Add a version number column of type int or long: this is a monotonically increasing version number, where version=0 is used when the object is first created, version=1 is after the first change is committed, etc.
  2. Ensure the PK index orders the version number in DESCENDING order, because the highest version number is the most current version
  3. Change the primary key to be composite over the existing PK and the new version number -- the original PK is now called the "logical id"
  4. Add a revisionDate column storing the date the version was committed
  5. Add an index over the logical id and the revisionDate, again with revisionDate in DESCENDING order

That's it! The original entity's PK is now a logical id rather than an actual instance id, so accessing a particular version of an entity now requires both the logical id and the version number. The latest version is simply the entry with the largest version number.
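
To make that concrete, here's a minimal sketch of what the versioned Foo table might look like in SQL Server flavoured DDL, matching the TOP 1/GETDATE() style used in the queries below (the name column is just an illustrative placeholder for the entity's real columns):

CREATE TABLE Foo
(
    id           INT           NOT NULL, -- the logical id (the original PK)
    version      INT           NOT NULL, -- 0 on creation, incremented on each change
    revisionDate DATETIME      NOT NULL, -- when this version was committed
    name         NVARCHAR(100) NOT NULL, -- illustrative entity column
    CONSTRAINT PK_Foo PRIMARY KEY (id, version DESC) -- newest version sorts first
)

CREATE INDEX IX_Foo_Revision ON Foo (id, revisionDate DESC)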

Queries

Supposing we have an entity Foo whose PK is called "id", and :param is the syntax for a named query parameter. Ordinary queries on entities now become the following queries on versioned entities:

  • Insert a new entity:
    INSERT INTO Foo(id, ..., version, revisionDate)
    VALUES (:id, ..., 0, GETDATE())
  • Get the most current version:
    SELECT TOP 1 * FROM Foo
    WHERE id=:id
    ORDER BY version DESC
  • Get a version before a specific date/time:
    SELECT TOP 1 * FROM Foo
    WHERE id=:id AND revisionDate <= :rev
    ORDER BY version DESC
  • Update an entity:
    INSERT INTO Foo(id, ..., version, revisionDate)
    VALUES (:id, ..., :cur_version_plus_1, GETDATE())

Updates on entities are now inserts, so the entities at the data store level are now effectively immutable and the entity table is an append-only log of all changes to those entity types. Since the indexes are ordered so that the latest version is always first, the impact on accessing the most current version should be minimal. Accessing older versions is similarly easy, consisting of a query on the revisionDate column.

We use the revisionDate and not the version number when querying over history to easily support queries over more than one entity. Since each entity evolves its version independently of all others, the version numbers of two connected entities, say Foo[id=1] and Foo[id=2], are completely unrelated. The former might have version=999, and the latter version=1. The change that caused Foo[id=1] to increment its version from 998 to 999 could have also been the one to change Foo[id=2] from version 0 to 1. So if you look at Foo[id=1] at version=998, the only way to see the correct version of Foo[id=2] is to return the latest version whose revisionDate is <= Foo[id=1].revisionDate for version=998.

Fortunately, this makes the queries extremely simple, since a complicated query for a whole set of interconnected entities needs only a single date parameter. In other words, you can compose two independent queries using the same revisionDate parameter name via simple concatenation into a multiquery, and it just works.
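
For example, here's a sketch of such a multiquery fetching two different Foo entities as of the same point in history, simply by reusing the one :rev parameter (:id1 and :id2 are just illustrative parameter names):

SELECT TOP 1 * FROM Foo
WHERE id=:id1 AND revisionDate <= :rev
ORDER BY version DESC

SELECT TOP 1 * FROM Foo
WHERE id=:id2 AND revisionDate <= :rev
ORDER BY version DESC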

Furthermore, your UI then only needs to pass around an optional revisionDate parameter to switch between queries on the latest data and queries on history. In fact, switching the query isn't even needed, because you can use the same query for both!

SELECT TOP 1 * FROM Foo
WHERE id=:id AND (:rev IS NULL OR revisionDate <= :rev)
ORDER BY version DESC

As promised, this query unifies the queries for the most current version and any historical version of an entity. If the :rev parameter is set to NULL, then the most recent version is returned, and the query optimizer can easily eliminate the redundant revisionDate clause, so there should be no performance impact. If :rev is present, the redundant NULL check is eliminated by the query optimizer instead, and the last change before the given date/time is returned. Alternatively, you can always use the query with the revisionDate and simply pass in DateTime.Now, which will always return the latest result.

Concurrency

Being strictly disjoint access parallel, this schema on mutable entities is pretty much as good as it gets concurrency-wise. The entity table is essentially an append-only log of changes, and the new composite primary key ensures that any entity can undergo only one change event at a time. This ensures that any invariants in the application's domain model are preserved once committed to the entity store.

This is also the ideal optimistic concurrency model for any CRUD application.
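
For instance, here's a sketch of how a conflicting concurrent update gets rejected, assuming two transactions have both read version 4 of the same Foo as current (the values are illustrative):

-- transaction A commits its change first:
INSERT INTO Foo(id, ..., version, revisionDate)
VALUES (1, ..., 5, GETDATE())

-- transaction B then attempts its own change to the same entity and fails
-- with a primary key violation on (id, version), so it must re-read and retry:
INSERT INTO Foo(id, ..., version, revisionDate)
VALUES (1, ..., 5, GETDATE())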

Space Complexity

The space complexity is obviously sub-optimal since every version is stored. However, managing this space is a breeze, since you can easily write a query to delete all entity history before the current version, or keep the N latest versions, or basically any storage policy that makes sense for your business.
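
For instance, here's a minimal sketch of a pruning query that keeps only each entity's current version, assuming the Foo schema above:

DELETE FROM Foo
WHERE version < (SELECT MAX(f.version) FROM Foo f WHERE f.id = Foo.id)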

One serious concern is "large" columns in an entity, say an unbounded text column, or binary data. In this case, you should perform an additional refactoring of your schema to model git's way of doing things: add a Blob table that holds the data indexed by its hash, then just store the hash in the entity table. This keeps it compact, and ensures you only ever store one version of a binary blob, which you should probably be doing anyway.
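
A minimal sketch of that refactoring, with illustrative column names and assuming a SHA-256 content hash:

CREATE TABLE Blob
(
    hash    BINARY(32)     NOT NULL PRIMARY KEY, -- e.g. SHA-256 of the content
    content VARBINARY(MAX) NOT NULL
)

-- the entity then references its large content by hash instead of embedding it:
ALTER TABLE Foo ADD contentHash BINARY(32) NULL REFERENCES Blob(hash)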

Variations

There are a number of variations on this schema, like normalizing the revision date into a separate Revisions table to which you can add a userId to blame for the change, etc. But then the Revisions table becomes a central bottleneck to insert performance and updates are no longer strictly disjoint access parallel. Furthermore, all select queries must now join on this Revisions table to find the revision date. I think the schema I've laid out provides the optimal performance tradeoff with little enough space overhead.

Versioned ORM

Standardizing this schema as the default operating mode for an ORM would be ideal. This sort of ubiquitous schema management should be automatic, and now that ORMs are moving to code-first designs where they manage the whole schema anyway, this should be a snap, right?

The API interacting with the ORM remains largely unchanged, but all change submissions transparently become inserts, and all select queries automatically have the universal query form described above. A query on history would then simply be a query that's started with a date or version parameter, ie. instead of context.Foo.Where(...), it would be something like context.Foo.Before(DateTime?).Where(...).

Summary

I'm not the first to describe this schema for versioning, but I haven't seen a description quite so comprehensive, detailing the queries and its concurrency properties.

These days most of the buzz is around CQRS and event sourcing. This schema has some connections to event sourcing, since entity tables are now append-only logs with the entire history remaining intact. However, the schema of the entity change events aren't domain events as they are in event sourcing.

CRUD applications don't get much attention anymore, but I agree with Martin Fowler that most simple domains are still better served by a CRUD model than by CQRS, at least given our current tools. Domain entities usually define proper concurrency boundaries (and if they don't, your domain model is probably wrong!), so this schema is a natural fit that still ensures a domain model can naively enforce its invariants in ordinary code, and if those changes succeed then those invariants are preserved in the entity store.

Sunday, October 4, 2015

Efficient Curried Application

I just wanted to jot down some thoughts on compiling eval/apply curried application using an efficient register-based calling convention. This translation is actually quite simple once you get the trick. Leroy's Zinc abstract machine showed the way: evaluate arguments right-to-left instead of the typical left-to-right. The ZAM wrote these arguments to a stack, but in our case we want them to go to the register file first.

So we evaluate arguments right-to-left and write them into registers left-to-right, ie. argN goes in reg0, argN-1 goes in reg1, ... arg0 goes in regN. At some threshold, we'll run out of registers. If your program is in direct-style with a stack, this threshold is often defined by the callee-save registers of the native calling convention. If your program is in CPS form, then you can use the entire register file.

Once you run out of registers, you have to spill the remaining arguments to the stack. However, the stack is simply a register pointing to some base address, with all other values accessed at an offset from that base. But so is the closure record! If you lay out your closure record in memory so it has the same layout as the stack frame would, then you can just treat it like a small stack. This can save many memory access operations moving data between closures and registers.

Suppose our register spill threshold is 2 registers (reg0 and reg1), with a third register left for the remaining arguments (reg2), and consider the following C-style function:

int foo(int x, void *y, int z);

Here we have three arguments, one of which will need to be spilled to the stack since we can only allocate two when calling foo. Arguments z and y will go in registers, while argument x will go on the stack. However, the compilation of foo will require a certain layout in memory for optimal efficiency as well (assuming 32-bit pointers):

foo_papp2:
  loadi reg0, reg2 // reg0 = *reg2
  subi reg2, 4     // reg2 -= sizeof(int)
foo_papp1:
  loadi reg1, reg2 // reg1 = *reg2
  subi reg2, 4     // reg2 -= sizeof(void*)
foo:
  ...              // foo: reg0=arg2, reg1=arg1, reg2=&struct{arg0}

Hopefully the pseudo-assembly is clear. Function foo gets "prologues", one per argument, which are used for partial applications. Each prologue loads an argument from the "stack"/closure record into the registers and decrements the pointer so it points to the next argument. By the time all the registers are used up, you should have exactly the "stack frame" that you need left, and the register-passed arguments are all in their proper place.

For instance, when you partially apply foo to 1 argument, the function pointer value of the closure record is actually pointing to foo_papp1, and when you partially apply with two arguments, it's pointing to foo_papp2. This pattern is the same for any number of arguments, and the code will all have the same size for each argument, which we will need later when building partially applied closures.

Now consider the following curried applications of foo:

int(void*,int) foo_1 = foo(0);
int(int) foo_2 = foo(0, somePointer);

Hopefully the pseudo C-style closure syntax is clear, ie. int(char) is a closure taking a char and returning an int. The above are two different partial applications, foo_1 which applies only the first argument, and foo_2 which partially applies the first and second arguments. These are what the closures will look like in memory:

foo_1 →
    function = &foo_papp1
    arity    = 2
    applied  = 1
    x        = 0

foo_2 →
    function = &foo_papp2
    arity    = 1
    applied  = 2
    y        = somePointer
    x        = 0

Now when you try to apply values foo_1 and foo_2 to the remaining arguments, this actually performs a dynamic check to see how many arguments are required. For instance, fully applying a closure with the signature int(int, int) elaborates to the following pseudo-code:

obj_t returnValue;
RETRY:
switch (closure→arity) {
case 0:
  closure = closure→function();
  goto RETRY;
case 2:
  returnValue = closure→function(arg0, arg1);
  // reg0=arg1, reg1=arg0, reg2=&closure->arity + 1
  break;
case 1:
  closure = closure→function(arg0);
  returnValue = closure→function(arg1);
  break;
}

You have to dynamically check if you're applying all remaining arguments. I handle the case of arity=0 to show that intermediate closures do exist, but you don't typically need that specific case.

When you're performing a partial application, you also have to check the arity and build intermediate closures referencing the proper function address, which changes depending on the arity. Given a closure to which you're applying a partial set of arguments, you have to adjust the new closure to point to the proper entry point. So given a 3 argument closure to which you're applying n < 3 arguments:

closure nclo;
obj_t returnValue;
if (closure→arity == n) {
  // full application: all remaining arguments are supplied
  returnValue = closure→function(arg0, ..., argn);
} else {
  // partial application: build a new closure holding the extra arguments
  nclo = malloc(2 * sizeof(void*) + (closure→applied + n) * sizeof(void*));
  nclo→arity = closure→arity - n;
  nclo→applied = closure→applied + n;
  nclo→function = closure→function - n * SIZE_OF_FUNCTION_PROLOGUE;  // entry point that loads the newly applied arguments
  nclo→arg0 = arg0;
  ...
  nclo→argn = argn;
  // copy the previously applied arguments (stored just past the applied field) after the new ones
  memcpy(&nclo→argn + 1, &closure→applied + 1, closure→applied * sizeof(void*));
}

The most general case to handle is when you're in fully polymorphic code for which you cannot know if applying the closure you have is a partial application. This is a combination of the last two elaborations, but still requires only a simple switch statement to handle all cases.

Now the most boring case: when a function is not a closure and it's being fully applied with all arguments at a call site. In this case you fill the registers as usual, then you push the remaining arguments on the real program stack, and pass the pointer to that activation frame as the "stack" argument. Simple!

This is only a high-level overview for myself, so no doubt I've skipped some important details, or gotten them wrong while jotting these notes down. Please let me know if you spot something!

Summary

Hopefully that didn't ramble so much that the core idea was obscured. The core idea is that all closures are functions taking a fixed number of arguments defined by the number of caller-save registers. These arguments are passed in registers, and a last register is reserved for a "stack frame" holding the remaining arguments. The closure record can act as this stack frame, so you don't have to copy values between records and the stack on closure application.

Furthermore, each function has a "prologue" generated for each argument laid out prior to the actual function being executed in memory. These prologues are used when applying partially applied functions, and they load any previously applied arguments from the closure record into registers, assuming all the reserved registers aren't already used.

This is basically an implementation of the eval/apply strategy, with a few tweaks to reduce the amount of redundant code generated and reuse as much memory as possible. I don't think there's anything particularly novel, but I haven't seen all the details laid out like this (although I may of course have omitted some details I consider "obvious", even if they aren't to others). If you have any further suggestions, I'd love to hear them!

Limitations

I don't discuss any of the complexities surrounding unboxed types of non-pointer size, which considerably complicates calling conventions. Most functional languages require a universal representation for this reason, usually of pointer size, which is what I assume here. See my series on Garbage Collection Representations to get a feel for how these values translate to machine code, and how this would impact closure representations.

Wednesday, August 26, 2015

Algebra.NET: A Simple Algebra eDSL for .NET

Algebra.NET is a simple library designed to facilitate easy expression and manipulation of algebraic functions. For instance, here's a simple function:

Function<Func<double, double>> a = Algebra.Function(x => 2 * x + 1);

We can compile such a function to efficient IL:

Func<double, double> func = a.Compile("times2plus1");

Or we can apply some algebraic identities to rewrite it:

Identity commutative = Algebra.Identity(x => x + 1 == 1 + x);
Identity mulEqAdd = Algebra.Identity(x => 2 * x == x + x);
Console.WriteLine(a);
Console.WriteLine(a.Rewrite(1, commutative, mulEqAdd));

// Prints:
// ((2 * x) + 1)
// (1 + (x + x))

Rewrites can sometimes loop forever (consider "x + y == y + x"), so the Rewrite method takes a number indicating the maximum number of iterations to perform all the rewrites.

All the usual arithmetic operations are available, including an extension method for exponentiation:

var f = Algebra.Function(x => x.Pow(3));
Console.WriteLine(f);

// Prints:
// (x ^ (3))

Design

As of this writing, Algebra.NET is a functional example of a simple term rewriting system. Term rewriting is usually pretty awkward to express in an object-oriented language, and I banged my head against the keyboard to figure out a nice way to do it, until I hit on just doing unification (of course!).

So I reused the term language and added an equality operator to generate an identity that conceptually maps one term to another. I then perform unification on the left hand side, and generate a set of substitutions to transform the matching term into the right hand side of the identity.

It was ultimately quite simple, consisting of 3 methods on Term:

Term Rewrite(Identity e, Term[] bindings)
bool TryUnify(Term e, Term[] bindings)
Term Substitute(Term[] bindings)

Rewrite tries to recursively unify the Identity's left hand side with the current term using TryUnify. On success, the 'bindings' array will have been populated by TryUnify with the substitutions to perform, so it substitutes the bindings into the identity's right hand side to generate the new term.

There are only 3 term types: constants, variables and binary operations. Negation is handled as a binary operation "0 - x" for simplicity. The unification methods on each of the term types are only a few lines of code each, but are quite powerful!

So if you want to understand expression compilation to CIL, unification, or term rewriting, this is pretty much as simple as it gets.

Algebra.NET doesn't perform any term simplification at this point, only term rewriting. Some rewrites may of course be simplifications, but a term like "0 - 3" will not be simplified to "-3".

Future Work

As mentioned, Algebra.NET doesn't perform simplification, so that's a big one. I started developing this to work on symbolic and automatic differentiation for numerical optimization problems. I'm aware of other .NET libraries for this, but I didn't like how clumsy it was to write algebraic expressions, nor did they have nice and extensible facilities for rewriting expressions. So I created this in my spare time and intended to continue fleshing it out as needed.

Monday, August 10, 2015

Sasa v0.17.0 Released

A few new features in this release, and a few bugfixes and enhancements in MIME parsing. As before, the online docs are available on my site, the full release including experimental assemblies on Sourceforge, and the production assemblies on Nuget. The changelog:

 * sasametal can now replace columns and FK associations with enums that are
   provided separately
 * sasametal can now output an interface file, which generates an interface
   describing the database entities
 * many MIME fixes thanks to a few bug reports on Sasa discussion list
 * added System.Data, which provides useful extensions to ADO.NET and
   IQueryable, like a simple interface for batched SQL queries
 * MIME parser now accepts an optional parsing parameter that controls various
   parsing options, like how strictly e-mail addresses are validated,
   and whether HTML views should be parsed into MailMessage.Body
 * added enum as a primitive to Sasa.Dynamics IReducer and IBuilder
 * fixed Sasa.Enums.IsSequential check

Sasa.Dynamics probably isn't used much, but in my experiments I found it increasingly important to handle enums as a primitive, so type reduction and construction (IReduce and IBuild respectively) now include them as such.

I created sasametal to ease the pain of maintaining several legacy systems that depend upon Microsoft's Linq2Sql. It first renames some associations to make them more meaningful, since Linq2Sql does a bad job of this in some cases. This made some integrations with ORMs like EntityFramework and NHibernate a little easier, but there still remained some painful points.

One of these pains was handling enums, ie. an entity set whose members are effectively fixed. Sasametal now takes enums into account via a command-line option, /enum:[file path], that accepts a file listing table-to-enum rewrites, one per line. For instance, if an entity Foo had a StatusId column with a FooStatus enum for the status, we could call sasametal like so:

C:\somepath> sasametal ...[other option]... /code:FooDb.cs /enum:enums.txt

and the enums.txt contains merely:

dbo.FooStatuses=Namespace.Statuses.FooStatus

You should provide the full table name, including the schema, as sqlmetal would generate it, and the fully qualified name of the enum you want to substitute. Sasametal will then eliminate the FooStatus entity and any FK associations linked to that entity, then it will update the type of any properties for that foreign key to use the enum type. The entities generated by sasametal are thus somewhat more efficient, and I've found it easier to use Linq2Sql as-is for quick projects against legacy systems.

When combined with sasametal's other new feature, generating interfaces describing the entities, it also eases the transition to more sophisticated ORMs that already handle enums.

Monday, June 15, 2015

Sasa v0.16.1 Released

Just a bugfix release, mainly for the MIME parsing. Changelog:

* added support for 8bit transfer encoding, even if not supported by .NET <4.5
* support content types whose prologue is empty
* added support for arbitrarily nested multipart messages
* added alternative to Enum.HasFlag that invokes Sasa's faster enum code

As usual, online docs are available, or a CHM file is available on Sourceforge.