Friday, December 27, 2013

Of Optional and nulls

Here at last is that rant about Optional<T> I've promised for so long. Let me preface it by saying that I am not about to propose an ideal way of handling nulls in Java; I don't think Java's null handling will ever be great. That said, there are better and worse ways of doing it, and I think Optional<T> isn't the best way. What's worse, it's edging out a better way.

For the unfamiliar, Optional<T> is a Guava class that aims to eliminate NullPointerExceptions. It has two forms: Optional.absent() and Optional.of(T item). Rather than a method passing back a nullable Foo, it returns an Optional<Foo>. You then call isPresent(), followed by get() iff the item is present.

Optional<Foo> myFooOpt = tryGetFoo();
if (myFooOpt.isPresent()) { // like a != null check
    Foo myFoo = myFooOpt.get();
    // work with the foo
} else {
    throw new NoFooFoundException(); // or whatever
}
The idea is that since you have to call get() to get at the Foo, you'll probably remember to check isPresent first — and thus, no NPEs. It seems reasonable enough, but there are two big problems with it. First, it's verbose; and second, it's not backwards compatible.

The verbosity comes down to a lack of pattern matching in Java. Optional<T> is inspired by functional programming languages that have pattern matching — think of it (very roughly) as an instanceof check combined with an un-constructor. Here's how you'd use Haskell's equivalent of Optional<T>:

case tryGetFoo of
    Just foo -> handleFoo foo
    _ -> handleNoFoo

See how much cleaner that is? Optional<T>-type constructs really benefit from a terse way to get at the wrapped object. Pattern matching provides that in two ways: it combines the isPresent() check and the get() into one construct, and it thereby eliminates the need for a temporary, throwaway reference like myFooOpt.

Java is trying to move away from verbose boilerplate; one could argue that the driving force behind both Java 7 and 8 is conciseness, not new features. So why is the Java world embracing the overly-verbose Optional<T>?

The backwards compatibility problem is more clear-cut: existing libraries can't be retrofitted with Optional<T> without huge changes to how overload and method resolution is handled. For instance, Map.get returns V — you can't just change it to return Optional<V> without breaking a lot of code.
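That doesn't mean Optional can't sit at the edges of legacy APIs, though. Here's a sketch of wrapping Map.get at the call site; to keep it self-contained I'm using JDK 8's java.util.Optional and its ofNullable factory, but Guava's Optional.fromNullable plays the same role.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class MapBridge {
    // Map.get can't change its signature, but a caller can wrap the
    // nullable result at the boundary. Guava's Optional.fromNullable
    // is the equivalent of ofNullable here.
    static Optional<String> lookup(Map<String, String> map, String key) {
        return Optional.ofNullable(map.get(key));
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("a", "apple");
        System.out.println(lookup(map, "a").isPresent()); // present
        System.out.println(lookup(map, "b").isPresent()); // absent, not null
    }
}
```

The wrapper is trivial, but it has to be written at every call site; the library itself still hands back nulls.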

Before Optional<T> got cool, one idea people had was to use annotations to drive static analysis of the code. Mark a field as @Null, and you know it can be null; try to use it without checking for nullity, and you get a warning. Nullity can be propagated through return types and arguments, and it's all checked at compile time.

The best part is that you can retrofit it to existing classes. Map.get will never return an Optional<V>, but it could return a @Null V.
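To make the idea concrete, here's a sketch. The @Null annotation below is a placeholder I'm defining inline; real tools (the Checker Framework, JSR-305-style annotations) ship their own and do the actual compile-time analysis. At runtime this annotation does nothing — all the value is in what a checker could prove statically.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class NullAnnotations {
    // Placeholder annotation, purely illustrative.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
    @interface Null {}

    // A checker would see @Null and force callers to test for null
    // before dereferencing the result.
    static @Null String findUser(String id) {
        return "alice".equals(id) ? "Alice" : null;
    }

    public static void main(String[] args) {
        String user = findUser("bob");
        // Without this null check, a checker would flag the dereference.
        if (user != null) {
            System.out.println(user.length());
        } else {
            System.out.println("no user");
        }
    }
}
```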

There were a few different attempts at these checks, each leading to different sets of annotations. If I had it my way, we'd see one of these — preferably a concise one — get Oracle's official blessing and widespread usage.

A type checker has to be conservative, and that means that you'd have to assume that legacy code always returns nullable references. On the other hand, for new code you'd want an un-annotated method to be assumed to be @NotNull (to cut down on verbosity). This mismatch could be solved in three ways.

  • Classes annotated with a new @NullChecked annotation would have their methods assumed to be @NotNull.
  • All newly compiled code would assume @NullChecked.
  • The type checker could take additional inputs in the form of files that list methods which should be treated as @NotNull regardless of their bytecode.

The third one of those would mean that you could mark methods as not-nullable without touching their bytecode at all. This could be useful for some serialization issues, but more importantly, it would let people locally update projects without waiting on their maintainers.

With that migration path in place, compilers could start treating unsafe dereferencing as an error rather than a warning. And maybe, just maybe, Java could recognize it as important enough to warrant syntactic sugar: T? as shorthand for @Null T. Kotlin employs a similar trick, and while I haven't actually used it, it sure looks nice.

There are other tricks you can do with annotations that expose a lot of power (including how it interacts with subtyping, etc), at the cost of more complexity. I'm not sure Java needs all those — but even without any of them it's still at least as powerful as Optional<T> — with the added benefit of backwards compatibility.

I'm not sure why annotation-based static analysis never caught on. Maybe the pushes were too fragmented, and developers weren't willing to hack in ugly ways to solve backwards compatibility (like my "additional inputs" file)? Maybe the edge cases are just too many and complicated? A quick Google search didn't give me any answers.

Wednesday, December 25, 2013

Explaining Effes using easy words

I read a nice thing today: the person who wrote it was explaining what they do using only easy words. (He got that idea from another place.) That person works on some very hard problems, but he was still able to explain them. I thought I would do the same for Effes (which isn't as hard as what that other person does!).

In order to get a computer to do something, you have to say what you want in a different way than normal talking. There are many, many ways to do this, but only a few are used by most people. I want to come up with a new way of talking to a computer, but I know that it won't ever be used by most people. I'll probably be the only person to ever use it, if I ever finish it at all! So why do I want to do this?

First of all, it's fun. I like learning about new ways of talking to computers, so coming up with one of my own is interesting. This is the most important reason, and it's why I sometimes don't work on this for weeks, if I'm not in the mood. But I also have some ideas I haven't seen before, and which I think might be fun to try.

Most ways of talking to computers have a way of saying that one thing is a kind of another thing. This idea is very important. You can say that a dog is a kind of animal, and so is a cat. This means you can think of both a dog and a cat as being just an animal — in which case you can ask it to walk or eat — or you can think of a dog as exactly being a dog, in which case you can ask it to sit (which you can't ask a cat to do).

Most ways of talking to computers focus on that idea, but Effes focuses on another one: that a dog is an animal added with something that sits. This lets you add a dog with even more things — like something that chases balls. You can even say that something runs, eats, sits, and chases balls, without saying that it's a dog. That means if you have a horse, you can say easily that it does all those things, without having to say that there is such a thing as a "running, eating, sitting, ball-chasing animal," which a dog and horse are, but a cat is not (remember, the cat does all of those things except sitting).

This idea seems simple, but there are hard parts to it. A dog and a person can both eat, but let's say a person can get full while a dog never can (they like to eat a lot!). So if you have something, and all you know is that it eats (you don't know if it's a dog or a person), then it's hard to know if it should be full after it eats.

A bigger problem is if you add a dog and a person together. That doesn't really make sense, but you can still ask the computer to do it! If you have such a thing, and you ask it to eat, then is it full? Its person-half is full, but its dog-half isn't. But a thing can't be both full and not-full. (There are real cases that are like the dog-person but more normal, but for now let's focus on the dog-person.)

The answer to this problem in Effes is that sometimes the dog-person is both full and not-full, and the computer thinks about things both ways. But other times, you tell the computer that one of the halves is more important, and then the computer only thinks about that half's answer to the question, "are you full?" So if you ask if a dog-person is full after it eats, the answer could be "yes and no," or "yes" (if the person half is more important) or "no" (if the dog half is more important). You get to pick which it is.

So far, I have only thought about some of these ideas. My next step is to get the computer to actually start thinking in this new way.

Wednesday, December 4, 2013

An argument for using both composition approaches

A recap of where we are: in my last few posts about object composition, I initially argued that simple composition methods don't work because of mismatches between compile-time and run-time information.

Instead, I thought I'd have to go with a model analogous to Schrodinger's cat, where a composed object's methods can be bound to multiple implementations at the same time. Each of those implementations is run, and the results are themselves composed into a single object for the result value. Like a cat that's both alive and dead, the method is bound to both implementation A and implementation B — and its result is both A' and B'.

There's a certain beauty to that, but it's pretty complicated. I also suspected it may result in slow code as objects get more and more complicated at runtime. So I thought some more and came up with a solution that leads to a simple, non-Schrodinger object.

But maybe these concerns are premature on my part. I'm basically trying to do two optimizations — for simplicity, and for performance — without having measured how bad they are.

On the other hand, back when I first proposed the Schrodinger approach, I noted that eventually you'll need to collapse the wave function, so to speak. At the end of the day, you need to print "I'm a tea pot" or 3.14159; you can't print their superpositioned composition.

So then, maybe the solution is to use both approaches. The Schrodinger approach will be the standard composition mechanism (not in any formal way, but as a coding recommendation), while the "simple" approach will be used to collapse the wave.

To anchor things, let's call the standard composition operator <>, and the simple one </ (you'll see why in a bit).

I'll also define a reserved type Simple which can only be composed into other objects using the simple composition operator. So, if a method takes Simple Foo Bar and you have a Foo Bar (which may be superpositioned), you need to compose it using the simple operator: Simple </ myFooBar.

This handles the print case, for instance. stdout.print will take a Simple String, and it's up to the caller to ensure that its variable is already Simple, or to collapse it otherwise.

One of the lingering questions I had in the back of my mind was what to do when two objects of the same type are composed: SomeInt(1) <> SomeInt(2). Now the answer seems obvious. With the Schrodinger composition operator, just superposition them both in. With the simple operator, the left one will get folded in first, and the right one will then collide and be discarded.

That's why I picked </ as the operator. It looks like it points to the left, and thus conveys that the left operand has precedence in case of collisions.
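Here's a hedged Java sketch of that collision rule — not Effes, just a model in which a composed object is an ordered map from component type to value, and the simple operator folds left to right, discarding colliding right-hand components. All the names are mine.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleCompose {
    // Model a composed object as an ordered map from component type
    // name to its value.
    static Map<String, Object> simpleCompose(Map<String, Object> left,
                                             Map<String, Object> right) {
        // The left operand is folded in first; a colliding right-hand
        // component is discarded -- which is why "</" points left.
        Map<String, Object> result = new LinkedHashMap<>(left);
        for (Map.Entry<String, Object> e : right.entrySet()) {
            result.putIfAbsent(e.getKey(), e.getValue());
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> a = new LinkedHashMap<>();
        a.put("SomeInt", 1);
        Map<String, Object> b = new LinkedHashMap<>();
        b.put("SomeInt", 2);
        // SomeInt(1) </ SomeInt(2): the left value survives.
        System.out.println(simpleCompose(a, b));
    }
}
```

A Schrodinger-style <> would instead keep both values per key (a list or set of them); the map model makes the difference between the two operators a one-line change.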

There's not really a compelling argument for this complexity. Why don't I just stick with the simple composition approach? Basically because I think the Schrodinger variant seems fun and worth playing around with.

Monday, December 2, 2013

Saving object composition from complexity

In my last post about object composition, I concluded that simple composition approaches don't work because of the mismatch between compile-time and run-time information. But I realized a few days later that I can define a simple composition.

The trick is to cheat: come up with the answer I want first, and then work backwards to fill in the rest!

What I do is to add a new step at the beginning of composition. In this step, we start out with a "composition target." We then fold the composition operands' components in two phases: first the ones that are part of the target, and then the rest.

What's the target type? At compile time, it's empty; at run time, it's the compile-time type.

As always, I'll illustrate with an example. In fact, I'll use the same example as before. Here's the background stuff:

type List[A] =
    ... (put/get)
    size -> Int
type Box[A] = ... (put/get)

type Container:
  isEmpty -> Bool -- abstract method

type Container Box[A]:
  @override isEmpty -> Bool = TODO

intsList : List[Int] = list(1, 2, 3)
intsBox : Box[Int] = box(4)

tmp = intsBox <?> Container
intsContainer : Container = tmp

... and here's the composition:

composed = intsContainer <?> intsList
composedSize = composed.size

The problem we had before was that intsContainer is only a Container at compile-time, but it's a Box[Int] <?> Container at runtime. When Container is composed with List[Int] at compile time, no conflicts are found, so the resulting type is Container <?> List[Int]. But at run time, Box[Int] and List[Int] collide and cancel each other out, and the resulting object is only a Container, which doesn't have a size method.

The composition target saves the day. Everything works the same at compile time, leading to a type of Container <?> List[Int]. At run time, we take the following steps:

  1. Decompose both operands, so that the composition is (Box[Int] <?> Container) <?> List[Int].
  2. The target type is Container <?> List[Int], so take those components out of the composition and fold them into the resulting object. We now have:
    • A folded object of Container <?> List[Int]
    • Remaining object of (Box[Int] <?> ∅) <?> ∅, which simplifies to just Box[Int].
  3. Fold the rest of the components (just Box[Int]). Box[Int] collides with List[Int], so discard it — and there's nothing else to fold.

So the resulting object is a Container <?> List[Int], which is exactly what we expected at compile time!
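The two-phase, target-first fold can be sketched in a few lines. This is a Java model of the steps above, not real Effes machinery: each component is a type name mapped to its set of method names, and all the names are just the ones from the example.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class TargetFold {
    // components: type name -> method names; target: the compile-time type.
    static List<String> fold(LinkedHashMap<String, Set<String>> components,
                             List<String> target) {
        List<String> folded = new ArrayList<>();
        Set<String> methods = new HashSet<>();
        // Phase 1: fold the target's components first.
        for (String t : target) {
            if (components.containsKey(t)) {
                folded.add(t);
                methods.addAll(components.get(t));
            }
        }
        // Phase 2: fold the rest, discarding any component whose
        // methods collide with what's already folded.
        for (Map.Entry<String, Set<String>> e : components.entrySet()) {
            if (folded.contains(e.getKey())) continue;
            if (Collections.disjoint(methods, e.getValue())) {
                folded.add(e.getKey());
                methods.addAll(e.getValue());
            }
        }
        return folded;
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Set<String>> runtime = new LinkedHashMap<>();
        runtime.put("Box[Int]", new HashSet<>(Arrays.asList("put", "get")));
        runtime.put("Container", new HashSet<>(Arrays.asList("isEmpty")));
        runtime.put("List[Int]", new HashSet<>(Arrays.asList("put", "get", "size")));
        // Target is the compile-time type: Container <?> List[Int].
        // Box[Int] collides in phase 2 and is discarded.
        System.out.println(fold(runtime, Arrays.asList("Container", "List[Int]")));
    }
}
```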

I wrote above that the target type at compile time is empty, but even that can be improved upon: it's the closest available explicit type declaration, if any. So, if you had:

boxContainer = intsBox <?> Container
composed2 = boxContainer <?> intsList

... then it would result in a compile-time error, since intsBox : Box[Int] collides with intsList : List[Int] (they both define put and get methods). But if you had:

composed3 : List[Int] = boxContainer <?> intsList

... then the target type is List[Int], meaning that this gets folded in first. When the conflict with Box[Int] is detected in the second folding stage, Box[Int] is discarded. The resulting type of the composition (this is all at compile time, remember) is List[Int] <?> Container, which is then "upcast" to List[Int] when it's assigned to composed3.

Thursday, November 21, 2013

Half-thread-safe collections in Java

I thought about an "interesting" (i.e., I'm probably the only one who finds it interesting) question yesterday. Is it possible to define a thread-safe collection in Java that doesn't transfer objects across threads safely?

I know, that sounds weird; let me explain. The thread-safe collections in the JDK all have documentation to the effect of "actions in a thread prior to placing an object into a BlockingQueue happen-before actions subsequent to the access or removal of that element from the BlockingQueue in another thread."

My question is: is it possible to write a collection such that the collection itself is thread-safe, but it doesn't provide that sort of guarantee? I'm using "collection" colloquially as "some class that provides put-like and get-like operations."

That is, given:

class Foo {
    int val;
}

// Thread A:
Foo fooIn = new Foo();
fooIn.val = 5;
weirdCollection.put(fooIn);

// Thread B:
Foo fooOut = weirdCollection.get();
assert fooOut.val == 5;

... is it possible to create a weirdCollection such that it is apparently thread-safe, yet the assertion would fail?

You might ask what I mean by "apparently thread-safe." The easiest definition might include that its implementation contains no data races, but that's too strong. After all, java.lang.String contains an intentional data race (in hashCode()), but it's considered thread-safe.
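That String trick is worth a quick sketch. This isn't String's actual code, just the racy-cache idiom it uses: a non-volatile cache field that two threads might both write, harmlessly, because they always compute the same value from immutable state.

```java
public class RacyCache {
    private final int value;
    private int hash; // non-volatile: threads may race on this cache

    RacyCache(int value) {
        this.value = value;
    }

    @Override
    public int hashCode() {
        int h = hash;
        if (h == 0) {
            // Another thread may be computing this concurrently; that's
            // fine, because both derive the same deterministic result
            // from the final field 'value'. The worst case is redundant
            // work, never a wrong answer.
            h = 31 * value + 17;
            hash = h;
        }
        return h;
    }

    public static void main(String[] args) {
        RacyCache r = new RacyCache(42);
        System.out.println(r.hashCode() == r.hashCode());
    }
}
```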

Let me define "apparently thread-safe" to mean that the collection itself appears to work in a thread-safe manner from the outside. For instance, if the get method is supposed to block until an element is put in, then you should expect that (a) it will return reasonably soon after an element is put into the collection (i.e., almost instantly) and (b) the result will never be null (let's assume the put method doesn't accept nulls).

Here's what I came up with:

public class Holder<E> {
  private E unsafe;
  private volatile E safe;

  public void put(E element) {
    if (element == null) throw new NullPointerException();
    this.unsafe = element;
    this.safe = element;
  }

  public E get() {
    E result = unsafe;
    // We weren't lucky enough to get the unsafe reference
    while (result == null) {
      result = safe;
    }
    return result;
  }
}

So, my first step is to demonstrate that it's apparently thread-safe (as defined above). To do that, I have to demonstrate that get() will return a non-null reference soon after an element is put.

The get() method starts by grabbing the value of unsafe and assigning it to result. At this point, result may or may not be null. If it is, the code reads safe in a busy-wait loop until it's not. The nature of volatile fields tells us that the JVM will update this state reasonably soon after the write to safe in put. Once result is non-null, we never assign to it again. Actions within a thread always appear sequential, and since result is thread-local, we're guaranteed that it won't flip back to null.

(Side note: the JVM isn't actually obligated by the JLS to update the state any time soon! It could decide to park the thread for a year, for instance, or order the read of safe after a year's worth of actions in other threads. But in practice, people expect volatile fields to get published essentially instantly, and for them not to be would be a serious performance bug in the JVM. It's similar to how the spec doesn't say how quickly i++ has to happen, but there'd be hell to pay if it weren't nearly instantaneous.)

Okay, so get() returns a non-null value almost-instantly after put. Now I need to demonstrate that this publication is unsafe. This is actually pretty easy! Imagine that result = unsafe assigns a non-null value in that first line of get. If that happens, the body of the while loop will never be executed, meaning that thread B will never read the volatile variable safe. This means there is no happens-before relationship between any of thread B's actions and any of thread A's!

Essentially, thread B got the reference that A unsafely published, but it did so purely out of the kindness of the compiler and CPU's cache. That kindness doesn't come with any other guarantees, such as that the memory referenced by that object has also been flushed. And so, the compiler/cache/whatever is perfectly free to reorder the assignment foo.val = 5 to happen after thread B reads foo.val, which would then have its default value of 0.
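To make this concrete, here's a small harness around the (fixed) Holder. A single run can't prove the race, of course; it just exercises the interface, with comments marking where the happens-before edge may be missing. The int[] payload stands in for the Foo class above.

```java
public class HolderDemo {
    static class Holder<E> {
        private E unsafe;
        private volatile E safe;

        void put(E element) {
            if (element == null) throw new NullPointerException();
            this.unsafe = element;
            this.safe = element;
        }

        E get() {
            E result = unsafe;
            while (result == null) {
                result = safe; // spin until the volatile write is visible
            }
            return result;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Holder<int[]> holder = new Holder<>();
        Thread writer = new Thread(() -> {
            int[] payload = new int[] {5};
            holder.put(payload); // unsafe write first, then the volatile one
        });
        writer.start();
        int[] out = holder.get(); // spins until some reference is visible
        writer.join();
        // Whether out[0] is guaranteed to be 5 is exactly the question:
        // if get() happened to read the non-volatile field, there's no
        // happens-before edge, and in principle 0 could be observed.
        System.out.println(out.length);
    }
}
```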

I wonder how far this could go. I suspect that it can't go very far: even something as simple as a singly-linked list would get into trouble, as actions like removing a node would be fraught with data races that would be hard or impossible to detect and compensate for in the code (as we did by resorting to the spin on result = safe above).

Tuesday, November 19, 2013

I'm going to have to maybe-kill the cat

In my last post, I discussed problems with the runtime binding of composed objects: if object a has a method foo, and object b also has a method foo, then how does the composed object (a <?> b) behave? In this post, I'd like to explore a separate but related pair of problems: what's the type of a <?> b, both at run-time and compile-time?

(Before you get too far into this post, a disclaimer: this post describes a method of composition and then explains why it doesn't work. If that strikes you as meandering and pointless — if you prefer to read about ideas that might work, rather than those that definitely won't — then you might want to skip this post.)

As before, let me set this up with an example. As an added bonus, you'll get to see the latest revision of Effes' syntax, which is pretty close to complete (now I really mean it!). Since I haven't laid out how composition works, I'll use <?> as a placeholder for "some variant of composition." With that said:

type List[A] = data Node(head:A, tail:List[A]) | Nothing
  put e:A -> List[A] = TODO
  get -> (Maybe[A], List[A]) = TODO

type Box[A] = Maybe[A]:
  put e:A -> Box[A] = TODO
  get -> Maybe[A] = TODO

myList : List[Int] = TODO
myBox : Box[Int] = TODO
myContainer = myList <?> myBox

Here we have two simple types: one representing a linked list, and one representing a box that can contain zero or one items. Their details aren't important. We also have three references: one of type List[A], one of Box[A], and one representing the composition of the first two.

As I discussed in the last post, we have some options with a call like myContainer.put 1. We can take the Schrodinger approach, in which we invoke put on both components of myContainer and then compose those two results; or we can pick one as the "winning" binding and only invoke it.

Let's say we pick the single-bound, "winner" approach; that's the option I was leaning towards in the last post. To make it concrete, let's say that the left-hand side always wins out. That seems reasonable, but what about this situation:

myContainer2 = myContainer.put 123
myVal : Maybe[Int]
myVal = case myContainer2 of
  Box[Int] -> myContainer2.get
  _ -> Nothing

In this snippet, we check to see if myContainer2 is a Box[Int]. It is, so the first pattern will match. This casts myContainer2 to Box[Int] and, within that new context, invokes get. Note that get here has to be bound to the Box version; the List[A] version has a different return type (it returns a pair containing the list's possible head and its tail).

The problem is that myContainer.put was bound to the List[A] method, meaning that the Box component of myContainer2 never had 123 inserted into it: the value of myVal is Nothing, not 123! This is fully consistent, but it's confusing and violates the principle of least surprise. There has to be a better way!

One possibility is to limit composition such that myContainer2 does not have a runtime type of Box[Int]: the second (catch-all) pattern matches, and the value of myVal is Nothing.

Of course, we always want the runtime type to be a subtype of the compile-time type, so this new approach means that the composed object's compile-time type can't include Box[Int]. We can't generalize that to all compositions, or else the whole idea of composition falls apart and becomes shorthand for the not-very-useful function a <?> b = a.

We can come up with a more precise limit on composition, one that doesn't just throw away the RHS altogether. One approach is to compose objects in multiple phases. Here's one such algorithm, in rough terms:

  1. Decompose the RHS object to its constituent objects.
  2. Fold each one of those into the LHS, one at a time. As you fold each object, check to see whether it has any methods that conflict with methods on the composed object; if so, ignore that object (don't fold it in).
  3. Check whether the resulting object has any abstract methods (that weren't given a concrete definition in the composed object). If so, remove the objects that introduced those methods.

These three phases happen separately for both the compile-time type and the runtime type. Here's an example, starting with some types and some objects:

type List[A] =
    ... (put/get, as above)
    size -> Int
type Box[A] = ... (put/get, as above)

type Container:
  isEmpty -> Bool -- abstract method

type Container Box[A]:
  @override isEmpty -> Bool = TODO

intsList : List[Int] = list(1, 2, 3)
intsBox : Box[Int] = box(4)

intsContainer1 = intsBox <?> Container

The List[A] and Box[A] types are as before, except that List[A] now also has a size method.

Let's look at that last line. The LHS is Box[Int] with methods put and get, while the RHS is Container with just one method, isEmpty. There's no overlap in methods, so the composition is straightforward and results in Box[Int] <?> Container for both the compile-time and runtime type.

Okay, so that case works fine. But what about this?

intsContainer2 : Container = intsContainer1
composed2 = intsContainer2 <?> intsList
composedSize = composed2.size

At compile time, the composition looks like Container <?> List[Int], which has no conflicts and thus doesn't remove any types. But the Container component's isEmpty method isn't implemented, so it's removed: the resulting type is List[Int]. We call its size method, which is a pretty reasonable operation to call on a list.

At runtime, the composition looks like (Box[Int] <?> Container) <?> List[Int], and there is a conflict: List[Int]'s methods collide with Box[Int], and we therefore don't fold List[Int] into the composition. That means that the resulting type is just Box[Int] <?> Container — the inverse of the compile-time type! And in particular, there's no size method on that object. Boom.

I've tried a few variants on this sort of decompose-and-recompose theme: decomposing both sides, different handling of abstract methods, etc. Inevitably, the mismatch between runtime and compile-time types always blows up in my face. I don't think there's a way around it.

Unfortunately, I think this leads me to the conclusion that if I'm going to do composition, I have to do composition all the way: Schrodinger-style. This means I'll need to figure out various issues about collapsing the wave form: how does pattern-matching work, and what happens when an object is composed with an object of the same type? For instance, how does "foo" <?> "bar" work?

Friday, October 11, 2013

Of object composition and a maybe-dead cat

There's a problem with Effes that could prove deadly, at least to object composition: composed objects can conflict in ways that can't be caught at compile time, and there's no satisfying, simple way to resolve the conflicts. It's a problem I realized a few weeks ago while thinking about various use cases for composition.

To warm up, consider these types:
  • Animal
  • Dog is Animal
  • Happy
  • (Happy Dog) is Animal

Dog and Happy are simple types, and (Happy Dog) is their conjunction. Animal is an "interface-y" type with implementations provided by Dog and (Happy Dog). Let's say balto has a compile-time and runtime type of Happy Dog, and that Animal provides a method speak. Does balto speak invoke the implementation for Dog or Happy Dog?

That's not so tricky (I said it was a warmup!). Happy Dog is intuitively more specific than Dog, so we should invoke its speak. Let's try something just a bit trickier:

type Animal:
  speak -> String
data Dog is Animal:
  speak = "Woof!"
data Cat is Animal:
  speak = "Meow!"

balto = Dog
felix = Cat
dogCat = balto @ felix
spoken = dogCat speak

This is trickier. The object dogCat is both a Dog and a Cat, both of which provide equally-specific overrides of Animal::speak. Which gets invoked? If your instinct is to just reject that at compile-time, consider this slightly more dynamic variant:

type Animal:
  speak -> String
data Talking, Dog, Cat -- no inherent relationships to Animal
type Talking Dog is Animal:
  speak = "Woof!"
type Talking Cat is Animal:
  speak = "Meow!"

balto = Talking Dog
felix : Cat = Talking Cat
dogCat = balto @ felix
spoken = dogCat speak

Here, felix has a run-time type of Cat but a compile-time type of Talking Cat, which is Animal. This means we can't detect a conflict at compile time, but dogCat still has two equally-specific implementations of speak at runtime.

We can come up with something clever based on compile-time contortions: take only the left object's implementation when they're composed, for instance, or take only the one whose compile-time type provides an implementation and fail to compile if they both provide an implementation. These would all work for the examples so far, though they're not very inspiring.

The problem is that Effes, like other functional languages, lets you do pattern matching — that is, runtime type checks. This is the killer:

type Animal:
  speak -> String
data Talking, Dog, Cat
type Talking Dog is Animal...
type Talking Cat is Animal...

balto : Dog = Talking Dog
felix : Cat = Talking Cat
dogCat = balto @ felix

spoken = case dogCat of
  Animal: dogCat speak
  else: "whatever"

balto and felix never have a compile-time type of Animal, and neither does dogCat (its compile-time type is just (Cat Dog)). Yet both of them have a runtime type of Animal, and we intuit that dogCat should, too. The question, again, is what that speak method does.

There aren't many good, simple solutions to this that I can think of. There are bad, simple solutions; and there are interesting, complex ones. These are the ones I came up with:

  1. A runtime failure at balto @ felix, since it would create a possibly ambiguous object.
  2. A runtime failure when dogCat is matched against Animal and then invokes speak, since this is the aforementioned ambiguity.
  3. Pick one of the implementations by some arbitrary rule, like the left-hand side of the conjunction.
  4. Run both methods and return the conjunction of their results.

I really want to avoid runtime failures here: it's too easy to stumble into this case. The last option is interesting in a computer-sciencey way, but it creates a possibly complex world. Let's call this the Schrodinger approach, since it creates an object that's like a superpositioning of barks and meows. What if we then run that object through a pattern match?

who = case spoken of
  "Woof!": "It was a dog"
  "Meow!": "It was a cat"
  else: "What was it?"

If we took the Schrodinger approach for speak, then spoken is ("Woof!" @ "Meow!"); it can match against either of the first two cases. Pattern matching traditionally executes just the first branch that matches, but that is inconsistent with the Schrodinger approach that got us this spoken in the first place. So, let's stick with the Schrodinger philosophy of taking all of the cases and conjoining them: the resulting object is a conjunction ("It was a dog" @ "It was a cat"). Of course, the expression "It was a dog" could instead be some method, one which itself might return a "superpositioned" object; and the same could be true of "It was a cat". By the time we finish the case statement, who might be pretty darn complex!
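The run-every-branch philosophy is easy to model. Here's a hedged Java sketch: a superpositioned value as a set of plain values, with the case statement applied to each element and the branch results conjoined into a new superposition. All names are illustrative.

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class SchrodingerCase {
    // Model a "superpositioned" object as a set of simple values, and a
    // case statement as a function applied to every element; branch
    // results are conjoined into the result superposition.
    static Set<String> caseOf(Set<String> spoken) {
        Set<String> who = new LinkedHashSet<>();
        for (String s : spoken) {
            switch (s) {
                case "Woof!": who.add("It was a dog"); break;
                case "Meow!": who.add("It was a cat"); break;
                default:      who.add("What was it?");
            }
        }
        return who;
    }

    public static void main(String[] args) {
        Set<String> spoken = new LinkedHashSet<>();
        spoken.add("Woof!");
        spoken.add("Meow!");
        // Both branches match, so both results are conjoined.
        System.out.println(caseOf(spoken));
    }
}
```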

In short, we have three options for handling ambiguity: runtime failure, subtle behavior that depends on composition being non-commutative, or subtle behavior in which each ambiguous decision splits the program into a more ambiguous runtime state. The last of those options is the most novel and interesting, but eventually we have to collapse the wave-form: most of the time, we need to end up printing either "the dog says woof" or "the cat says meow."

One option is to provide both philosophies: two parallel case statements, one of which executes all branches and the other of which executes the first it finds. A similar option is to provide a sort of decomposition; given an object with a compile-time type Foo, return an object which is only a Foo, with all other composed objects removed.

Even if we take this approach, all this superpositioning makes the code a lot less strictly typed. The type system provides a very low bound on each object's guarantees! Traditional type systems generally make guarantees like, "it'll be a list, but we don't know what kind of list." Schrodinger-Effes will be able to say "it's an empty list, but we don't know if it's also a non-empty list."

It also raises the question of whether methods are overridden, or if they're always superimposed. Remember our first Balto, the Happy Dog? We intuited that its speak should override the "less specific" implementation provided by Dog. Does that still happen, or do we Schroedinger it up and invoke both?

There is one simpler option. I wrote above that because balto and felix each have a runtime type (but not compile-time type) of Animal, we'd want their composed object to also have a runtime type of Animal. What if this weren't the case? That is, what if runtime behavior could only ever override types and methods that are known at compile time, rather than perform arbitrary runtime checks such as whether dogCat is an Animal? That may be the way to go; I'll think on it for a bit.

Tuesday, September 10, 2013

Maybe it's optional?

A lot of functional and functional-inspired languages don't have the concept of null. Instead, they have types called Maybe or Optional — basically a box of 0 or 1 items. Effes is going to take that approach, but I might put a twist on it.

In a nutshell, the idea behind Maybe (I'll settle on Haskell's terminology) is that there's a Nothing that represents the absence of something, a Just Foo that represents one Foo, and a Maybe Foo type which can be either a Nothing or a Just Foo.

Like other functional languages, Haskell has syntax (called pattern matching) that's kinda-sorta like an instanceof check plus a downcast in Java. Putting it all together looks something like this:

sayHi :: (Show e) => Maybe e -> String
sayHi Nothing = "Nothing to say hi to!"
sayHi (Just e) = "Hello, " ++ (show e) ++ "!"

(The (Show e) => syntax just means that e has a show method, which is like Java's toString.) In Effes, a direct translation would be a disjunctive type:

data Nothing
data Just[A] = elem : A
type Maybe[A] = Nothing | Just[A]

sayHi (m:Maybe[A Stringable]) -> String:
  case m of
      Nothing: "Nothing to say hi to!"
      Just e: "Hello, {{e}}!"

Because Effes has a more flexible type system, we can actually get away without the Just part of the Maybe pair. Instead, it looks something like this:

data Nothing
type Maybe[A] = Nothing | A

sayHi (m:Maybe[A Stringable]) -> String:
  case m of
      Nothing: "Nothing to say hi to!"
      e: "Hello, {{e}}!"

There's not a really strong driving force for this, except that it seems a bit cleaner. Instead of a Maybe being "either nothing or a box of one something," it's "either nothing or one thing." Plus it takes advantage of my cool new type system, so that's nice too.

The problem is when the A type is itself a Maybe: Maybe[Maybe[A]]. If we see that it contains a Nothing, does that mean we didn't have anything, or that we had one Nothing? To prevent unreachable code, I'd probably want the type checker to reject this altogether: Maybe[Maybe[String]] would be a type error.
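Here's the ambiguity in miniature, as a Python sketch; I'm using None for Nothing and a hand-rolled Just wrapper, so none of this is real Effes:

```python
# With an explicit Just wrapper, "we held one Nothing" and "we held
# nothing at all" stay distinct; drop the wrapper and both collapse
# into the same value.
from dataclasses import dataclass
from typing import Generic, TypeVar

A = TypeVar("A")

@dataclass(frozen=True)
class Just(Generic[A]):
    elem: A

# Maybe[A] is modeled as: None (Nothing) | Just[A]
had_one_nothing = Just(None)  # Maybe[Maybe[str]]: we held a Nothing
had_nothing = None            # Maybe[Maybe[str]]: we held nothing at all

# The wrapper keeps the two cases distinguishable...
assert had_one_nothing != had_nothing

# ...but the "Maybe[A] = Nothing | A" design unwraps, conflating them:
unwrapped = had_one_nothing.elem if had_one_nothing is not None else None
assert unwrapped == had_nothing  # the ambiguity the type checker must reject
```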

That's not terrible, I guess, but the erroring type could be nestled in some data structure. For instance, if a linked list uses Maybe to signify an end, then LinkedList[Maybe[String]] wouldn't compile — and probably with some unintuitive or frustratingly un-actionable error message.

On balance, I'm leaning towards keeping the Just type. It doesn't add much complexity to the Maybe type, pattern matching keeps the call sites simple, and it eliminates ambiguity.

Friday, September 6, 2013

(Finally!) A use case for composed objects

I've been playing with Effes syntax off and on for the last couple weeks, and I happened across a use case for conjunctive types. I'll admit: even though I thought the idea was neat when I came up with it, I never knew what I'd use it for — it just seemed like it should be useful for something. So it was nice that a use case came up without me actively looking for it.

Having mostly settled on the smaller syntax stuff like function declarations, I decided to start putting it together by writing a JSON parser. The JSON type itself is a disjunctive type, which is very intuitive:

type Json = Null | String | List[Json] | ...

But as it turns out, conjunctive types — informally, "TypeA = TypeB and TypeC" — are useful as the parsing function's result type.

Parsers often break the principle of single responsibility for the sake of usability: they're responsible for parsing text and providing context in the event of failure. It's not a major problem, but it does pose some design questions. For instance, how much context should the parser keep? The current line number? Column number? Previously seen text? How much previous text? Note that it doesn't need any of this for its main task, which is parsing.

In my parser, I solved this by having the parser return either a Json object or a failure composed with the remaining string: Failure("expected a comma") @ string. (I've picked @ as my composing token.) The full result type is Json | (Failure @ String).

That doesn't sound exciting at first, but remember that a string is just a list of characters, and a character is itself an object — meaning it can itself be composed! In other words, what the parser thinks of as List[Character] might actually be List[Character @ MyContextFoo]. That context component could contain the current line and column position, the previous few chars of context, etc.

That means that the parser is agnostic of all that contextual information, which after all it doesn't need. It keeps chipping away at the string, oblivious of the fact that the characters it sees are in fact rich objects instead of plain Chars; in fact, the string is oblivious of that, itself.

At the application level, parsing is now a three-step process: get the input string, transform it into a string of "rich" characters (basically by mapping each character and then using those mapped characters to create a new string), and parse that enriched string.
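Here's a rough Python sketch of that pipeline. RichChar, enrich, and parse_word are invented names purely for illustration:

```python
# Step 1: take a plain string. Step 2: wrap each character with positional
# context. Step 3: parse, looking only at the character itself; the
# context rides along, available for error reporting.
from dataclasses import dataclass

@dataclass(frozen=True)
class RichChar:
    char: str   # what the parser cares about
    line: int   # context the parser never looks at
    col: int

def enrich(text):
    """Turn a plain string into a list of context-carrying characters."""
    line, col, out = 1, 1, []
    for c in text:
        out.append(RichChar(c, line, col))
        if c == "\n":
            line, col = line + 1, 1
        else:
            col += 1
    return out

def parse_word(chars):
    """A toy 'parser': reads letters, stops at the first non-letter.
    On failure it returns the offending rich character, which still
    carries its line/column for the error message."""
    word = []
    for rc in chars:
        if not rc.char.isalpha():
            return "".join(word), rc  # failure composed with context
        word.append(rc.char)
    return "".join(word), None

word, failure_at = parse_word(enrich("abc,def"))
```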

One problem I haven't resolved yet is how to statically type all this. It's not enough for the programmer to know that the string is enriched; how can they convey this to the type system in such a way that the information gets through to the result type? This is important, because otherwise the language devolves to a static-dynamic hybrid. That's a slippery slope that will probably lead to mostly dynamically-typed programs.

I don't want the parser to have to know that it's working with a subtype of String if that's only relevant to the result type. If nothing else, that puts the onus on the API author to always use general (and presumably more verbose) declarations; the strength of this idea is that it puts the power in the call site's author's hands. On the other hand, I do want the input string's "subtype" to bubble back up, so that the return type is promoted to something like Json | (Failure @ MyCoolString). I have some ideas, but the devil is in the details...

Friday, August 9, 2013

Time to get cracking

"This business isn't for men of words," said Borja. "It's for men of action."
A. Perez-Reverte, The Club Dumas

I have some ideas for Effes that I haven't written about yet, but it's time to take a break from this exploratory spec work and get some concrete stuff done. I'm going to handle this in four stages:

  1. Feature freeze: Done, as of... now. In order to make Effes what I want it to be, I will eventually add features such as easy exec'ing to shell (hopefully as easy as it is in bash), easy parsing of inputs from shells, and other things. Eventually I'll probably need a multithreading model, too. But for now, I'm going to say that anything I haven't discussed so far isn't making it to this first pass.
  2. Informally figuring out the syntax: In other words, me toying around in vim to figure out what I want programs to look like.
  3. Coming up with a spec: Basically, stabilizing the doodles from the previous step and formalizing them.
  4. Implementation: I have no idea how I'll do this. Hopefully I'll be brave enough to try and use Haskell until Effes can bootstrap itself. I'm going to have to read up on type inference, aren't I? Yikes.

Stage 2, which is the one I'm in right now, probably won't yield much interesting blog material. Stage 3 might, and I hope stage 4 will — though it'll probably be more about me learning Haskell than about Effes, at that point. In the meanwhile, I've got some rants and mouth-foaming that might kill some time. Didn't I promise a rant about Guava's Optional<T> a few months ago?

Wednesday, August 7, 2013

Conjunction disjunction, what's your function?

It's time to turn my attention to functions. Like virtually any language, Effes has functions; like any language written in the last few years, those functions are data. (At what point does the term "functional programming language" start to lose its meaning, given that every language these days incorporates FP principles?)

Every function in Effes has a type, which is just its signature. For instance, doubler(i : Int) = i * 2.0 would have a type of Int -> Double. Subtyping basically follows the Liskov substitution principle: f : Vehicle -> Door is a subtype of g : Car -> Part because you can use f anywhere you'd be able to use g.
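Here's that substitution rule sketched in Python, assuming (as the example implies) that a Car is a Vehicle and a Door is a Part:

```python
# Liskov substitution for function types: f : Vehicle -> Door can stand
# in for g : Car -> Part because f accepts at least everything g accepts
# (inputs are contravariant) and returns something usable wherever g's
# result is usable (outputs are covariant).

class Vehicle: ...
class Car(Vehicle): ...
class Part: ...
class Door(Part): ...

def f(v: Vehicle) -> Door:      # the subtype: wider input, narrower output
    return Door()

def install_part(get_part, car: Car) -> Part:
    """Written against g : Car -> Part, but perfectly happy to take f."""
    return get_part(car)

part = install_part(f, Car())   # f used anywhere g would have been
```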

Like data types, function types can be composed into conjunctive and disjunctive types. Take the following:

data Cat, Person
countLegs : (Cat -> Int) = c -> 4
sayHi : (Person -> String) = p -> "Hi, I'm #{}"

countHi = countLegs sayHi

countHi is a conjunction of countLegs and sayHi. Invoking this function doesn't chain or compose them; only one will get invoked. Think of it as a box that contains both functions: invoking this box means opening it, taking out one of the functions, and using it.

This follows the data trait analogy: a Cat Person type refers to an object that is both a feline and a human; it also can be thought of as two separate objects stuck together. When you have a Cat Person and ask for its fur type, you're looking at its cat-like nature; when you ask for its social security number, you're looking at its person-like nature. Similarly, countHi has both a countLegs-like nature and a sayHi-like nature.

(There's a slight difference here in that a conjunctive data type can be used in functions that look at both of its aspects; this doesn't apply to functions.)

This means that the input to countHi can either be a Cat (in which case countLegs is invoked) or a Person (in which case sayHi is invoked). In the first case, the result is an Int, and in the second case it's String. Putting that together, we get a type of (Cat | Person) -> (Int | String). More generally:

(Ia -> Ra) (Ib -> Rb) : (Ia | Ib) -> (Ra | Rb)

Disjunctions follow the same analogy; a Cat | Person type refers to a single object which is a feline or person, so functional disjunctions work the same way. Imagine a type that said "I'm either countLegs or sayHi". To pass an argument into it, you'd have to guarantee that the argument could be passed to either function; it would have to be both a Cat and a Person. Depending on which function got invoked, the result would be either an Int or a String.

(Cat -> Int) | (Person -> String)
 : (Cat Person) -> (Int | String)

(Ia -> Ra) | (Ib -> Rb) : (Ia Ib) -> (Ra | Rb)

Notice the symmetry here: The input to a conjunction of functions is the disjunction of their inputs, while the input to a disjunction of functions is the conjunction of their inputs. In both cases, the result is a disjunction of their results.
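Here's the conjunction half of that symmetry as a Python sketch; the conjoin helper and the isinstance-based dispatch are only an illustration of the idea, not a real implementation:

```python
# A conjunction of functions is a box holding both; invoking it opens
# the box and picks the one whose input type matches the argument. So
# (Cat -> int) @ (Person -> str) accepts a Cat | Person and returns
# an int | str.

class Cat: ...
class Person: ...

def count_legs(c: Cat) -> int:
    return 4

def say_hi(p: Person) -> str:
    return "Hi, I'm Yuval"

def conjoin(*funcs):
    """Build the 'box of functions'; dispatch on the argument's type."""
    def dispatch(arg):
        for f in funcs:
            # the first annotation is the function's input type
            param_type = next(iter(f.__annotations__.values()))
            if isinstance(arg, param_type):
                return f(arg)
        raise TypeError("no function in the conjunction accepts this")
    return dispatch

count_hi = conjoin(count_legs, say_hi)
```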

One edge case is when the input to a conjunction is composed of both input types. For instance, what happens if we break out the beakers and engineer a Cat Person? If we pass it to a conjunction countLegs sayHi, both functions in that composed function can process it. Which one gets to? (This isn't a problem for a disjunctive type, since its object is just a single function.)

I'm not positive how to handle this case. One solution is to have both functions process it, and then compose the result. So, passing Cat Person to countHi = countLegs sayHi would result in a (4 "Hi, I'm Yuval") : (Int String).

Friday, July 26, 2013

Another blow against strict immutability

I know I've talked about mutability before, but I realized the other day yet another argument against taking immutability too far: it complicates logging.

I'll take Haskell as my usual favorite example, but this time I won't put it in a favorable light. Because it treats all I/O as non-pure, mutable behavior, any logging you do has to happen within the IO monad. This means you have to change your application's structure to support logging: you have to do it within a monad, and if that monad isn't IO, it needs a monad transformer stack with IO at the bottom. If it can't support that, and you want logging — find a new monad.

(Full disclosure: I haven't written any real apps in Haskell, so I haven't needed logging past what the repl shell gives me. Maybe all of this is not a problem in practice.)

You can cheat a bit: there's the Debug.Trace module, which lets you log from pure code, but it's not meant for production. For the most part, pure code just can't be logged.

To me, logging is critical. It's important both for "interactive" debugging (as I'm writing new code) and for figuring out what went wrong when a bug crops up in the field. At the same time, it's orthogonal to the logic of the application, and so should dictate as little as possible about how the application is set up. If I have to turn a pure function into an impure one just so I can log some values, that seems like the tail wagging the dog.

Even though Effes is restrictive about mutable types, it considers loggers immutable because their actions (namely, printing bytes to a file) can't directly change the behavior of Effes objects. This means you can use a logger anywhere you like, which is good.

Thursday, July 25, 2013

Polymorphic upcasting?

In my last post, I summarized Effes' polymorphism by observing that you can't know exactly how an object will behave unless you create it. That's actually not so weird — it's how all OO code works. I know exactly how an ArrayList works in Java; I have pretty much no clue how a List I get from a function will work (unless I look at that function's code, of course). The twist that Effes brings is that the binding is done based off of an object's specific composition, not based off its class. It's much more fluid.

That last post had various examples, but here's an interesting one: you can actually upcast an object's function resolution! Here's how:

data Loud; Animal; Cat
Cat is Animal

speak(c:Loud Animal) : String = "Stomp!"
speak(c:Loud Cat) : String = "Meow!"

getAnimal() : Animal = Loud Cat
animal = getAnimal() // 1
loudAnimal = Loud animal // 2


At line // 1, we get some sort of Animal. We don't know what kind, but we don't need to because of polymorphism. As it happens, it's a Loud Cat, which means it'll meow at us when we ask it to speak.

At line // 2, we take our animal and make it loud. The interesting thing to note here is that the animal already was loud, though the compiler doesn't know it: specifically, it was a Loud Cat. But, since the compiler doesn't know that, it creates a new object, with new bindings. At that point, the only thing the compiler knows is that it's a Loud Animal, so it binds it to the stomping version of speak.

This behavior is a bit counterintuitive at first. By doing what most systems would call a no-op (I even said it would be, last month!), an object's behavior changes. Specifically, adding the Loud trait to an object that already had it ended up changing its function binding.

This may seem weird, but I'd like to give it a try. My hypothesis (really, my hope) is that the tradeoff of more deterministic object creation is worth it.

Incidentally, this rebinding would not happen if the operation was known to be a no-op at compile time:

getLoudAnimal() : Loud Animal = Loud Cat
animal = getLoudAnimal()
loudAnimal = Loud animal

Here, animal is already Loud, so adding Loud to it becomes a compile-time no-op, with no rebinding. While this particular example is silly, this use case is actually useful if the redundant trait has parameters that you'd like to change:

data Microchipped(id : Int)
data Cat

getAnimal() : Microchipped Animal =
  Microchipped(123) Cat

animal1 = getAnimal()
animal2 = Microchipped(456) animal1

Here, we're changing the animal's microchip id without having to worry about upcasting its function resolution.
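Incidentally, this "re-compose to replace one trait parameter" pattern has a close analogue in Python's frozen dataclasses. A sketch, with a class name that just mirrors the example:

```python
# Re-composing Microchipped(456) onto an existing animal is a lot like
# building a new immutable record that differs in one field. The old
# object is untouched; you get a fresh one with the new parameter.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MicrochippedCat:
    chip_id: int

animal1 = MicrochippedCat(chip_id=123)
animal2 = replace(animal1, chip_id=456)  # new object, new chip id
```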

Wednesday, July 24, 2013

Subtyping and function resolution (at last!)

I've been putting off for too long my ideas for subtyping and function resolution, because I've wanted to get things juuust right. Instead, I'm going to just outline my ideas and move on, because I want to start coming up with a grammar and writing some actual Effes. I've been dragging my feet for too long!

So, subtypes first, then function resolution.

Type hierarchy in Effes

  • A is a subtype of B if there is a "class-based" implementation that says A is B. For instance, if we declare that Stack[A] is Sizeable where..., then Stack[A] is a subtype of Sizeable
  • A is a subtype of B if A is a conjunction type whose parts include B. For instance, FourWheeled Vehicle is a subtype of FourWheeled and Vehicle.
  • Union types act as a single unit and don't establish any subtype relationships with their components. So:
    • Cat | Mouse isn't a subtype of Cat or Mouse.
    • Neither Cat nor Mouse are a subtype of Cat | Mouse.
    • However, since Cat | Mouse is a single unit, Cute (Cat | Mouse) is one of its subtypes.
  • Any type is of course a subtype of itself.
  • Without getting into too much detail, subtypes work with the components of conjunction types, not just the full conjunction. For instance, if Box is a subtype of Container, then Big Box is a subtype of Big Container.
I think for generic types, I'll say that Box[Gum] is a subtype of Container[Candy] if Box is a subtype of Container and Gum is a subtype of Candy. This is extremely up for grabs; it can wreak havoc on collections, but that's because they're mutable. Effes has a much stricter set of restrictions on mutability, which I think eliminate the problem.

Function resolution

In a nutshell: functions are bound to an object according to the most specific type information known about it at compile time. (That last bit is important!) For each function f, find the most specific subtype that defines f, and bind that version of f to that object.
-- Define some types
data Animal; Ascii; Cat; Dog
Dog is Animal
Cat is Animal

-- Define some methods
speak(a:Animal) : String = "Zzzz" -- all animals sleep
speak(c:Cat) : String = "Purr"
speak(c:Ascii Cat) : String = ">^-^<"

-- Invoke them
speak(Cat) -- Purr
speak(Dog) -- Zzzz
speak(Ascii Cat) -- >^-^< 
speak(Dog Cat) -- compile-time error
speak(Ascii Animal Cat) -- >^-^<
The first three of the function invocations should be pretty straightforward. The fourth case is a compile-time error because Dog Cat is "equally subtype-y" to both Dog and Cat, so the compiler doesn't know which method to use. (Maybe I'll provide some overriding behavior in the future, but for now I'm just going to disallow it altogether.)

The last one is an edge case — hey, every language has to have 'em! The reason is that type conjunctions are commutative and associative: (A B) C is the same as B (C A). So, even though Ascii Animal Cat doesn't look like anything meaningful, it can also be expressed as (Ascii Cat) Animal, which is a subtype of both Ascii Cat and Animal (as well as of Ascii and Cat, but that's not important for this example). Since Ascii Cat is itself a subtype of Animal, it is the most specific subtype and thus supplies the function.

One last thing, which is the bit about binding at compile time. Let's tweak the menagerie a bit:
data Ascii; Hungry; Cat

speak(c:Ascii Cat) : String = ">^-^<"
speak(c:Hungry Cat) : String = "Meow!!" -- at 5am...
speak(c:Hungry Ascii Cat) : String = "4d 65 6f 77 21 21"

getKitty() : Cat = Ascii Cat
c1 = getKitty()
c2 = Hungry c1

In this example, c1 has a compile-time type of Cat and a runtime type of Ascii Cat. However, where the object was created (namely, the only expression in the body of getKitty), it had a compile-time type of Ascii Cat, so speak(c1) is bound to speak(c:Ascii Cat) and would result in ">^-^<".

The last line is where things get potentially unfamiliar. Even though c1 has a runtime type of Ascii Cat, it has a compile-time type of just Cat, so Hungry c1 has a compile-time type of Hungry Cat, not Hungry Ascii Cat. This means speak(c2) would result in "Meow!!".

This feels a bit weird at first blush: it's a blend of polymorphism and... something that feels un-polymorphic. I may change my mind, but the reason I like it is that you can know, easily and deterministically, how functions will be bound just by looking at where an object is created. That is, you don't have to reason about where the object came from and what its runtime type might be; you just look at what it is, and there you go.

The basic rule is that you can't know exactly how an object will behave unless you create it.
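Here's a rough Python sketch of "most specific subtype wins; ties are errors." It approximates specificity by the set of traits a signature implies, which is only a loose stand-in for Effes' actual subtype rules, and all the helper names are invented:

```python
# Overload resolution: among the overloads whose required traits the
# object satisfies, pick the most specific one; if two are equally
# specific, reject the call as ambiguous.

SUPERS = {"Cat": {"Animal"}, "Dog": {"Animal"}}  # "Cat is Animal", etc.

def closure(traits):
    """A trait set plus everything those traits imply."""
    out = set(traits)
    for t in traits:
        out |= SUPERS.get(t, set())
    return out

def resolve(runtime_traits, overloads):
    have = closure(runtime_traits)
    viable = [(req, impl) for req, impl in overloads if set(req) <= have]
    if not viable:
        raise TypeError("no matching overload")
    best = max(len(closure(req)) for req, _ in viable)
    winners = [impl for req, impl in viable if len(closure(req)) == best]
    if len(winners) > 1:
        raise TypeError("ambiguous call")
    return winners[0]

overloads = [({"Animal"}, "Zzzz"), ({"Cat"}, "Purr"),
             ({"Ascii", "Cat"}, ">^-^<")]

# With a speak(Dog) overload added, a Dog Cat hits the ambiguity error,
# in the spirit of the post's speak(Dog Cat):
with_dog = overloads + [({"Dog"}, "Woof")]
```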

Monday, July 22, 2013

The path to function resolution is a winding one

Eek, it's been a while since I've written here. I've been putting off a few posts on subtypes and function resolution, because my ideas aren't fully fleshed out. But here's an update, which is mostly some insight into how my thought process is changing.

I originally had a fairly complicated subtyping scheme in mind, because I wanted to support various ways of composing types. A lot of that had to do with the fact that I wanted to be able to write code like:

data Foo(foo : String)
data Bar(bar : String)
data Buzz -- no args

a = Foo("something")
b : Foo = a Bar("another")
c : Foo Buzz = b

The first two assignments are simple, but the third is... weird.
  • a is a Foo
  • a Bar("another") is a Foo Bar, which is a subtype of Foo and can thus be assigned to a Foo-typed reference — in this case, b
  • c : Foo Buzz would be neat if it worked, but it'd require jumping through hoops with the subtyping

The reason I want that last line to work is that Buzz may have some neat behavior, including an override of default behavior. Here's a more concrete example:

getNames() : Stack[String] = ...
useNames(names : QuickSize Stack[String]) = ...
useNames( getNames() )

I'm using the definitions of Stack[A] and QuickSize from a while back. Basically, Stack[A] is Sizeable but has O(N) performance for getting a stack's size, while QuickSize can override a Sizeable object's size method to provide O(1) performance.

The problem is that useNames wants that O(1) performance, while getNames doesn't provide it. My original idea was to allow this with some weird, complicated subtyping rules; that's a bad idea, and I'm abandoning it. My next thought was to allow type declarations to implicitly create composite objects, such that this:

c : Foo Buzz = Foo("789")

...would be sugar for this:

fTmp : Foo = Foo("789")
bTmp : Buzz = Buzz()
c : Foo Buzz = fTmp bTmp

I think I'm going to abandon this as well, because it's still complicated and doesn't buy much. In fact, given that I want type inference to eliminate most explicit typing, putting so much semantic importance on declared types is actually a pretty counterproductive plan.

So here's my new idea: do nothing!

Here's how it works. Let's say you have getNames() and useNames as above. In that case, the call site just has to add the QuickSize itself:

useNames( QuickSize getNames() )

That's not so hard. It works because QuickSize (getNames()) creates a new object: the composition of getNames() : Stack[String] and QuickSize. ("Creates" in the conceptual sense; it may be optimized to just a runtime check.)

Better yet, the useNames method can get rid of the QuickSize requirement altogether, since it's just an optimization that polymorphism can handle within the method:

useNames(names : Stack[String]) =
  names = QuickSize names
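As a rough Python analogue, that call-site (or method-local) composition is like wrapping the slow stack in a size-tracking decorator. SlowStack and the O(1) bookkeeping are invented purely for illustration:

```python
# Composing QuickSize onto a stack ~ decorating it with an object that
# keeps its own size counter, so size() no longer walks the whole list.

class SlowStack:
    """Linked-list-style stack: size() walks every node, O(n)."""
    def __init__(self):
        self.head = None
    def push(self, x):
        self.head = (x, self.head)
    def size(self):
        n, node = 0, self.head
        while node is not None:
            n, node = n + 1, node[1]
        return n

class QuickSize:
    """Wraps any stack, answering size() in O(1) afterwards."""
    def __init__(self, stack):
        self._stack = stack
        self._size = stack.size()  # pay the O(n) walk exactly once
    def push(self, x):
        self._stack.push(x)
        self._size += 1
    def size(self):
        return self._size

def get_names():
    s = SlowStack()
    for name in ("alice", "bob"):
        s.push(name)
    return s

def use_names(names):
    names = QuickSize(names)  # the method adds the optimization itself
    return names.size()
```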

So, that's where I am right now. This simplifies the subtyping system a lot, though I still have to figure out what to do with generics and covariance (Stack[Int] vs Stack[Num]). And those simplifications in turn simplify function resolution, which is where all this is really headed in the end.

Thursday, July 11, 2013

CoffeeScript should handle callbacks better

I want to add a quick addendum to yesterday's post about best practices. I mentioned the staircase problem caused by Node's reliance on callbacks: if one action is a prerequisite for other actions in a method (for instance, you query a database and then act on those results), the rest of that method ends up indented.

Node has another big problem, which is that its target language, JavaScript, is awful. Luckily, one of Node's third-party modules, CoffeeScript, provides a decent language that compiles down into JavaScript. We get all the Node goodness without the JavaScript badness!

Since CoffeeScript is a new language that can evolve quickly, and since one of its main use cases is Node, and since Node relies heavily on callbacks... why not add some sugar to make callbacks a bit nicer? I propose a way to bind callback arguments to left-hand variables. This is actually pretty similar to what Haskell does with its do notation, and for pretty similar reasons.

Let's take a simple, imperative-with-exceptions snippet of code:

try
  res1 = func1 arg1
  [res2a, res2b] = func2 res1
  if res2a is "foo"
    res3 = func3 res2b
    doBar res3
catch err
  handle err

That's pretty simple. Watch how gross it turns when we use callbacks instead of just returning back the results:

func1 arg1, (err, res1) ->
  if err?
    handle err
  else
    func2 res1, (err, res2a, res2b) ->
      if err?
        handle err
      else if res2a is "foo"
        func3 res2b, (err, res3) ->
          if err?
            handle err
          else
            doBar res3

My suggestion is to create some sugar for that. It would look something like this:

do and throw err...
  (err, res1) <- func1 arg1
  (err, res2a, res2b) <- func2 res1
  if res2a is "foo"
    do...
      (err, res3) <- func3 res2b
      doBar res3
  catch err
    handle err

Notice how similar this is to the original, easy-to-read, imperative style. The general idea is simple: the new do... syntax introduces a block of code in which callback variables can be bound on the left-hand side. Every time that happens, it starts a new callback nested in the previous one. If you provide the and throw varname syntax, then it treats left-hand bound variables of this name as errors, and if one ends up being non-null, its callback will run the code in the catch block and nothing else.

I won't pretend this is a small bit of sugar; it probably has some interesting edge cases, and the concept might be a bit weird to grok for someone who's new to it. But it's an elegant solution to a real problem that's pretty significant for a major part of CoffeeScript's target audience.

Wednesday, July 10, 2013

Tutorials need a section on best practices

I took a bit of a break from Effes over the Fourth of July weekend to relax with friends, watch some TV and teach myself node.js by jumping into a small project. The process reminded me of a problem I've hit a few times in various self-learning exercises: docs for languages and frameworks tend to focus almost exclusively on syntax, APIs and other hard facts, but they don't talk much about best practices.

With Node, my question was how to get around the "staircase problem," where indentations march the code off the right side of the screen. Node is non-blocking and uses callbacks for just about anything that goes to the outside world. For instance, its mysql module puts its result set into a callback. That has some neat benefits, but it leads to indentation hell. Here's an example in CoffeeScript:

m = mysqlPool()
m.query getFizzes, (err, fizzes) ->
  if err?
    console.log "Error: #{err}"
  else
    foos = (fizz.foo for fizz in fizzes)
    for foo in foos
      do (foo) ->
        m.query selectBars, [foo], (err, bars) ->
          if err?
            console.log "Error 2: #{err}"
          else
            console.log("Found a bar!", b) for b in bars

That barely even fit in this blog layout; I had to change bar to b in the last line. And while it's true that blog layouts aren't where most coding happens, the staircase problem is annoying and makes code hard to read.

Now, this is not the biggest problem in the world, but it does make code a bit hard to read if the sequence of actions gets much longer (it's only two above). I solved it in one kinda-hairy section (4-long sequence with a conditional, oh man!) with a bit of refactoring, but the resulting code was actually a bit harder to follow because it broke up the logic's natural flow.

There are other ways I could have solved the problem, and there's at least one third-party module that may help, but here's the real point: the Node docs don't help me out. They don't even acknowledge the problem. If the best thing to do is to use async with CoffeeScript, Node should tell me so.

Maybe it's unfair to pick on Node's docs, since there's barely anything as far as a tutorial. You've got a "hello world" example on the front page, a link to the built-in library's APIs, and that's about it. But it's not just Node.

For instance, the intro guide for Ruby on Rails promises to teach the reader "the basic principles of MVC," but it never mentions MVC again, and doesn't give much guidance for when to put logic in a controller vs a model. GitHub doesn't talk about the benefits of forking vs branching — the first three hits for "git fork or branch" on Google are all on Stack Overflow. Guice's tutorial doesn't tell us where to put our injector. And so on.

I'll give the RoR guide some slack, because MVC has been around for a while; maybe they just assumed that most of their readers would already know it. But in general, the more newfangled a technology is, the more its project should tell newbies not just how to use it, but how to use it well.

Tuesday, July 2, 2013

Immutability and randomness

In my last post, I proposed a working definition for mutability: in short, an object is mutable if code can directly affect its state. I mentioned one odd implication of this rule: certain OS calls like getCurrentTime are considered immutable even though their results are different each time. An even trickier problem is that of randomness and pseudo-randomness.

Let's take a random number generator first. There are various ways to produce truly random (or very close to it) number sequences: using the weather, relying on hardware inputs, or even leaning on quantum mechanics. A language can't affect any of these, so random number generators that use them fall within the realm of immutable objects by my definition.

On the other hand, a pseudo-random number generator's sequence is completely based on code, once its initial state is set. The PRNG has some seed, and when asked it does some math to determine a new seed and a result for the random. This means that PRNGs are "normal" objects that have to participate in a language's mutability rules.

For languages that have immutable objects, this makes PRNGs awkward to work with; rather than just asking for the next random number, you have to ask for a pair (randomNumber, nextPrng). If you forget to do this, your pseudo-random sequence will be boringly predictable. Here's an example from Haskell (using ghci):

> prng <- getStdGen
> next prng
(750537749,750578441 40692)
> next prng
(750537749,750578441 40692)
> next prng
(750537749,750578441 40692)

As you can see, this is quite boring; the reason is that the second element of the returned pair (shown here as "750578441 40692") is the crucial next-immutable-PRNG, which we're not using. Here's that code done right:

> prng0 <- getStdGen
> let (r1, prng1) = next prng0 :: (Int, StdGen)
> let (r2, prng2) = next prng1 :: (Int, StdGen)
> let (r3, prng3) = next prng2 :: (Int, StdGen)
> r1
750537749
> r2
1579754317
> r3
1580611703

This is the behavior we want, but look how much more code it took! Even ignoring the ":: (Int, StdGen)" bits (that's Haskell needing some help with its type inference), it took twice the lines of code and required littering the block with one-time-use prngN variables.
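The same state-threading pattern can be sketched in Python with a toy linear congruential generator. The multiplier and increment here are the common Numerical Recipes constants; this is only an illustration, not a serious PRNG:

```python
# A pure PRNG step: no hidden mutation, so each call returns both a
# random value and the next generator state. Forget to thread the state
# and you get the same "random" number forever.

def next_rand(seed):
    """Returns (random value, next generator state)."""
    new_seed = (1664525 * seed + 1013904223) % 2**32
    return new_seed, new_seed

# Done right: thread the state through by hand, one-time-use names and all.
r1, g1 = next_rand(42)
r2, g2 = next_rand(g1)
r3, _ = next_rand(g2)

# Done wrong: reusing the original seed just repeats the first value,
# like calling `next prng` three times in the ghci session above.
boring = [next_rand(42)[0] for _ in range(3)]
```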

This is a case where I think Effes' semi-mutability really shines. Once you get the PRNG object, you can use it all you want, and it'll mutate its internal state to update its seed as it goes. If you pass it to a function, that invalidates (at compile time) your copy; you have to get a different one if you want to continue generating random numbers. This is a tad more cumbersome, but not much (there are ways to alleviate it, such as "forking" the PRNG before passing it to the function).

Monday, July 1, 2013

Immutability isn't an exact science with these clowns

The last few posts have read like a reference manual more than a blog, so I'm going to switch things up a bit and write about immutability at a high level. This post will have an Effes-based perspective, but it won't tie in directly to the language.

The question of the day is: what makes an object immutable? The easiest and (almost) most restrictive definition is that nothing about the object can change — its bytes are determined at construction time and never altered afterwards.

Well, what about an object that lazily caches the result of some method — a lazy-evaluated asString field for the object's toString representation? The first invocation of toString will modify the object's bytes, but I would argue that it's still immutable because its observable behavior has not changed. This is my version of the equivalence principle: if you can't tell from your rocket ship that the universe has changed, then it hasn't in any real sense.
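Haskell offers a concrete version of this equivalence principle: a lazily-computed field is physically overwritten in memory when first forced (the thunk is replaced by its value), yet no caller can observe the change. A small sketch, using Debug.Trace only to reveal when the computation actually runs:

```haskell
import Debug.Trace (trace)

-- The asString field is a lazy thunk. GHC evaluates it on first use
-- and overwrites the thunk with the result: the object's bytes change,
-- but its observable behavior never does.
data Point = Point { px :: Int, py :: Int, asString :: String }

mkPoint :: Int -> Int -> Point
mkPoint x y = Point x y str
  where str = trace "computing string..." ("(" ++ show x ++ "," ++ show y ++ ")")
```

Forcing asString twice on the same Point prints "computing string..." only once (under normal, unoptimized evaluation); the second read sees the cached value, and the Point is observably immutable throughout.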

Let me take it a step further: is stdout mutable? Let's assume that a write to stdout never fails, that the stream is never closed, and that the stdout object doesn't contain state such as "how many bytes have been written so far." In this case, from the language's perspective, the stdout object has no state, which is just about the most immutable a thing can be.

On the other hand, most output streams are mutable, because they can be closed. Streams can also fail due to things like a full disk or dropped connection, but I see those as two different scenarios: in one, the behavior of the object changed due directly to an action the code took, whereas in the other, the behavior changed due to an external event.

This is an important distinction, because otherwise even the original, "most restrictive" definition above is not enough. Let's say we want to define immutability strictly as "the object's behavior never changes," so that an uncloseable output stream is mutable (because it may fail, thus altering its behavior). Now consider an object with no pre-calculated string representation; its toString method generates a new string each time. This requires allocating memory, which itself may fail — so with our super-strict definition of immutability, even this object isn't immutable. With this definition, the only immutable objects are ones with immutable, public fields and nothing else: even a getter could fail due to a stack overflow.

And so, I submit this as a working definition of mutability, at least for Effes: an object is immutable if its behavior can't change in a deterministic way as a direct result of actions expressible in the language.

This definition has an interesting implication: a method that gets the current date/time from the OS is immutable, even though it gives a different result each time! (Let's assume the language doesn't have features built into it that set the system clock.) As weird as this sounds, it actually makes sense, given the problems that I think immutability solves — specifically, no scope can change the date/time for another scope. You won't ever invoke getCurrentTime expecting it to have been set by another scope even though it wasn't; and you won't ever invoke getCurrentTime from one scope expecting it not to have been changed, even though somebody else changed it.

Friday, June 28, 2013

When is a new object really a new object?

Since I just brought up object construction, it's worth discussing when a new object is really created, as opposed to when a previous object can be reused. This will also anticipate some of the function resolution requirements, which I'll introduce in a few posts. As usual, I'll lead by example.

First, some data types to work with:

data Red; Green; Blue nicknamed Color
data Colorful(color : Color)
data Box(size : Int)
data Fragile

Hopefully this should be getting familiar by now. We start with three stateless traits, the union of which is nicknamed Color. We then introduce two stateful traits, followed by a third trait to mark fragility. Now let's create some objects:

a = Colorful Box where
    color = Red
    size = 27
b = Fragile a
b2 : Box = b
c = Fragile b
c2 = Fragile b2
d = Colorful c where
    color = Blue

The first assignment just creates a composite object, as I described in the last post. Easy.

The second assignment (the one to b) creates a new object. It has to, because (as I'll elaborate in the upcoming function resolution post) the object's runtime type is maintained even if the object is "upcasted" later, as happens with b2. We need some way of knowing that b is Fragile while a is not, and the easiest (onlyest?) way to do that is by creating a new object.

But Fragile has no state, which means all Fragile objects are identical. In fact, internally, Fragile could be just a metadata marker on the object. That means that when we re-add the Fragile trait in c = Fragile b, we can actually reuse the same b object. This optimization can be done at compile time.

The c2 assignment is also a no-op, but only at runtime. Even though the compile-time type of b2 doesn't include Fragile, at runtime we can see that the b2 object is already Fragile, and so we simply return it.

Contrast that to the d assignment, which also re-sets the state of one of the component traits. In this case, we do need to create a new object in the general case: objects are immutable, so we can't change the state of c, but we definitely need to store the fact that we now have a blue box where we used to have a red one. (If the new color happens to be the same as the old one, we could in principle reuse the object; I'm not sure whether this check would be cheaper than unconditionally creating a new object.)

Implicit in all of the above is that there is not a way to check for referential equality — that is, that the programmer won't ever care (or know) if the runtime reuses an object. I think this is a good idea even without these optimizations, so I'm going to throw that into the Effes "spec." Truth be told, I've been assuming it all along.

Incidentally, since the compile-time type of c is Fragile Colorful Box, we could have just written d = Colorful c (without the where clause). This would say that we want to change none of the fields, in which case the whole thing is a no-op at compile-time (as c was). If Colorful had more than one field (maybe it has an alpha : Float), we could have used this syntax to change only one of the fields. This would obviously still create a new object.

On the other hand, if we'd written d2 = Colorful c2, then we'd have to provide the field values. c2 has a compile-time type of Fragile Box, and the compiler would require that if we add color to this box, we need to specify which color. The fact that c2 already has a color at runtime is irrelevant; the compiler will require that the program specify values for all Color fields (in this case, just the one), and the runtime will create a new object that overwrites these fields.

Thursday, June 27, 2013

Creating composite objects

Up until now, I've talked mostly about types. In this post, I'm going to take a slight detour and talk about objects. This won't be the most jaw-dropping of posts, but it will be helpful for the discussion on subtypes, which is necessary for talking about function resolution.

I'll start by knocking out two really easy cases: uncomposed types and union types.

data Foo = Foo(foo : Int)
data Bar

a = Foo(123) -- uncomposed type
b : Foo | Bar = Foo(456)

In this example, a has an inferred compile-time type of Foo, and the code creates a new Foo object whose foo value is 123. Yawn. Then, b has an explicit compile-time type of Foo | Bar, and the code again creates a Foo object. Yawn again. But all of a sudden...

c : Foo Bar = Foo(789)

Ah, this is interesting. The object being created is a Foo, and yet it's being assigned to a Foo Bar, which is essentially a subtype of Foo! This would be like a snippet of Java code reading Car c = new Vehicle() (if Car is a subclass of Vehicle). That's not allowed in Java, so why would it be in Effes?

What's really going on in that example is this:

fTmp : Foo = Foo(789)
bTmp : Bar = Bar()
c : Foo Bar = fTmp bTmp

Just as a conjunctive composed type is created by just writing its two component types side-by-side, a composed object is created by writing its two component objects side-by-side. Simple as that!

The original syntax of the c assignment was actually sugar. If the right-hand side of an assignment is of type A, and the left-hand side is of type A B, and B is a type which doesn't require any explicitly-provided state, then an object of type B is assumed. That is, the original c : Foo Bar = Foo(789) was automatically expanded to c : Foo Bar = Foo(789) Bar() because the Bar type doesn't require any explicitly-provided state.

What happens if you do need to add state? You have two options, both analogous to the syntax for constructing uncomposed objects. You can use the parenthetical constructor syntax for each uncomposed object, and just list the objects side-by-side (this is the syntax I've been using in this post so far). You can also use where syntax, listing all of the fields you want to set in the indented field-assignment block. You can prefix any field name with its type to qualify it; this is optional in most cases, but required when field names collide.

data Foo = Foo(foo : Int, fizz : String)
data Bar(bar : Int, fizz : Int)

fooBar1 = Foo(123, "Hello") Bar(456, 789)
fooBar2 = Foo Bar where
    foo = 123 -- could have been 'Foo foo'
    Foo fizz = "Hello"
    bar = 456 -- could have been 'Bar bar'
    Bar fizz = 789

Note that the order of these fields doesn't matter — you can interleave them, whatever. Each field's name unambiguously identifies it, and they all belong to the single type Foo Bar, so there's nothing special about any particular order. That said, it's probably good form to group fields by component type.

I'm considering adding an additional syntax, which really treats Foo Bar as the new, separate type it is:

fooBar3 = (Foo Bar)(foo=123, Foo fizz="Hello",
                    bar=456, Bar fizz=789)

This has a certain symmetric elegance to it, but it's kinda ugly and potentially confusing. I was going to disallow it, but I think I have to let it in because of nicknames. While the example above is ugly, this starts to make sense:

nickname FooBar = Foo Bar
fooBar4 = FooBar(foo=123, Foo fizz="Hello",
                 bar=456, Bar fizz=789)

I suppose I could allow that syntax only if the composite type has been nicknamed (and is being constructed as such), but this feels like it would complicate the language for not much gain. Better to say that the (Foo Bar)(...) syntax is allowed but discouraged. At least, I hope that's better.

Tuesday, June 25, 2013

Sugar and the "data" keyword

In my last post, I discussed the data keyword as sugar for stateful, struct-like traits. This is mostly pure sugar, although it's also the only way to define a stateless, methodless trait (like Nothing). I'd like to justify those decisions, and also introduce two more pieces of sugar.

As I touched on in an earlier post, there is a balance to be had between deep abstractions vs. ease of use. Syntactic sugar, if designed correctly, can help that balance by highlighting certain aspects of an abstraction. This is useful both for the language design, which can now have a high-level abstraction whose concrete implications are clearer; and for an individual program's design, where it's more obvious which part of the abstraction is important for a given type or object.

The data syntax aims to do this by recognizing that a type whose only functions are getters feels different than one with abstract virtual methods. The first is used purely to store state, whereas the second doesn't even care about state directly — just about behavior. Effes combines both of these concepts into traits, so sugar can help clarify which aspect is more important for a given type. When you want to focus exclusively on the state-storing ability of a trait, data is a better option, and when you need the behavior-defining aspect of a trait, the default syntax is a better (well, only) option.

As for requiring data syntax for stateless, methodless traits, this is partly pragmatic and partly philosophical. Given that the syntax for an abstract trait is something like this:

Sizeable:
  size : Int

... how would Nothing look with this syntax? It'd probably be something like one of these:

Nothing:
-- rest of your code here, unindented

Nothing -- no colon
-- rest of your code here, unindented

Nothing:
    pass


The first two feel like weird, dangling tokens; I don't like them from an aesthetic perspective. The last one borrows from Python's pass statement, which inserts a runtime no-op. It was designed for exactly the kind of situations Nothing would have using default trait syntax: the language requires something, but you want nothing. I've always thought this was a bit ugly. Within the context of a function, one could just use return instead; and within the context of a class definition, pass suggests that something weird is going on — a methodless, init-less, stateless class is a weird beast. In Effes, such a thing is indeed useful, but it feels very much like a data type; so, rather than introducing new syntax for it, or generalizing the default syntax to allow weird, ambiguous-looking constructs (like the first or second examples above), I just require the data syntax, which does the job just fine.

Finally, as promised, here are two more minor pieces of sugar for the data syntax. Firstly, you can define multiple types on one line, separated by a semicolon. And secondly, you can follow a data definition with "nicknamed foo" to automatically create a nickname for the union of all of the types defined on that data line. So this:

data LT
data GT
data EQ
nickname Comparison = LT | GT | EQ

...can be expressed more succinctly as:

data LT; GT; EQ nicknamed Comparison

Again, this tries to smooth the transition between abstractions and ease of use. One common use case for a type system is to define enum types; a comparison is either less than, greater than, or equal to. In a language like Haskell, these are different constructors of the same type. In Effes, they would be defined as the union type of the three traits, and one would almost definitely want a nickname for that union type. This sugar allows us to emphasize the enum-like characteristics of the union of stateless, methodless traits.
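For comparison, this is the same shape as Haskell's built-in Ordering type, which the Prelude defines as data Ordering = LT | EQ | GT. A hypothetical user-defined analogue (constructors renamed to avoid clashing with the Prelude's):

```haskell
-- Enum-like type: three stateless constructors. This is the Haskell
-- analogue of Effes' "data LT; GT; EQ nicknamed Comparison".
data Comparison = Less | Greater | Equal
  deriving (Show, Eq)

compareInts :: Int -> Int -> Comparison
compareInts a b
  | a < b     = Less
  | a > b     = Greater
  | otherwise = Equal
```

In Haskell these are different constructors of one type; in Effes they would be three traits joined by a nicknamed union, but the enum-like feel is the same.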

The use case for this sugar is definitely enum-like types, so I'm tempted to declare that the sugar only works if all of the data types are stateless. This feels slightly more hacky, but it's also easier to reverse: generalizing the syntax will be backwards compatible, whereas restricting it (from all data types to just stateless ones) in the future could break code. I don't anticipate that backwards compatibility will be a major problem for Effes, but I think I'll take the safer approach for now, as an exercise in language evolution if nothing else.

Monday, June 24, 2013

Introducing: Effes traits

Over the past few posts, I've gone over some of my ideas for data types and assumption-types in Effes. The latter were originally intended to be a generalization of the standard concept of statically-checked types, but in my last post I talked about giving up on that idea. In the post before that, I ended things by saying that I wanted to unify some of my types of types, and I also got into some of the function resolution issues that have come up. In this post, I'll try to unify all of those concepts under a new concept that I'm calling a trait.

Effes' traits are different from normal traits in two important ways. Firstly, they're not just add-ons for fundamental types — they are fundamental types. Secondly, whereas traits in statically compiled languages are usually added to types ("Cars are Resellable"), in Effes they can be added to individual objects ("this Car is Resellable"). This is to encourage a dynamic, composition-focused type system.

Here are some examples of traits, using examples I've shown before. In many cases, the examples are unchanged.

-- abstract trait
Sizeable:
  size : Int

-- another abstract trait
Stack[A]:
  push A : Stack[A]
  pop : Stack[A]
  head : Possible[A]

-- "class-based" implementation of Sizeable
Stack[A] is Sizeable where
  size = case head of
    Nothing : 0
    _ : 1 + pop

-- "object-based" implementation of Sizeable
QuickSize Stack[A]:
  let s : Int = size -- "size" given by "Stack[A] is Sizeable"
  push _ : QuickSize Stack[A] where
    s' = size + 1
  pop: QuickSize Stack[A] where
    s' = size - 1
  size = s

We start off with two abstract traits, which are similar to interfaces in Java. We then add a "class-based" implementation of Sizeable, which says that anything which is Stack[A] is also Sizeable. This is class-based because it applies to all objects whose type is (or includes, in a composite type) Stack[A]. Next, we introduce a (conjunctive) composite type QuickSize Stack[A] that provides an O(1) implementation of size.

My previous posts had a data type, but I'm now unifying those with traits. This declaration is still legal:

data One[A] = value : A

... but it is now just sugar for a stateful trait:

One[A]:
  let value : A
  value : A = value

Fields in a stateful trait are private to the scope that declared them ("QuickSize Stack[A]:" in the first example, "One[A]:" in the second), and within that scope, names resolve to fields before functions (that is, a field always hides a function).

While we're on the topic of stateful traits' fields, note that they start with the keyword let and may optionally be assigned a value. If they're assigned a value, this value can't be set except within the scope that defined the field; if they're not, this value must be set when the object is created. This can be done using the syntax TypeName(fieldName = value) or TypeName where fieldName = value. In the first form, multiple fields are delimited by commas, and the field name and = operator are optional if there is exactly one field. In the second form, each assignment must be on a new line, indented relative to the first type name, but the whole thing can be on one line if there is exactly one field. Some examples:

a = One(value=27)
b = One(27)
c = One where
  value = 27
d = One where value = 27
e = Pair(first=23, second=8)
f = Pair where
  first = 23
  second = 8
-- notice that the two "inlined" forms are not
-- available to Pair because it has more than
-- one field

The data syntax is mostly just sugar, but it does allow the creation of stateless, methodless traits like data Nothing.

With this new, unified type system, the function resolution rules I described earlier no longer apply. I think a new system shouldn't be hard, but I'll leave that for the next few posts in the interest of keeping this one from snaking off too far.

Wednesday, June 19, 2013

Assumption by any other name make an ass of u and mptions

Until now, I've been using the term "assumption" to describe several ideas: interfaces that define behaviors, stateful traits that include implementations, polymorphism and more. Why haven't I used more standard terminology?

Before I get there, a slight diversion. People who work in Java will sometimes ask a question along the lines of, "I want a Map<K,V> where the keys can be of any type, and the values will depend on the key type." For instance, an Integer key would correspond to a value of MyVal<Integer> (for both gets and puts), a String key would correspond to MyVal<String> values, and so on.

The short answer is that you can't define a Map of that sort in Java. The next question is: why not? Isn't that reasonable? And then the real answer comes in: a type system is just a limited theorem checker, and we've known since Turing that we can't come up with an algorithm that will prove everything you want to prove; so we'll always have to draw a type system's line at some place, and this is where we happened to draw it. If Java allowed for those kinds of types, there'd always be a slightly more sophisticated use case that you could ask about and wonder why not.

That got me thinking: what if I designed a type system that, though still limited, had an expanded scope? Most type systems define what types a given object can work with (based on its type). What if I designed one that defined what values an object can work with?

Take a quicksort as an example. The "main" recursive step will look something like:

return (quicksort left) + pivot + (quicksort right)

Well, what happens if you get confused (it's late and you're watching TV while coding) and instead flip the two sides:

return (quicksort right) + pivot + (quicksort left)
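For context, here's how that recursive step sits in a complete quicksort (a minimal, illustrative Haskell sketch):

```haskell
-- Minimal quicksort; the first line of the body is the "main recursive
-- step" from the text. Swapping 'left' and 'right' there would still
-- typecheck, which is exactly the kind of bug a value-aware type
-- system would be able to catch.
quicksort :: Ord a => [a] -> [a]
quicksort [] = []
quicksort (pivot:rest) =
  quicksort left ++ [pivot] ++ quicksort right
  where
    left  = [x | x <- rest, x <  pivot]
    right = [x | x <- rest, x >= pivot]
```

Both versions have the same type, [a] -> [a], so a conventional type checker is equally happy with either.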

My idea was that you could define an assumption called Sortable that would describe which values can be, for instance, pushed into a stack:

Sorted Stack[A] requires A :: Ordered st
    push a requires
        size == 0 or
        a >= pop

Now, in order to push into a Sorted Stack[A], firstly the A type has to have an ordering defined (as it does in any statically typed language that would deal with sorted structures), and secondly, you must have proven in the code that the input is >= the stack's head. For instance:

A :: Ordered ->
badPush(s :: Stack[A], v :: A) :: Sorted Stack[A] st
    push s v -- compilation error!

A :: Ordered ->
tryPush(s :: Stack[A], v :: A) :: Sorted Stack[A] st
    if (size s == 0) or (v >= pop s)
        push s v

In the first example, the code couldn't prove that the assumption required of a Sorted Stack[A] is true, so the code failed to compile. In the second example, it could prove it, and everything's fine.

So, that's why I went with the term "assumption." The idea was that the basic components of a program wouldn't be types, but a bunch of assumptions of a more general nature asserted on objects.

The more I played with it, though, the less promising it looked. All but the most trivial examples became too difficult to express, and I kept needing an "escape hatch" for assumptions that would be cumbersome or impossible to prove in the code. The escape hatch would be in the form of telling the compiler, "I know you can't prove this, but trust me on it." Basically, I felt that the escape hatch would be needed so much that it would become the primary form of typing, in which case the language effectively becomes a dynamically typed language that requires static compilation — the worst of both worlds.

So, I've decided to abandon this effort, and instead focus on a more standard definition of types. I'm going to call these traits, though they're different from traits in some other statically compiled languages in that they're attached to individual objects, and not to other types (as they are in Scala, for instance).