Testing equivalence of rationals using natural encodings

I’ve recently been spending a decent amount of my free time in self-study of Ulrich Kohlenbach’s Applied Proof Theory: Proof Interpretations and their Use in Mathematics.  As someone who has no experience with Constructivism, it is definitely challenging, but there is no better way to learn than hands-on.  Here I explore one of the topics the book mentions – encoding rational numbers into the naturals – using Haskell.

The idea is simple – take operations on $\mathbb{Q}$ and transform them into operations on $\mathbb{N}$.  We’ll use a special encoding to assign a unique $n \in \mathbb{N}$ for each $q \in \mathbb{Q}$ (yes, there are just as many natural numbers as there are rationals!).  We can then perform operations (in this exercise, equivalence tests) in the encoded set ($\mathbb{N}$) and have the results be valid in the set we care about ($\mathbb{Q}$).  Some of the steps below will seem silly, but this is just an exercise in theory.

The gist is available here.

Before we jump in, we’ll establish some prerequisite machinery for our exploration.  First, we’ll take the Nat instance from the exploration on infinite search and expand it a bit using the nomenclature in Kohlenbach.  Next, we’ll borrow the least function from Seemingly Impossible Functional Programs, slightly modified:

least :: (Nat -> Bool) -> Nat   -- smallest Nat satisfying p; diverges if no such Nat exists
least p = if p 0 then 0 else 1 + least (\n -> p (1 + n))

Lastly, we’ll build the one-point compactification of $\mathbb{N}$ for use in the infinite search monad.  The premise of this monad is that by defining an infinite set, an algorithm for exhaustively searching that set is automatically generated.

natural :: Set Nat
natural = union (singleton Z) (S <$> natural)

Now we’re ready for math.

Rationals can be thought of as pairs of integers, one being the numerator $num$ and the other the denominator $den$.  There are two challenges in trying to encode this information into $\mathbb{N}$.

  1. The encoding has two inputs and one output.
  2. Rationals have sign, but the naturals have no sign.

We’ll tackle 2 first.  If $q$ is positive, its numerator $num$ maps to the even natural $2|num|$; if $q$ is negative, it maps to the odd natural $2|num| - 1$.  In this way, the sign of $q$ is encoded in the parity of a natural $n_{num}$.  The denominator $den$, on the other hand, maps to a separate natural $n_{den} = |den| - 1$.
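
For example (my own arithmetic, just to illustrate the mapping): the numerator of $q = -3/2$ maps to the odd natural $n_{num} = 2 \cdot 3 - 1 = 5$, while its denominator maps to $n_{den} = 2 - 1 = 1$.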

Now that we have $num$ and $den$ mapping to two naturals, we’ll map the pair of naturals $(n_{num}, n_{den})$ to a single natural $q_c \in \mathbb{N}$, where $q_c$ denotes an encoded rational.  For this we’ll use the Cantor pairing function:

$$j(x^0, y^0) := \begin{cases} \min u \le (x+y)^2 + 3x + y \; [2u = (x+y)^2 + 3x + y] & \text{if such a } u \text{ exists} \\ 0 & \text{otherwise} \end{cases}$$

The function numden2QCode below performs this encoding, taking a rational’s numerator and denominator and returning a natural number wrapped in the type QCode.

numden2QCode :: Integer -> Integer -> QCode
numden2QCode num den  | (num >= 0 && den > 0) || (num <= 0 && den < 0)  = j (fromInteger $ 2*abs(num)) (fromInteger $ abs(den)-1)
                      | otherwise                                       = j (fromInteger $ 2*abs(num)-1) (fromInteger $ abs(den)-1)

j :: Nat -> Nat -> QCode
j x y = if k <= t then QCode k else QCode 0
        where   k = least (\u -> 2*u == t)
                t = (x+y)*(x+y) + 3*x + y
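
As a quick sanity check (my own arithmetic, not from the book): the rational $1/2$ is positive, so $n_{num} = 2|1| = 2$ and $n_{den} = |2| - 1 = 1$.  Pairing gives $t = (2+1)^2 + 3 \cdot 2 + 1 = 16$, the least $u$ with $2u = 16$ is $8$, and so numden2QCode 1 2 yields QCode 8.  Incidentally, $t = (x+y)^2 + (x+y) + 2x$ is always even, so the QCode 0 fallback branch is never actually taken.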

Testing for equivalence of rationals using QCodes might seem like a straightforward matter of comparing the QCodes themselves, but a consequence of the numden2QCode encoding we’ve developed is that $(num, den) = (1, 2)$ has a different QCode than $(2, 4)$, even though these rationals are equivalent.  To test for equivalence, we’ll project the QCode back into the pair $(n_{num}, n_{den})$.

$$ j_1(z) := \min x \le z \, [\exists y \le z \, (j(x, y) = z)] $$

$$ j_2(z) := \min y \le z \, [\exists x \le z \, (j(x, y) = z)] $$

Here, $z$ is our rational encoding $q_c$; $j_1$ returns $n_{num}$ and $j_2$ returns $n_{den}$.  For example,

>j1 $ j 2 3
S (S Z)

>j2 $ j 3 4
S (S (S (S Z)))
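
The projections j1 and j2 aren’t shown above, but they can be transcribed almost directly from their definitions.  Here is a minimal sketch of my own; the unQCode accessor and the bounded existential helper are my additions, not part of the original gist:

unQCode :: QCode -> Nat
unQCode (QCode n) = n

-- Is there a y with Z <= y <= bound satisfying p?
existsUpTo :: Nat -> (Nat -> Bool) -> Bool
existsUpTo bound p = go Z
    where go n | p n        = True
               | n == bound = False
               | otherwise  = go (S n)

j1 :: QCode -> Nat
j1 q = least (\x -> existsUpTo z (\y -> unQCode (j x y) == z))
    where z = unQCode q

j2 :: QCode -> Nat
j2 q = least (\y -> existsUpTo z (\x -> unQCode (j x y) == z))
    where z = unQCode q

Note that the comparisons here are between the underlying Nats via unQCode, since we’re about to give QCode an Eq instance that means something else entirely.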

Lastly, to test equivalence, we’ll make QCode an instance of the Eq typeclass.  The trick here is that we need to handle odd and even $n_{num}$ cases separately, since the parity tells us whether the rational is positive or negative.  Once we back out $n_{num}$ and $n_{den}$, we recover num/den form and equate using cross multiplication.  One wrinkle: Nat has no division, so instead of halving $n_{num}$ on each side, we compare the cross products with both sides carrying the same factor of two, which preserves the equality.

instance Eq QCode where
    -- Cross multiply num1*den2 == num2*den1.  Both sides carry the same
    -- factor of two, so we can skip halving n_num (Nat has no division).
    (==) n1 n2 | evenNat j1n1 && evenNat j1n2 = j1n1*(j2n2 + 1) == j1n2*(j2n1 + 1)
               | oddNat  j1n1 && oddNat  j1n2 = (j1n1+1)*(j2n2 + 1) == (j1n2+1)*(j2n1 + 1)
               | otherwise                    = False
               where j1n1 = j1 n1
                     j1n2 = j1 n2
                     j2n1 = j2 n1
                     j2n2 = j2 n2

Finally, we can equate some rationals!

print $ numden2QCode (-1) 1 == numden2QCode (-2) 2 -- True
print $ numden2QCode (-1) 1 == numden2QCode (-3) 3 -- True
print $ numden2QCode (-1) 2 == numden2QCode 2 (-4) -- True
print $ numden2QCode 1 2    == numden2QCode 2 4    -- True
print $ numden2QCode 1 2    == numden2QCode 1 3    -- False

Note that you’ll probably want to compile these into an executable like I did, because they are slow otherwise.  In executable form, they finish in about 1.5 seconds.  In a future post, I’ll explore why a few changes to the Num instance for Nat reduced execution time from 88 seconds down to 1.5.


Natural numbers and the infinite search monad

My previous post explored the usage of infinite search techniques to exhaustively search the natural numbers to find a witness satisfying a total predicate, or decide when no such witness exists.  Turns out, these “searchable” spaces form a monad, available in the infinite search package on Hackage.

To use the monad for search across the naturals, we have to first define the set of naturals.  Defining the set is all that is required to start performing infinite searches.  Here is one of my failed attempts at defining the naturals:

natural :: Set Nat
natural = foldr1 union $ map singleton $ iterate S Z

What’s wrong with this definition?  It defines the set {Z, S Z, S (S Z), …}, which represents the entire space of naturals.  Let’s try to do a search:

> search natural (\x -> x+2 == 1)
Nothing

> search natural (\x -> x*2 == 1)
(infinite loop)

Why does the second predicate above fail here, when it works just fine using search’ and lyingSearch from the previous post?  The issue is that the search’ function searches across the one-point compactification of the naturals (i.e. including infinity).  Our definition of natural above doesn’t include infinity, because every element of the set ends in Z due to the iterate function.  Without infinity, the set is not compact, and hence not searchable.  Thanks go out to Luke Palmer for providing the correct (and prettier!) definition:

natural :: Set Nat
natural = union (singleton Z) (S <$> natural)

This recursive definition includes infinity in the set, so now…

> search natural (\x -> x*2 == 1)
Nothing
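
Since these searchable sets form a monad, we can also build and search product spaces with do-notation.  A small sketch of my own (assuming the Monad instance the package provides for Set):

naturalPairs :: Set (Nat, Nat)
naturalPairs = do
    x <- natural
    y <- natural
    return (x, y)

> search naturalPairs (\(x, y) -> x + y == 3)

This should return Just some pair summing to 3; which witness comes back depends on the order in which the monad searches.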


Infinite search explained

The Haskell subreddit recently had a post about the cleverest piece of Haskell code you know, and the comments ignited a discussion of Martin Escardo’s exhaustive search of the Cantor set, which can be thought of as the set of all infinite chains of 1’s and 0’s.  The techniques are exciting, but initially confounding for many, including me.  I set out to fully understand the processes involved and devoured as many of the sparse resources as I could find.  I didn’t really start to get a mental grip on it until I started playing with code – specifically Luke Palmer’s infinite search code.  This code doesn’t search the Cantor set but instead exhaustively searches the natural numbers, and it’s just a little easier to dissect mentally as a precursor to the Cantor set.  I think I now know how it works, what its limitations are, and how to break it.  My goal is that by reading this post, you’ll come away with the same feeling.

Disclaimer: I am very much a Haskell newbie and not a computer scientist by training.  I just think this stuff is cool.

Representation of natural numbers

First, let’s cover the representation of the domain, the natural numbers, which is defined recursively from Zero:

data Nat = Zero | Succ Nat

That is, a natural number is either Zero, or it is the Succ (successor) of another natural number.  So for instance 0 = Zero, 1 = Succ (Zero), 2 = Succ (Succ (Zero)), etc.  While this may seem cumbersome at first, the key advantage of representing the domain this way is that comparisons can proceed lazily, one constructor at a time.  Consider the comparison

Succ (Zero) /= Zero

which represents that 1 is not equal to 0, which is true.  Now consider what happens if we reformulate the expression as

Succ (?) /= Zero

where ? can be Zero, Succ (Zero), etc.  By examining only the heads of each side of the expression, we are no longer comparing 1 with 0; we are comparing every number greater than or equal to 1 with 0, and like before, the expression is still true.  If we compare

Succ (?) /= Succ (?)

then by examining just the head, we can only claim that the two values being compared are possibly equal.  We cannot even claim to know which values are being compared, except to say that neither is Zero.  So in this case, we have to examine more than just the heads of each expression to determine equivalency.  We in fact have to examine as much of the expression as necessary to find a Zero on either the left side, the right side, or both.  If a Zero does not exist, such as in

infinity = Succ infinity

infinity == infinity

then equivalency can never be determined.  If you try to execute this in GHCi, you’ll get an infinite loop.

These comparison advantages are the reason why Palmer says that in order to use these infinite search techniques, the domain must be representable as a recursive polynomial type, and why Escardo says that the predicate must be computable.  These conditions allow us to construct seemingly infinite comparisons as long as one side of the comparison reduces to something finite eventually.

Testing equivalence

I implemented my own instance of Eq for Nat instead of using Haskell’s derived implementation.  You can see it in the full code listing at the bottom if you’re curious.  Let’s test out some comparisons:

*Main> Succ (Succ (Succ (Zero))) == Succ (Zero)
-- Comparing Succ == Succ --> Possibly equal
-- Comparing Succ == Zero --> Not equal
False

Here we are comparing 3 and 1.  The salient point here is that we can determine that the two values are not equal by only comparing the first two elements of each representation.  The same is true when comparing to infinity:

*Main> infinity == Succ (Zero)
-- Comparing Succ == Succ --> Possibly equal
-- Comparing Succ == Zero --> Not equal
False

Expanding the Num instance

Palmer’s Num instance includes the operations (+), (*), and fromInteger.  The purpose of the first two is clear.  fromInteger takes an integer literal and transforms it into our Nat representation so we can compare it with another Nat representation:

*Main> fromInteger 2::Nat
-- Transforming 2 into Nat form.
---- Returning Succ as element of the transformation
---- Returning Succ as element of the transformation
---- Returning Zero as element of the transformation
Succ (Succ Zero)
*Main> Succ (Zero) == fromInteger 2
-- Transforming 2 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Zero == Succ --> Not equal
False

The operation (-), however, is not defined.  We can define it by adding the following to the Num instance:

Zero   - y = Zero
Succ x - Zero = Succ x
Succ x - Succ y = (x - y)

This definition of (-) carries two caveats.  First, Zero minus anything is Zero.  Consistently, any n - (n + k), where k >= 0, is Zero.  This can lead to unexpected behaviors like

Zero - 130 == Zero
-- Comparing Zero == Zero --> Equal
True

so we need to ensure we are using it appropriately given the properties we have defined for it.

The second caveat is that, unlike addition, subtraction between two Nats cannot be done lazily.  With addition, we know that for any x and y

Succ x + y = Succ (x + y)

so we don’t need to know what x and y are in order to start returning the result because we know it begins with Succ.  However, with subtraction we have

Succ x - Succ y = (x - y)

so we have no idea whether the result will start with Succ or Zero until we traverse x and y and find a Zero in one of them.  Indeed, something like

infinity - infinity

will never return a result.
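
By contrast, addition’s laziness means a comparison can terminate even when one operand is infinite.  A quick check of my own (trace output omitted):

*Main> Succ Zero == infinity + Succ Zero
False

Because infinity + Succ Zero produces a Succ immediately, the comparison can proceed, and the Zero on the left settles it after two steps.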

The search’ function

With the background out of the way, let’s examine the algorithm itself, starting with the search’ function.  You’ll notice that my code throughout this post is slightly different than the Palmer code and littered with trace calls.  These will be useful later when we start exploring examples.

search' fTop    | trace "Starting search.  Starting top level predicate call (TLPC)" fTop $ bestGuess = Just bestGuess
                | otherwise       = Nothing
        where
        bestGuess = trace "The TLPC has no defined x yet, so start building the best guess with lyingSearch" lyingSearch $ fTop

The important thing to note about search’ is that all it does is apply the predicate fTop to a single natural number, bestGuess.  That’s all.  In fact, the entire infinite search algorithm begins when this single application of fTop begins to be evaluated, and ends when fTop finishes its evaluation (the Maybe wrapping notwithstanding).  I changed the name of the predicate in the search’ function to fTop to try to capture this special significance.

The lyingSearch function

The bestGuess that search’ applies fTop to is merely the result of the lyingSearch call made above.  So what does lyingSearch do?  It recursively builds the best-guess solution for satisfying fTop while fTop is being evaluated. Let’s examine the code.

lyingSearch :: (Nat -> Bool) -> Nat
lyingSearch fRec = if trace "Can we finish the best guess with a Zero and succeed?  Let's try it" (fRec Zero)
                    then trace "Yes, so end best guess search with Zero.  Next, complete TLPC" Zero
                    else trace "No, so continue best guess with Succ and do lyingSearch again" (Succ (lyingSearch (fRec . Succ)))

The input argument to lyingSearch, fRec, does two things:

  1. Includes the predicate we are trying to satisfy
  2. Records the progress of the best-guess solution

The record keeping is achieved by composing fRec with an additional Succ each time lyingSearch decides that the next element of the best-guess solution is Succ.  Thus, when search’ first calls lyingSearch with fTop, the best guess is empty because no Succs have been tacked on yet.

Let’s go through lyingSearch step by step.  Assume that fRec is fTop composed with zero or more Succs that represent the current state of the best guess.  By applying fRec to Zero, we are testing whether or not the next element of the best guess should be Zero.  If true, then the next element should indeed be Zero because the predicate fTop will be satisfied.  If false, then we just assume that the rest of the best guess must be Succ plus whatever is produced by another call to lyingSearch.  Assuming this about the best guess does not guarantee that the predicate will be satisfied; we just know that it won’t be satisfied with Zero.  In this way, lyingSearch could eventually return a result that does not satisfy the predicate (a lie).

Example: finding a solution

Let’s go through a few search examples and examine the traces that are produced step by step.  I’ve included some additional annotations to the trace output in parentheses to add a little specificity where needed.

predFxn1 x = let lhs = x*x
                 rhs  = 4
             in trace ("-- Comparing LHS with RHS") (==) lhs rhs
*Main> search' predFxn1
Starting search. Starting top level predicate call (TLPC)
-- Comparing LHS with RHS (This is part of the TLPC)
The TLPC has no defined x yet, so start building the best guess with lyingSearch
Can we finish the best guess with a Zero and succeed? Let's try it
-- Comparing LHS with RHS (This begins the application of fRec to Zero)
-- Transforming 4 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Zero == Succ --> Not equal (This ends the application of fTop to Zero)
No, so continue best guess with Succ and do lyingSearch again
-- Transforming 4 into Nat form. (This is part of the TLPC)
---- Returning Succ as element of the transformation (The first element of the best guess is Succ!)
-- Comparing Succ == Succ --> Possibly equal (4 begins to be compared to (best guess)^2. Best guess so far is just Succ)
Can we finish the best guess with a Zero and succeed? Let's try it
-- Comparing LHS with RHS (This begins the application of fTop . Succ to Zero)
-- Transforming 4 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Zero == Succ --> Not equal (This ends the application of fTop . Succ to Zero)
No, so continue best guess with Succ and do lyingSearch again
---- Returning Succ as element of the transformation (The second element of the best guess is Succ!)
-- Comparing Succ == Succ --> Possibly equal (Continues the comparison of (best guess)^2 to 4 in TLPC)
Can we finish the best guess with a Zero and succeed? Let's try it
-- Comparing LHS with RHS (This begins the application of fTop . Succ . Succ to Zero)
-- Transforming 4 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Zero as element of the transformation
-- Comparing Zero == Zero --> Equal (This ends the application of fTop . Succ . Succ to Zero)
Yes, so end best guess search with Zero. Next, complete TLPC
---- Returning Succ as element of the transformation (Continue and complete comparison of (best guess)^2 to 4)
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Zero as element of the transformation
-- Comparing Zero == Zero --> Equal
Just (Succ (Succ Zero))

This is all well and good, but how does infinite search determine that there is no solution?

Example: finding no solution

No solution exists when we try to satisfy an impossible predicate.  We saw in the last example that whenever we try to end the best guess with Zero but fail, we assume the right path forward is to use a Succ instead.  If the predicate always returns false, then our best guess becomes an infinite chain of Succ’s.  So then how does the algorithm know to give up the search?

predFxn2 x = let lhs = x*x
                 rhs  = 3
             in trace ("-- Comparing LHS with RHS") (==) lhs rhs
*Main> search' predFxn2
Starting search.  Starting top level predicate call (TLPC)
... Uninteresting sections of trace omitted here.  Best guess is now Succ (Succ (Zero)) ...
Can we finish the best guess with a Zero and succeed?  Let's try it
-- Comparing LHS with RHS
-- Transforming 3 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Zero as element of the transformation
-- Comparing Succ == Zero --> Not equal (This is the first time (best guess)^2 overshoots 3)
No, so continue best guess with Succ and do lyingSearch again (Best guess is now Succ (Succ (Succ (Zero))))
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal (This is part of the TLPC)
Can we finish the best guess with a Zero and succeed?  Let's try it
-- Comparing LHS with RHS
-- Transforming 3 into Nat form.
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Succ as element of the transformation
-- Comparing Succ == Succ --> Possibly equal
---- Returning Zero as element of the transformation
-- Comparing Succ == Zero --> Not equal
No, so continue best guess with Succ and do lyingSearch again (Best guess is now Succ (Succ (Succ (Succ (Zero)))))
---- Returning Zero as element of the transformation (We've reached the end of 3's Nat representation in the TLPC)
-- Comparing Succ == Zero --> Not equal (This is the conclusion of the TLPC)
Nothing

What’s interesting here begins at the comparison marked above as the first time (best guess)^2 overshoots 3.  The right-hand side of the comparison, 3, is not changing, and we know the operation x*x results in a longer and longer series of Succs.  With this information, you and I could conclude that this predicate will never be satisfied.  The algorithm, however, doesn’t conclude that no solution can be found until the same comparison fails in the TLPC, at the very end of the trace.

To summarize, the infinite search algorithm uses the predicate in two ways:

  1. Deciding what the next element of the best guess should be
  2. Deciding whether or not the best guess succeeds or fails

Usage 2 imposes a constraint on the structure of the predicates we can look to satisfy.  Specifically, the algorithm can only declare that no solution exists if the predicate evaluates to False when given an infinite chain of Succs (i.e. infinity).

Breaking infinite search

With the constraint on the predicate in mind, let’s try to break infinite search.  Here are a couple of simple predicates that work just fine, one with a solution and one without.

predFxn3 x = x*x == 6 - x
predFxn4 x = x*x == 7 - x
*Main> predFxn3 infinity
False
*Main> predFxn4 infinity
False

Now we know that for either of these predicates, search’ will give up the search if no solution exists.

*Main> search' predFxn3
Just (Succ (Succ Zero))
*Main> search' predFxn4
Nothing

Great.  What about these modified predicates, again one with and one without a solution?

predFxn5 x = x*x == 6 + x
predFxn6 x = x*x == 7 + x

When evaluated with infinity, both of these predicates generate an infinite loop because, at the end of the day, we’re evaluating infinity == infinity.  However, predFxn5

*Main> search' predFxn5
Just (Succ (Succ (Succ Zero)))

returns its solution, while search’ predFxn6 looks for one forever and never gives up.  We can try to be clever and reformulate the predicate like this

predFxn7 x = x*x - x == 7

to eliminate infinity from one side, but all we’ve done is replace the non-terminating operator (==) with the non-terminating operator (-).  Alas, predFxn7 will not work, either.

Other explorations

Hopefully I’ve been able to make infinite search a little less confounding, or at least this particular application of it.  Here are some other questions to ponder for the curious mind.  I’ll be pondering these myself.

  • Can Nat infinite search be modified to handle predFxn6 or predFxn7?
  • Can these techniques be used to perform epsilon-delta proofs on compact subsets of the reals?
  • Must all lists contain at least three items?

Code listing

{-- Playing with Haskell infinite search --}

import Debug.Trace

data Nat = Zero | Succ Nat
    deriving (Ord, Show)
infinity = Succ infinity   -- This line is interesting but not necessary

fromInteger' 0 = trace "---- Returning Zero as element of the transformation" Zero
fromInteger' n = trace "---- Returning Succ as element of the transformation" Succ (fromInteger' (n-1))

instance Num Nat where
    Zero   + y = y
    Succ x + y = Succ (x + y)

    Zero   - y = Zero
    Succ x - Zero = Succ x
    Succ x - Succ y = x - y

    Zero   * y = Zero
    Succ x * y = y + (x * y)

    -- abs and signum aren't used in this post, but Num expects them
    abs n = n
    signum Zero = Zero
    signum _    = Succ Zero

    fromInteger n = trace ("-- Transforming " ++ show n ++ " into Nat form.") $ x
                    where x = fromInteger' n

instance Eq Nat where
    Succ x == Zero = trace ("-- Comparing Succ == Zero --> Not equal") False
    Zero == Succ x = trace ("-- Comparing Zero == Succ --> Not equal") False
    Zero == Zero = trace ("-- Comparing Zero == Zero --> Equal") True
    Succ x == Succ y = trace ("-- Comparing Succ == Succ --> Possibly equal") (x == y)

-- lyingSearch f returns a Nat n such that f n, but if there is none, then
-- it returns a Nat anyway.
lyingSearch :: (Nat -> Bool) -> Nat
lyingSearch fRec = if trace "Can we finish the best guess with a Zero and succeed?  Let's try it" (fRec Zero)
                    then trace "Yes, so end best guess search with Zero.  Next, complete TLPC" Zero
                    else trace "No, so continue best guess with Succ and do lyingSearch again" (Succ (lyingSearch (fRec . Succ)))

search' fTop    | trace "Starting search.  Starting top level predicate call (TLPC)" fTop $ bestGuess = Just bestGuess
                | otherwise       = Nothing
        where
        bestGuess = trace "The TLPC has no defined x yet, so start building the best guess with lyingSearch" lyingSearch $ fTop

predFxn1 x = let lhs = x*x
                 rhs  = 4
             in trace ("-- Comparing LHS with RHS") (==) lhs rhs

predFxn2 x = let lhs = x*x
                 rhs  = 3
             in trace ("-- Comparing LHS with RHS") (==) lhs rhs

predFxn3 x = x*x == 6 - x
predFxn4 x = x*x == 7 - x
predFxn5 x = x*x == 6 + x
predFxn6 x = x*x == 7 + x
predFxn7 x = x*x - x == 7


My letter to NBC to save Community

Community is a fantastic television show whose future is uncertain.  Instead of resuming its season in January after the holiday break, it will be back in April-ish (hopefully).  Below is the letter I wrote to NBC in support of the show.  I also submitted it to reddit, so you might find some additional discussion there.

Dear NBC programming executives,

I am writing this letter by hand to impress upon you the sincerity of my love for Community. Since my fellow Community devotees are rallying to express their support for the show and undoubtedly inundating your offices with similar letters, I shall keep my comments brief.

I am a 25 year old, highly educated male who earns more money than I can reasonably spend. I have no Nielsen box, nor do I have a land line telephone. While I almost always watch the latest episode of Community during the original broadcast, my total time spent watching Community is dominated by Hulu, which I use to rewatch episodes multiple times during the following week. For example, in the days following its broadcast, I watched “Advanced Dungeons and Dragons” about 15 times. Additionally, I purchased both DVD sets.

Community’s quirkiness, sharply written dialogue, and steadfast commitment to building a richly woven, self-consistent narrative are just some of the reasons why it is my favorite show. It artfully immerses the fantastic and ridiculous into the backdrop of an endearing group of friends that I actually would like to know.

Community is one of the two shows I watch on my TV and not my computer. The other, the Today Show, is background noise while I get ready for work. Without Community, my view of the NBC brand would regress to the opinion I held after the cancellation of Conan’s Tonight Show – “that channel” that has the Olympics every few years. I urge you to continue keeping Community as a core part of your comedy lineup. It is literally the only NBC show I make time for.

Thank you for your consideration,

Daniel Robert Austin

Fairfax, VA

PS – It would be unwise to replace Community with bland, timid crap in an effort to better compete with the other networks. They are masters at it.

Problems with Matlab parfor data disappearing

I use Matlab a lot at work, and its object-oriented capabilities in particular.  I encountered a problem that took me a couple of hours to debug, and hopefully I can help someone else avoid the experience.

I had my code set up to use a regular for loop when processing serially and a parfor loop when processing with a Matlab pool.  The code worked fine in the serial case, but when using parfor, some of my objects’ properties seemed to be vanishing into empty matrices.  Very literally, one line before the parfor the object had correctly populated data, and one line within the parfor that data was gone.  No errors from Matlab, except errors later on down the road caused by the property being empty.  No indication of what was going on.  No luck finding solutions online.

If you find this happening, the problem is likely with the saving and loading of your object.  Matlab communicates data to each Matlab pool worker by serializing the objects, which essentially means saving them in the client thread and having each worker load them back up.  If your object doesn’t save and load correctly, then it won’t get into the parfor loop correctly.

To test the saving and loading, construct an object of the class that’s giving you problems, save it, clear your workspace, then load it.  Inspect the object for missing or incorrect properties.  If you see a problem with a property, check to see if it has a custom set method defined, comment it out, save the updates (and remember to clear classes), and try again.  You should see the data is no longer corrupted.

So what’s happening here?  When Matlab loads objects (either from a mat file or through serialization), it invokes the set method for all properties being loaded.  If an error occurs in a set method during this process, Matlab gives up on setting that property, provides no notice that an error occurred, and just leaves it as the default specified for that property.  Unless your properties block in the classdef says otherwise, the default value for the property will be an empty matrix.

(I have also had problems with handle objects getting into and out of parfor loops, but that wasn’t the problem here.)

In my case, the error I was getting was due to a self-consistency check in my set method.  Before setting the property A, I check to make sure that it is consistent with some other property B.  However, property A was being loaded before property B, so the consistency check always failed on load.  I worked around this by checking to make sure property B was non-empty before doing the consistency check.  This didn’t pose any problems for my class because in normal usage, my property B must always be defined before property A.  Unfortunately, this wasn’t the case when loading or serializing.

An Aside on Set Methods and Loading

In my experience with Matlab OOP, the automatic invoking of set methods on load or serialization has been a significant headache.  As far as I know, the main reason for doing this is so the class version can be checked, and the loaded property modified if necessary to maintain backwards compatibility.  In my experience, I use set methods almost exclusively for type checking and consistency checks on inputs, so any object that doesn’t throw an error has an allowable state, is saved in that allowable state, and should be loaded in that allowable state.  Checking the state again when loading by invoking the set methods is not needed for my uses, and has caused me headaches such as the one outlined above on more than one occasion.

Admittedly, Matlab’s mlint says that you shouldn’t reference one property from another property’s set method.  While I usually follow mlint’s suggestions, this is one I ignore outright as it significantly reduces the utility of set methods in my opinion.  Having users set properties in a natural way, and having the set methods automatically check the correctness of those properties, is by far the best use of set methods I can think of.  If anyone knows of a better way to achieve the same functionality, or a better reason to use set methods, I would love to hear it and learn more.

Sprite Clipper v0.92.0 released!

A new version of Sprite Clipper is out there!  v0.92.0 includes a center reshape anchor point, but the most significant new feature is simple drag-select functionality, so you don’t need to click on individual sprite clips if you don’t want to.  Check it out when you get a chance, and keep that feedback coming!

Getting a used phone from GreenCells for T-Mobile Value plan

I recently upgraded my cell phone plan since my contract was nearly up and T-Mobile started offering some pretty interesting Value plans, as they call them.  Normally when you sign up for a new contract, you get the opportunity to purchase a heavily discounted, new phone.  The catch here is that part of your monthly payment actually goes to pay off the discount you got.  In fact, you’ll likely end up overpaying for your discount over the 2 years or so of your contract, because the phone subsidy component of your monthly bill is still included even if you’ve long since paid off your discount.  To some people, though, the instant discount you get in the short term is worth the amount you might overpay over the course of 2 years.

I’m not really that kind of person.  I prefer to bin my expenses into individual, readily identifiable purchases, which is why T-Mobile’s Value plans appealed to me.  With the Value plans, the customer is entirely responsible for the cost of the phone, and the monthly bill you get is truly a service bill and not some amalgam of handset expenses and service.  If you want a “discounted” phone, you can get a new phone with a 50% down payment, and then pay the balance off gradually as part of your monthly bill.  In that sense, the added handset expense is kind of like a phone subsidy you can opt into, except the extra expense ends after the phone is paid off.

Additionally, the rates are pretty reasonable.  T-Mobile’s coverage isn’t as good as the Big 3, as I understand, but I’ve used it for two years already without any major headaches, and I’ve found the customer service to be really responsive and easy to work with.  I signed up for the 500 Minute + Data Plus plan, which gives 2 GB of high speed data per month, unlimited “slower” data after that, unlimited text, nights and weekends, in-network calling, and 500 minutes to cover anything else, for less than 50 bucks, which was what I was already paying for 300 texts and 300 minutes in the My Faves plan.

Music to my ears.  I wanted to avoid the phone payment charge in my bill, and I also wanted to avoid dropping $400–550 on a new smart phone.  So, used phone it was.  I didn’t know anyone who was looking to sell a MyTouch 4G, which was my preferred replacement handset after researching it online for a while.  I monitored craigslist, but the prices for the used sets were not nearly as discounted as I expected.  I didn’t want to wait for weeks to catch the right craigslist deal, so after some digging I found GreenCells.

GreenCells

GreenCells is a used phone store online.  Its mission, loosely, is to lower the environmental impact of cell phone disposal by buying and reselling used cell phones.  They sold me a MyTouch 4G for $250, which was a little higher than what I was hoping for, but worth it in the end, I think, to get a 1-year warranty and a customer service line to call if something should happen.

So what kind of quality can you expect from GreenCells?  My phone arrived within 3 days, and looks great.  It had about as much wear as I would expect a new phone of mine to get in a week, so I’m definitely satisfied.  It even came in the protective foam case that a new phone would.

You can see in the image that some promotional materials and a Quick Start guide are included, but a full instruction manual is missing.  Here’s the phone itself, which came with a protective cover already applied.


The flash from the camera probably makes it seem like it’s in worse shape than it really is.  I’ve been using it for a couple days now and have had zero problems with it.  Everything I’ve tried works splendidly.  I’ve been really satisfied with the experience so far.  If something changes, I’ll be sure to update this page.

One thing I do want to mention though is that the phone I received did not have its data wiped.  All the contacts and messages from the previous owner were left on the phone.  This was a minor inconvenience because I figured out how to do a data wipe quickly enough, but I would have expected the phone to be wiped since, you know, GreenCells says on its site that you can feel comfortable selling to them because they wipe all the data.  Well, at least in this case it wasn’t true.  I’d happily buy from GreenCells again, but I don’t think I would ever sell to them unless I did a really thorough job sanitizing my phone myself beforehand.


Sprite Clipper v0.91.1 released

A new version of Sprite Clipper has been released!  This version incorporates some user suggestions from the good people at the Push Button Engine forums, and one user especially.

The updates include

  • Select All now selects in an up/down, left/right order.  Before, the order was based on sprite area, and users were frustrated that the sprite ordering on the sheet was not preserved.
  • A new packing method, Grid, has been added.  This puts the sprites in a uniformly spaced grid, which is a format that many game frameworks expect.
  • Fixes for small 1–2 pixel positioning bugs.
  • The default connectedness criterion is now 8-connectivity with a 3 px half edge.  This is probably a better choice for most situations than straight 8-connectivity.

As always, comments, bug reports, complaints, etc. all welcome.

Quick update – Games, Reality, and Haskell

It had been a little while since my last post, so I wanted to just jot down some stuff I’ve been doing lately.

First, in order to learn Haskell, I started working through Haskell’s 99 Questions, a series of short problems of increasing difficulty.  I have my solutions hosted on my github repo.

The other thing I have going on is I’m reading Jane McGonigal’s Reality Is Broken.  The book leverages a lot of research in positive psychology to argue that games (the book takes the slant of video games, but really any kind of unnecessary obstacle activity such as rock climbing) are “better” than reality.  The basic argument of the book so far is this: if modern psychological research shows us that people are in their happiest and most activated state when playing games, then why must we endure our normal lives?  Why can we not use the best aspects of games in igniting human happiness and incorporate them into our societies to improve the quality of life of everyone?

I’m less than 100 pages in so far, and I have mixed feelings.  The book does a pretty good job of highlighting interesting findings in positive psychology, and the points made in that regard I feel are spot on.  I would read a short paragraph and think to myself, “You know…you’re right.  That *does* make me happy.”  On the other hand, I think the book really stretches to apply the research findings to the field of video games.  The author, being a game designer, clearly has a bias in this sense, but that also drives her passionate prose style.  She spends a lot of time explaining the mechanics of certain video games like SMB 2 and Tetris, but then fails to really drive home her point that video games are the ideal medium in which to engineer human happiness.  It’s almost like she thinks it is self-evident that games and positive psychology go hand in hand.

Again, I’m only a little ways in, so maybe I’m trying to jump ahead to points she makes (or doesn’t make) later.  I just learned recently that my good friend Seth read this book and wrote a paper on it for a class, so I’m eager to chat with him about it.

Understanding Monads in Haskell

Update: I started reading up on monads again recently, and here is the best explanation I’ve found.  Also, the update caused my Beckman links to vanish unfortunately…

I’ve been digging through the Haskell resources I posted earlier (and some others), and I must say that the treatment of monads has been pretty poor.  There are all kinds of cognitive metaphors out there, the least clear and most ridiculous of which has to be the space station analogy.  However, I think after seeing a few key resources the picture is starting to come together.

First is a great pair of videos from Microsoft’s Brian Beckman.  It’s geared toward concurrency, but he talks about Haskell’s State monad as a way of automatically mutating an explicit (vs implicit) state.  Doing this non-monadically would mean passing in the current state into each function, but with monads, you can construct larger operations by composing the monads together, and the monads will handle all the work of passing around and updating the state.

The videos are pretty long, but quite interesting and really help to introduce the way a particular monad (the State monad) handles composition.
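
To make the idea concrete, here is a tiny example of my own using Control.Monad.State from the mtl package (not code from the videos): a counter is threaded through the computation with no explicit state-passing in sight.

import Control.Monad.State

-- Return the current count and bump the state by one
tick :: State Int Int
tick = do
    n <- get
    put (n + 1)
    return n

threeTicks :: State Int [Int]
threeTicks = do
    a <- tick
    b <- tick
    c <- tick
    return [a, b, c]

-- runState threeTicks 0 == ([0,1,2],3)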

The next resource is Monads as a computation.  This page generalizes the concepts introduced in the videos.  Essentially, monads are a way of describing how computations are chained together to form more complex computations.  The logic of what happens between computations (such as pre- and post-processing) before moving on to the next step is built into the monad (specifically the return and >>= functions).

Lastly, Learn You a Haskell (which, incidentally, I’m beginning to like more than RWH as the concepts get more sophisticated) has a chapter on monads, which goes into how the Maybe and List monads handle their own processing between steps.  For instance, if at any point in a Maybe monad chain Nothing is the result, then that Nothing is propagated to the end of the chain (canceling later computations on the way).
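
Here is a toy example of my own (not from the chapter) showing that propagation:

-- Division that fails explicitly instead of throwing an error
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Int -> Maybe Int
calc n = do
    a <- safeDiv 100 n   -- a Nothing here cancels everything below
    b <- safeDiv a 2
    return (a + b)

-- calc 5 == Just 30, but calc 0 == Nothing: the first failure short-circuits the chain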

From my understanding of monads (so far), they are a convenient way to automatically handle any bookkeeping that needs to be done between successive operations (the Writer monad, for example, can be used to log all operations performed) or to control the flow of operations in a standardized way (aborting computations with Maybe, handling list flattening with List).
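
As an illustration of that bookkeeping idea, here is a minimal Writer sketch of my own (using Control.Monad.Writer from mtl):

import Control.Monad.Writer

-- Add two numbers, logging the operation as we go
addLogged :: Int -> Int -> Writer [String] Int
addLogged x y = do
    tell ["added " ++ show x ++ " and " ++ show y]
    return (x + y)

-- runWriter (addLogged 1 2 >>= addLogged 3)
--   == (6, ["added 1 and 2", "added 3 and 3"])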