FIBS: Functional interface for Interactive Brokers for Scala

Announcing a small side project I have been hacking on: a Scala wrapper library for the Interactive Brokers TWS API.  The TWS API uses a message-passing design, where you pass a message asking for a quote, for instance, and then you get a series of messages back, making up that quote.  It is up to you to keep track of which response messages refer to which quote request, and the whole thing involves a lot of mutable state.

I, however, wanted to write an application that stays as functionally pure as possible, with little or no shared mutable state, and that can process many quotes at once.  I wanted to do this using my favorite functional language du jour, Scala/scalaz.

So, I created FIBS, which is a terrible backronym that I am claiming stands for Functional interface for Interactive Brokers for Scala.  It is in its infancy, and currently only supports the following operations:

  • Realtime Stock Quotes
  • Historical Stock Quotes

There is much, much more in the API to flesh out.  One of the other things I don’t like in the TWS API is that it is up to you to know which parameters are appropriate for the request you are making.  (For example, only certain order parameters make sense for a given order type.)  So, I want to enforce this logic with the type system.  You, as the consumer of my API, should never pass a null value because that parameter makes no sense in the context of what you are doing.  The type system should enforce this.
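
Something along these lines is what I have in mind (a minimal sketch, not the actual FIBS API; the names are just illustrative):

// Each order type only carries the parameters that make sense for it,
// so there is nothing to pass as null and nothing to validate at runtime.
sealed trait Order {
  def symbol: String
  def quantity: Int
}
case class MarketOrder(symbol: String, quantity: Int) extends Order
case class LimitOrder(symbol: String, quantity: Int, limitPrice: BigDecimal) extends Order

A MarketOrder simply has no limitPrice field to misuse, and forgetting the limit price on a LimitOrder is a compile error rather than a runtime surprise.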

Instead of waiting for various pieces of information to return, you will get a Promise or a Stream, or some other monad, that lets you move forward with your code while IB is doing its thing.
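
To give a flavor of the kind of signatures I am aiming for (again, just a sketch under the assumption of scalaz’s Promise, not the actual FIBS API):

import scalaz.concurrent.Promise

case class Quote(symbol: String, price: BigDecimal, timestampMillis: Long)

trait QuoteSource {
  // A historical request eventually yields a whole series of quotes, so the
  // caller gets a Promise and can keep working while IB responds.
  def historicalQuotes(symbol: String): Promise[Seq[Quote]]

  // Realtime data arrives tick by tick, which maps naturally onto a Stream.
  def realtimeQuotes(symbol: String): Stream[Quote]
}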

So, I am making progress on the API slowly, as I need it for my application.  I wanted to make it public to see if anyone else was interested.  I could also use some domain expertise in knowing exactly what parameters are appropriate, when.  My experience as a quant is strictly amateur.

Check it out on Bitbucket!


Building Inline Comments for Pull Requests and Commits

(or, what I did over summer vacation)

For the last few months, I have been working with the Bitbucket team at Atlassian.  I switched over to this team at the beginning of the summer to help build a new inline commenting feature for pull requests and commit pages, making the tool more useful for code reviews within a team.

It was a great project, and I wrote up a story of how we built inline comments for pull requests and commits over at the Bitbucket blog.  Check it out!


Proof that Netflix is the only thing holding the USPS afloat

"Netflix Only" chute at the post office


LivingSocial/Fiesta Americana Resort Bait & Switch Scam

UPDATE: I contacted LivingSocial about this issue; they responded in a timely manner and offered a partial refund, which I feel remedies the situation.  I am leaving this post up to tell the story, and to alert people that they should always be on guard for issues like this, but LivingSocial has at least made a good effort to rectify the situation.

My wife and I recently returned from a trip to Los Cabos, Mexico, where we stayed at the Fiesta Americana Grand Los Cabos Golf & Spa Resort.  We purchased the resort package through “Living Social Escapes” (archived as a PDF in case Living Social removes the page).  As excellent a time as we had enjoying the warm weather and the beautiful beach, we felt that some of the items included in the offer were misleading and blatantly untrue.

Under one of the bullets in the “escape kit,” it states: “Daily Continental Breakfast, Afternoon Cocktails, and Hors D’œuvres at the Grand Club and Free Upgrade to Grand Club Level for First 50 Vouchers Sold”.  This gives the impression that daily continental breakfast, afternoon cocktails, and hors d’oeuvres are included in the package deal, and that the first 50 vouchers sold will receive a free upgrade to Grand Club Level.  After being denied the continental breakfast, we spoke to the front desk.  The front desk staff read it as “all of the items in this statement are only for the first 50 people.”  This is a very tricky and misleading statement.  Furthermore, in the body of the webpage, it states:

“You’ll know you’ve been good this year as you enjoy meet-and-greet services and private ground transportation from Los Cabos International Airport, private check-in and check-out service, daily continental breakfast, and a $600 resort experience credit.”

This says nothing about the “first 50 vouchers sold”.  We probably would not have bought the package had we known that the continental breakfast was not included, and would instead have found a package that included the amenities we were looking for at the right price.

Under another bullet it states:  “In-Room Bottle of Wine and Canapés Daily”.  When we first arrived, there was a bottle of wine and a plate of canapés in our room waiting for us.  On the remaining days, there was no wine and no canapés.  When we asked the front desk about this, they told us that a bottle of wine was only supplied on the first day, and that Living Social had lied about that in their ad.  One front desk woman even admitted that the statement was a bait-and-switch tactic.

Overall, we are very unimpressed with Living Social’s misleading tactics.  We would love to be able to take advantage of the wonderful offers they have available; however, they have lost a lot of our trust.


Accessing Erased Type Parameter Information in Scala

One of the things holding Scala back from being a more robust language is the fact that it runs on the JVM. (On the other hand, this is also one of its strengths–you can easily interoperate with existing Java code and libraries, as well as any other code that runs on the JVM, like Groovy or Clojure.) Because Scala runs on the JVM, it suffers from type erasure, which means that any generic type parameters in the code are lost after compilation, and are no longer present in the byte code, and thus at runtime. (This was a design decision made by the Java team when they introduced generics in Java 1.5, in order to preserve byte code backward compatibility.)

In order to work around this, Scala introduced the Manifest class, which captures the type information during compilation, and allows you to access this type at runtime.  This was originally designed to allow for the creation of arrays of the generic type at runtime, but it can also be used for other custom generic types.  It uses implicit parameters to capture this information.
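
For example, the array use case looks roughly like this (a small sketch; the context bound is what pulls in the Manifest):

import scala.reflect.Manifest

// The context bound supplies an implicit Manifest[T], which is what
// new Array[T](...) needs at runtime to know which element type to allocate.
def makeArray[T: Manifest](elems: T*): Array[T] = {
  val arr = new Array[T](elems.length)
  for ((e, i) <- elems.zipWithIndex) arr(i) = e
  arr
}

// makeArray(1, 2, 3)   returns an Array[Int]
// makeArray("a", "b")  returns an Array[String]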

The problem (that I had), though, is that you can’t seem to access a class’s type parameters directly in a method of that class.  So, this doesn’t work:

import scala.reflect.Manifest
class MyClass[T: Manifest] {
  def myType(implicit m: scala.reflect.Manifest[T]) = m.toString
}

You won’t get a compilation error, but you will just get ‘Nothing’ as your type inside your method:

scala> new MyClass[Long]
res17: MyClass[Long] = MyClass@2769aba2
scala> res15.myType
res18: java.lang.String = Nothing

This is because there was no type parameter to the method itself, and the [T] in the method’s implicit argument is not the same [T] that is used to parameterize the class.  If we add a type parameter to the ‘myType’ method, we can get closer to what we want:

class MyClass2[T: Manifest] {
  def myType[U](implicit m: scala.reflect.Manifest[U]) = m.toString
}

Then, we can try it out:

scala> new MyClass2[Long]
res19: MyClass2[Long] = MyClass2@6c6455ae
scala> res19.myType[Int]
res20: java.lang.String = Int

Okay, so now we are getting some type information out, but it really isn’t ideal.  Note that the type parameter inside the ‘myType’ method was ‘Int’, which was the parameter we passed to the method, not ‘Long’, which is what we passed to the class.  What if we want to get the class’s type parameter?  We certainly don’t want the calling code to have to keep track of the type parameters in multiple places (once at the constructor call, and once more at the method invocation).  So, we can add a second method to help smooth this out:

class MyClass3[T: Manifest] {
  def myType[U](implicit m: scala.reflect.Manifest[U]) = m.toString
  def myTypeWithoutExtraParam = myType[T]
}

And, then, we get:

scala> new MyClass3[Long]
res30: MyClass3[Long] = MyClass3@543d8ee8
scala> res30.myTypeWithoutExtraParam
res31: java.lang.String = Long

So, you can see that the type that was passed in to the class constructor as a generic parameter is now available in the method, without the method caller needing to supply it after the object is created.

A couple of things to note:

  • Don’t forget to import scala.reflect.Manifest
  • The type parameter in the class definition uses a context bound, so the ‘: Manifest’ part is important.  This technique won’t work if you just specify the parameter as ‘[T]’.  If you leave the context bound off, you will get an error that reads, “No Manifest available for T.”  (A sketch of that failing case follows below.)
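
For illustration, here is what that failing case looks like; without the context bound there is no implicit Manifest[T] in scope to forward to the helper method:

// This version omits the ': Manifest' context bound, so the call to
// myType[T] has no Manifest[T] to pass along and does not compile.
class MyClassBroken[T] {
  def myType[U](implicit m: scala.reflect.Manifest[U]) = m.toString
  def myTypeWithoutExtraParam = myType[T]  // error: No Manifest available for T
}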

US orders news blackout over crippled Nebraska Nuclear Plant– Coverup or Hoax?

This morning, I came across an article from the Pakistani news source The Nation, titled US orders news blackout over crippled Nebraska Nuclear Plant.  If this is true, it is terribly troubling–both because of the implications of a nuclear meltdown in the midwestern United States, and because of the idea of a government coverup trying to silence the news media from reporting on it.

However, the news article that is ‘reporting’ on it is of questionable origin.  The general tone of the article stinks of anti-US propaganda, with passages like this:

Obama’s fears of the American people turning against nuclear power, should its true dangers be known, appear to be valid as both Germany and Italy (whose people, unlike the Americans, have been told the truth) have turned against it after the disaster in Japan and vowed to close all of their atomic plants.

Parts of the article sound as if they were written by (presumably American) critics of Obama, since the entire article blames him for the coverup, and not the government in general.  Other parts, though, like the passage I quoted above, seem to have more of a general anti-US slant.

So, the author of this article clearly has an agenda, of some sort.  But, does that mean that it is entirely false?  Or is there some grain of truth to this whole thing?

From what I can tell, in the 5-10 minutes I just spent researching this:
  • There is indeed a 4-level scale for rating nuclear power plant ‘events’ in the United States.
  • Fort Calhoun Power Plant recently (June 8, 2011) had a small fire, which caused a partial evacuation, and triggered a level 2 ‘Alert’ event.
  • Fort Calhoun Power Plant declared a level 1 event (the least alarming type) in connection with the flooding of the Missouri River.
  • The FAA has indeed issued a no-fly NOTAM order for the area.
  • There was an article titled “Low-level emergency declared at nuclear power plant” from JournalStar.com that supposedly related to this flooding alert.  The article is no longer at the URL linked from Wikipedia.  Does this indicate a coverup?  Maybe, maybe not.  Strangely, the retrieval date cited on Wikipedia is well before the alert was issued by the NRC, which makes me wonder whether the link was ever really there.
  • There are a lot of random blogs, of varying degrees of crack-pot-itude, talking about conspiracies, coverups, etc.  I don’t know if they are all just in the echo chamber, and this was all sparked by that one article from Pakistan, or if there is more to it.  There was an article on a Ron Paul-related blog, dailypaul.com, linking to the Pakistani article, so it is definitely making its rounds.

Well, that’s all for now.  Let’s hope the meltdown & coverup story is just a hoax.

Mapping site should be faster now

Over the past few weeks, I have been noticing that the site I created to allow users to plot multiple addresses on Google Maps would sometimes become very slow when a lot of people were using it. So, I started to investigate the cause.

(I am going to split this post into two sections, the first aimed at non-programmers, and the second at programmers, in case they are interested. I realize that most of the people that use my maps site probably don’t really care how it works under the hood, so the first part is for them. On the other hand, if I can share what I learned from this experience with other programmers, then they can read the whole post, focusing on the second half.)

Overview

The site uses a database to store the information about maps that people choose to save. I found that one of the database parameters was the most likely culprit for the slowness. Basically, it was optimized for a database that gets information read from it almost all the time, but when data is written to it, the whole thing locks, and doesn’t allow other people to read data from it while that write is happening.

So, now that I had found the likely cause of the issue, I started investigating how to fix it. I found that there were two methods, so I just picked the one that seemed easier. A few hours after issuing the command to make this change, it appeared to still be running. Not sure whether something was wrong and the process had stalled, I cancelled and re-issued the command. This was probably my biggest mistake. I later realized that the process just takes a long time, and I should have let it run. I basically did this a couple of times, trying the other method to make this change as well. Of course, this site is something I do in my free time, so I had to do other things during this process as well, such as go to work, sleep, and eat. In the end, once I realized that the process was working and I should just let it run, it took about 10.5 hours.

Long story short, the site is back up, and hopefully it will be faster from here on out.

Technical Details

The parameter that I was trying to change was the MySQL storage engine. Originally I had it set to MyISAM, most likely because that was the default when I created the database. MyISAM is extremely fast when you are doing just reads from a table, but when you do a write, it will lock the entire table, preventing any other threads from accessing it for reading or writing. I suspect this is what was causing the performance issues with the site.

So, I decided to change it to the InnoDB storage engine. InnoDB is still fast for reads, but not as fast as MyISAM. One of the big advantages to InnoDB, though, is that it uses row-level locking, instead of table-level locking. So, if one request is adding a new address to the table, another request can be reading some other addresses. (The one that is currently being written won’t be able to be read, but it is very unlikely that someone would be trying to view a map that has not been fully created yet.)

First Approach

So, there are two main approaches to changing the database storage engine. The first one I tried, which seemed simpler at the time, was to issue the following command:

ALTER TABLE tablename ENGINE = InnoDB

I found this on Farhan Mashraqi’s site during my research, so I decided to try it first (after making a backup of the database, of course).

I restored the backup on my laptop, and decided to give it a test run. I tried it on some of the smaller tables, and it seemed to work. Since I wanted to get the upgrade done quickly, and get on with my life, I didn’t try upgrading the whole database on my laptop, and just started doing it on the production server.

This was really my biggest mistake. I should have done the whole upgrade on my laptop, start to finish, before trying to change anything on the real site. (I don’t really know why I did this; I never would have been so sloppy with a similar task at work.)

So, as I said, I started making the change on the real site. After a couple of hours, it still wasn’t finished, and I noticed that in the MySQL Administrator GUI, on the Catalogs tab, the value in the ‘rows’ column was jumping all over the place. It would go up, down, and change in multiple different tables. This didn’t seem to make any sense, and led me to believe that something was wrong, and that perhaps I should restart the process. As it turns out, the row count is just bogus, and shouldn’t be trusted. But I didn’t realize that at the time, so I restarted the process and went to bed.

Second Approach

In the morning, I checked on it, and it was still chugging along. I figured that something must be wrong, since it was taking so long, and the row counts were still acting weirdly (I didn’t know they were bogus yet). So, I decided to try the other method of changing the storage engine, which is to take your backup file from mysqldump, change the parts where it says ‘MyISAM’ to ‘InnoDB’, and then restore it. I decided to do this with the sed command, since this backup file is rather large, and I wasn’t sure if vim would choke on it. The following command did the trick:

sed s/MyISAM/InnoDB/ backupfile.sql > updatedbackupfile.sql

Then, I just ran that SQL script. It also took forever. I thought that perhaps the issue was that my site runs on a virtual server (from slicehost, and overall I am very happy with their service), which means that disk I/O can sometimes be slow. It generally isn’t a problem, but it is one of the disadvantages of a virtualized server. Since what I was doing was very I/O intensive—reading a large file, doing a little bit of processing, and then writing the data out into the database, which resides on disk—I thought that perhaps this was the culprit. So, once again, I canceled the process.

On the next (and final) try, I did the same thing, except that the modified backup file I was trying to restore was on my laptop, and I restored it over the network. This meant that the server, with its virtualized disks, was only responsible for writing the data to disk, not reading it from disk; it read the data from the network. I had also learned by then that the row count in the MySQL Administrator GUI was bogus, so I didn’t worry myself with it.

A little more than 10 hours later, the database is back up and running, and I removed the maintenance message from the site!

Lessons Learned

  • Always do something first in a test environment before touching the production environment.
  • Database actions that change entire large tables take a long time. Let them run.
  • Always do something first in a test environment before touching the production environment.
  • The row count in the MySQL Administrator GUI is bogus. Don’t even look at it.
  • Always do something first in a test environment before touching the production environment.