As part of a college group of seven, I recently finished work on a bespoke CSV merger and filterer. The project ran over a few months and has now been delivered to the client.
Due to some miscommunication with our client, we were under the impression that many filters would be applied to the same data. On that basis, our original application used the following technologies:
- SQLite embedded database
- Java GUI
- Haskell Error Checker
Subsequently, we discovered that the primary focus would be merging, with filtering performed as an on-the-fly operation. Storing all the records in a database was therefore a time-consuming exercise, one that slowed the entire process almost to a halt.
As a result, we moved to an in-memory filtration system. This eliminated the need for a database and limited the system's bottlenecks to processing alone.
The current application utilizes:
- Java GUI
- Java filtering
- Haskell error checking
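To illustrate the on-the-fly idea, here is a minimal sketch of in-memory filtering in Java. The record representation (a column-name-to-value map) and the example predicate are assumptions for illustration, not our actual project code:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class InMemoryFilter {
    // Apply a composable predicate to records held entirely in memory,
    // avoiding any database round trip.
    public static List<Map<String, String>> filter(
            List<Map<String, String>> records,
            Predicate<Map<String, String>> predicate) {
        return records.stream()
                      .filter(predicate)
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical parsed CSV rows.
        List<Map<String, String>> records = List.of(
                Map.of("name", "Alice", "dept", "Sales"),
                Map.of("name", "Bob",   "dept", "IT"));

        // An on-the-fly filter is just a single pass over the stream.
        List<Map<String, String>> sales =
                filter(records, r -> "Sales".equals(r.get("dept")));
        System.out.println(sales.size()); // prints 1
    }
}
```

Because each filter is a plain `Predicate`, filters can be combined with `and`/`or` and applied in one pass, which is what keeps the bottleneck purely computational.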
On a side note, it would be fair to say that we’ve now dealt with our fair share of SVN difficulties. This was one of the first projects where we needed to properly coordinate our coding efforts and manage versioning efficiently.