Brand new message decoder/encoder!
Jul 7th, 2017
Wow! There is no other word for what we're about to announce!
For the last few years OE Classic has depended on a third-party message decoder/encoder. It worked reasonably well, but it had annoying quirks, limited flexibility in message decoding, and bugs that were never fixed. For example, some messages were decoded improperly and there was no way for users to correct them, such as manually selecting a code-page so that incorrectly displayed characters in a misbehaving message would decode properly. It was not particularly fast either, and it offered no way to skip decoding of unneeded email items or to decode them only when they are actually needed (on demand).
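To illustrate the on-demand idea, here is a minimal Python sketch (purely hypothetical, not OE Classic's actual code): the raw bytes of a message part are kept as-is and only decoded the first time they are really needed.

```python
# Hypothetical sketch of "decode on demand" - not OE Classic's actual code.
class LazyPart:
    def __init__(self, raw: bytes, charset: str = "utf-8"):
        self._raw = raw            # keep the undecoded bytes around
        self._charset = charset
        self._text = None          # decoded text, filled in lazily

    @property
    def text(self) -> str:
        # Decode only on first access; listing a folder never pays this cost.
        if self._text is None:
            self._text = self._raw.decode(self._charset, errors="replace")
        return self._text
```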
Cost was another problem: some of these components are very expensive - a few thousand USD per year in some cases. We would have happily paid that if they worked to our specification, but they did not.
Lacking control over the code, and after years of asking (read: almost begging) the developers to fix these and other issues or at least make the component more flexible, we finally got tired of it all and promised ourselves that our users would not suffer because of someone else's lack of understanding of what a message decoder really needs to do.
So we decided to roll our own message decoder/encoder. It has taken about 2 weeks so far, and the work done is just AMAZING. It decodes over 2 times faster and encodes over 3 times faster than the current component. It gives us full flexibility to finally fix all the misbehaving messages that have surfaced over the years, lets the user choose a code-page, provides much better auto-detection when the code-page is not properly specified, and evaluates much faster whether a message really needs decoding at all.
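As a rough illustration of the code-page handling described above, here is a minimal Python sketch. The function name and the exact fallback order are our assumptions for the example, not OE Classic's actual code: a user-selected code-page takes precedence, the charset declared in the message is tried next, and common fallbacks are used when nothing else works.

```python
# Hypothetical sketch of code-page fallback - not OE Classic's actual code.
def decode_body(raw: bytes, declared_charset=None, user_codepage=None) -> str:
    candidates = []
    if user_codepage:                  # explicit user choice overrides everything
        candidates.append(user_codepage)
    if declared_charset:               # charset declared in the message headers
        candidates.append(declared_charset)
    candidates += ["utf-8", "windows-1252"]   # common guesses for mislabeled mail
    for charset in candidates:
        try:
            return raw.decode(charset)
        except (LookupError, UnicodeDecodeError):
            continue                   # unknown or wrong code-page, try the next
    return raw.decode("latin-1")       # maps every byte, so it never fails
```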
The end result is that operations involving message decoding (which means opening or downloading any message) will be significantly faster, giving users a really great experience. The speed is highly addictive - once we got used to it, we didn't want to go back to the old decoder. There is still some testing work to be done before we can roll it out to the public, because we want it built to the same high standards our users have come to expect from us over the years.
To that end, it has already been run against millions of auto-generated messages to test for possible failures, with zero errors, and it has also gone through a few gigabytes of our test store folder.
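To show what such a test run can look like, here is a small Python sketch of a round-trip harness over auto-generated messages. It is purely illustrative - the real test suite and message generator are not public - and it uses Python's standard email module rather than the new decoder itself.

```python
# Illustrative round-trip test over auto-generated messages (hypothetical).
import random, string
from email import message_from_bytes, policy
from email.message import EmailMessage

ALPHABET = string.ascii_letters + string.digits + " \n" + "äöüčšžéß€"

def random_text(n: int) -> str:
    return "".join(random.choice(ALPHABET) for _ in range(n))

failures = 0
for _ in range(10_000):                       # the real runs used millions
    original = random_text(random.randint(0, 500))
    msg = EmailMessage()
    msg["Subject"] = "auto-generated test"
    msg["From"] = "test@example.com"
    msg.set_content(original)
    raw = msg.as_bytes()                      # encode
    decoded = message_from_bytes(raw, policy=policy.default)  # decode
    if decoded.get_content().rstrip("\n") != original.rstrip("\n"):
        failures += 1
print("failures:", failures)
```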
This is a big change, even though it won't be as visible as some other features - but it will literally be experienced by every user of the program, because 100% of messages pass through this decoder/encoder, making it the core of the program. It is of utmost importance that it be the fastest, highest-quality piece of programming possible, and compromising on it was simply not an option.
There is still some work to be done and we will be rolling it out gradually, but do expect great decoding accuracy as well as a speed improvement once it is merged into the main program!