Extracting the World Map of Uncharted Waters 2

Over a year ago, I set out to remake a game called Uncharted Waters 2. This month I reached a milestone, having extracted the tilesets and tilemap of its world map. As no one seems to have documented this in detail, I’d like to contribute.

Uncharted Waters 2 world map
The world map in Uncharted Waters 2

Much of my work is thanks to an old discussion on a Chinese forum, where a user called botx had outlined the general algorithm. My implementation, along with the relevant game files, is available on GitHub.

At a Glance

The game splits the world map into three parts. The first part contains Europe and Africa, the second part Asia and Australia, and the third part the Americas. Each part consists of 30 * 45 blocks. A block is made up of 12 * 12 large tiles, and each large tile in turn consists of 2 * 2 regular tiles.

There are two tilesets: one of regular tiles and one of large tiles. The regular tiles are 16 * 16 pixels in size, and there are 128 variations of them. The large tiles are 256 permutations of four regular tiles, each 32 * 32 pixels in size.

In short, the approach is to extract these two tilesets first. Then, move on to the blocks and figure out their indices. Map each index to a large tile, which in turn maps to 2 * 2 regular tiles. Afterwards, perform some extra processing for, among other things, coastal tiles.

The raw data comes from the game files WORLDMAP.000, WORLDMAP.001, WORLDMAP.002, DATA1.010, DATA1.011 and DATA1.018. To get them, you need to uncompress WORLDMAP.LZW and DATA1.LZW using a tool such as "LS11 Archiver".
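To summarize, these are the numbers and files involved. The constant names are my own, and the snippets below are rough PHP sketches rather than my actual implementation:

const REGULAR_TILE_PX = 16;        // regular tiles are 16 * 16 pixels
const REGULAR_TILE_COUNT = 128;
const LARGE_TILE_COUNT = 256;      // a large tile is 2 * 2 regular tiles
const BLOCK_SIZE = 12;             // a block is 12 * 12 large tiles
const BLOCKS_PER_PART = 30 * 45;   // each of the three parts has this many blocks

$worldMapParts = ['WORLDMAP.000', 'WORLDMAP.001', 'WORLDMAP.002'];
$regularTilesFile = 'DATA1.011';   // regular tileset (first 16384 bytes)
$largeTilesFile = 'DATA1.018';     // large tileset (256 * 4 bytes)
$coastMappingFile = 'DATA1.010';   // coastal tile replacements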

Uncharted Waters 2 world map breakdown
A breakdown of the three parts of the world map, blocks, large tiles and regular tiles

Regular Tileset

Uncharted Waters 2 world map regular tileset

The first half of DATA1.011 (16384 bytes) contains the 128 regular tiles. Each tile is 16 * 16 pixels in size, where each pixel uses 4 bits.

To extract all pixels, read 1024 bits at a time. Form the first pixel by combining bits 0, 256, 512 and 768 (left-to-right offsets), the second pixel by combining bits 1, 257, 513 and 769, and so on. Map these 4-bit values to RGB colors, drawing each tile left-to-right, top-to-bottom. The mapping varies based on the time of day in-game; refer to my implementation for details.

Example: Bit 255 is 1, 511 is 0, 767 is 1 and 1023 is 1. The 4-bit binary number is 1011, giving us the value 11. This maps to #007161 and, being the 256th pixel, defines the bottom right of a tile.
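Putting the above together, here is a rough PHP sketch of the idea. The bit order within each byte (most significant bit first) is my assumption; for the exact details, refer to my implementation on GitHub.

// Extract the 128 regular tiles (16 * 16 pixels, 4 bits per pixel) from the
// first half of DATA1.011. Each tile occupies 1024 bits, split into four
// 256-bit planes that together form the 4-bit value of each pixel.
function readBit(string $data, int $bitOffset): int {
    $byte = ord($data[intdiv($bitOffset, 8)]);
    return ($byte >> (7 - ($bitOffset % 8))) & 1; // assumes MSB-first bit order
}

$data = file_get_contents('DATA1.011');
$tiles = [];
for ($tile = 0; $tile < 128; $tile++) {
    $base = $tile * 1024;
    $pixels = [];
    for ($i = 0; $i < 256; $i++) { // pixels left-to-right, top-to-bottom
        $pixels[$i] = (readBit($data, $base + $i) << 3)
                    | (readBit($data, $base + 256 + $i) << 2)
                    | (readBit($data, $base + 512 + $i) << 1)
                    |  readBit($data, $base + 768 + $i);
    }
    $tiles[$tile] = $pixels; // 4-bit values, still to be mapped to RGB colors
}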

Looking at the resulting tileset, there are obvious placeholders. There are also tiles used for ports and villages. If you take a closer look, you will notice all the terrains, as well as tiles that appear to be coasts. Their significance will become clear soon.

Large Tileset

Uncharted Waters 2 world map large tileset

DATA1.018 contains 256 large tiles that are 4 bytes each, giving a file size of 1024 bytes. The 4 bytes describe a permutation of 4 regular tiles, left-to-right, top-to-bottom.

Example: The 17th large tile consists of bytes 60, 61, 62 and 63, which have the values 116, 117, 118 and 119. Mapping them to the regular tileset gives us the four tiles that make up a port.

For the first 16 large tiles, however, the logic is different. Ignore DATA1.018 and instead express the tile's index value in binary, where each 0 bit maps to regular tile 0 (sea) and each 1 bit to regular tile 65 (land).

Example: The 3rd large tile has an index value of 2, which is 0010 in binary. This gives us the values 0, 0, 65, 0, which is all sea except the bottom left tile.
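A rough PHP sketch for the large tileset, assuming large tile i starts at byte offset 4 * i; the exact offsets are best verified against my implementation:

// Build the 256 large tiles, each a list of four regular tile indices in
// left-to-right, top-to-bottom order.
$data = file_get_contents('DATA1.018');
$largeTiles = [];
for ($i = 0; $i < 256; $i++) {
    if ($i < 16) {
        // The first 16 large tiles come from the binary form of their index:
        // a 0 bit maps to regular tile 0 (sea), a 1 bit to regular tile 65 (land).
        $largeTiles[$i] = [];
        for ($j = 0; $j < 4; $j++) {
            $largeTiles[$i][$j] = (($i >> (3 - $j)) & 1) ? 65 : 0;
        }
    } else {
        // The rest are four regular tile indices read straight from the file.
        $largeTiles[$i] = array_values(unpack('C4', substr($data, $i * 4, 4)));
    }
}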

Like the regular tiles, there exist large tiles that are redundant and never used. Some of them even map to out-of-bounds regular tiles, which I have replaced with transparent tiles.

Blocks

WORLDMAP.000, like the two other world map parts, contains 30 * 45 blocks. A block is 12 * 12 indices, where each index maps to a large tile. In contrast to extracting the regular and large tilesets, extracting blocks is less straightforward.

The game describes each block using a template, together with data about how the block differs from that template. In total, there are six templates. To start extracting blocks, do the following:

  • Skip the first 2700 bytes.
  • Read 8 bits. The value of the three rightmost bits maps to a template. The leftmost bit is 1 in the rare event that the block matches the template exactly. Otherwise, proceed with the next two steps.
  • Read 144 bits, and take note of where 1s occur, as they mark deviations from the template.
  • For each 1 encountered in the previous step, read 1 byte and use its value to correct the template.
Uncharted Waters 2 block templates
A block is based on one of these six templates

Contrived example:

  • The first 8 bits are 00000101. The three rightmost bits are 101, which refers to template number 5. The leftmost bit is 0, meaning there are differences.
  • Reading the next 144 bits, we find that the 13th and 132nd bits are 1s.
  • We continue to read 2 bytes, and find that the first byte has a value of 16 and the second byte 17.
  • The resulting 12 * 12 block is all sea except two large tiles: a port at (0, 1) and a village at (11, 10).

Each block therefore takes up a varying number of bytes: some take just one byte, while others take 1 + 18 + (the number of 1s among the 144 bits) bytes.
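A rough PHP sketch of the block decoding, assuming the six templates have already been loaded as flat arrays of 144 large tile indices (for instance, hard-coded from the image above):

// Decode one block: a header byte, an optional 18-byte (144-bit) deviation
// bitmap, and one correction byte per set bit. Bits are read MSB-first.
function decodeBlock(string $data, int &$pos, array $templates): array {
    $header = ord($data[$pos++]);
    $block = $templates[$header & 0b111];    // three rightmost bits pick the template
    if ($header & 0b10000000) {              // leftmost bit: exact match, nothing follows
        return $block;
    }
    $bitmap = substr($data, $pos, 18);       // one bit per large tile in the 12 * 12 block
    $pos += 18;
    for ($i = 0; $i < 144; $i++) {
        if ((ord($bitmap[intdiv($i, 8)]) >> (7 - $i % 8)) & 1) {
            $block[$i] = ord($data[$pos++]); // correction: an index into the large tileset
        }
    }
    return $block;
}

// Usage: skip the first 2700 bytes, then decode the 30 * 45 blocks of one part.
$data = file_get_contents('WORLDMAP.000');
$pos = 2700;
$blocks = [];
for ($i = 0; $i < 30 * 45; $i++) {
    $blocks[$i] = decodeBlock($data, $pos, $templates);
}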

Coastal Tiles

Uncharted Waters 2 world map coastal tiles
Replacing regular sea tiles, before and after

Having extracted all blocks, replace regular sea tiles that neighbor land to give coasts a less jagged look. To determine their replacements, iterate over all regular sea tiles and do the following:

  • Form 8 bits by going counterclockwise through each adjacent tile. The first, leftmost bit comes from the top left tile. The obtained bit is 1 if the adjacent tile is land (non-water and non-coast), and 0 otherwise.
  • Take this 8-bit value and add 256 to it to get n. Read the nth byte of DATA1.010; its value replaces the regular sea tile.

Example:

  • For a given regular sea tile, all three adjacent tiles above it are land while the rest are sea. This gives us the bits 10000011.
  • Adding 256 to the 8-bit value of 131, we get 387. As the 387th byte of DATA1.010 is 2, change the regular tile from 0 to 2.
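A rough PHP sketch of the coastal pass, where the tilemap is a two-dimensional array of regular tile indices and isLand() stands in for the real land check; whether the byte lookup in DATA1.010 is zero- or one-based is a detail to verify against my implementation:

// Placeholder land check: the real one treats every non-water, non-coast tile
// (not just 65) as land.
function isLand(array $map, int $x, int $y): bool {
    return ($map[$y][$x] ?? 0) === 65;
}

function applyCoasts(array $map, string $coastData): array {
    // Adjacent tiles in the order used above: top-left, left, bottom-left,
    // bottom, bottom-right, right, top-right, top.
    $neighbors = [[-1, -1], [-1, 0], [-1, 1], [0, 1], [1, 1], [1, 0], [1, -1], [0, -1]];
    foreach ($map as $y => $row) {
        foreach ($row as $x => $tile) {
            if ($tile !== 0) {               // only regular sea tiles are replaced
                continue;
            }
            $bits = 0;
            foreach ($neighbors as [$dx, $dy]) {
                $bits = ($bits << 1) | (isLand($map, $x + $dx, $y + $dy) ? 1 : 0);
            }
            if ($bits !== 0) {               // at least one neighboring land tile
                $map[$y][$x] = ord($coastData[256 + $bits]);
            }
        }
    }
    return $map;
}

// Usage: $map = applyCoasts($map, file_get_contents('DATA1.010'));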

Deserts and Other Terrain

Uncharted Waters 2 world map deserts and other terrain
Three sequences of before and after: filling deserts, updating polar regions, and updating temperate zones

For the few deserts on the world map, blocks only contain data about their edges. To fill their bodies, iterate over regular desert tiles (89) and replace the tiles to their right and below with 89 if they are land tiles (65). Iterate left-to-right, top-to-bottom, including the newly replaced tiles. Afterwards, replace all desert tiles that border non-desert tiles.
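A rough PHP sketch of the desert fill, again treating the tilemap as a two-dimensional array of regular tile indices:

function fillDeserts(array $map): array {
    foreach ($map as $y => $row) {
        foreach ($row as $x => $tile) {
            if ($map[$y][$x] !== 89) {                 // 89 = regular desert tile
                continue;
            }
            // Spread the desert to the tiles to the right and below, if they are land.
            foreach ([[$x + 1, $y], [$x, $y + 1]] as [$nx, $ny]) {
                if (($map[$ny][$nx] ?? null) === 65) { // 65 = regular land tile
                    $map[$ny][$nx] = 89;
                }
            }
        }
    }
    return $map;
}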

Blocks for the most part only express a single, default terrain, where regular land tiles are 65. Based on specific rules, apply corrections to them and to coastal tiles. For the polar regions, which are the first and last block rows, add 16 to cover them in ice. For the temperate zones, rows 1 to 13 and 31 to 43, add 8. These corrections are only applied to default terrain: some land south of the Strait of Magellan, despite being in the last block row, is not covered in ice.
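As a sketch of how I currently understand these latitude rules, the correction boils down to an offset per block row, added to the default land and coastal tile indices:

// Offset added to default terrain, based on which of the 45 block rows a tile
// belongs to. How the game decides the exceptions (such as the land south of
// the Strait of Magellan) is the part I have not worked out.
function terrainOffset(int $blockRow): int {
    if ($blockRow === 0 || $blockRow === 44) {
        return 16;                        // polar regions: cover in ice
    }
    if (($blockRow >= 1 && $blockRow <= 13) || ($blockRow >= 31 && $blockRow <= 43)) {
        return 8;                         // temperate zones
    }
    return 0;                             // tropics: leave as-is
}

If my reading is right, a default land tile of 65 thus becomes 81 in the polar rows and 73 in the temperate ones.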

This section contains few details as I have not understood how the game applies them. If one has gotten this far, however, defining the mapping rules for desert "coasts" should not be an arduous undertaking.

As a side note, I made changes to a handful of regular tiles to correct what appear to be minor oversights.

Closing Thoughts

I found the approach Uncharted Waters 2 uses to compress data fascinating: utilizing two tilesets, storing blocks as differences from templates, and deriving coastal tiles and terrain. It’s funny how this contrasts with what we can afford to do today, bundling entire web browsers with desktop applications.

Next up for my remake is making it possible to sail around the world. I foresee a couple of challenges:

  • How should the canvas be drawn? Its appearance changes often, as the color map varies with the in-game time. Drawing the entire map as a base, which I did for ports, is out of the question as this map is much larger.
  • Uncharted Waters 2 has AI fleets that sail around the world, and with a purpose. This means I will need to code their behavior. They need to be able to find their way to all ports, as well as hunt you down in the case of corsairs.
  • I need to investigate how the game simulates wind and ocean currents. Are things like the Gulf Stream, westerlies and trade winds implemented in the game? Where and how do storms occur?

The Cruel yet Inspirational Sport of Boxing

Over the past two years, what’s surprised me about myself is that I’ve taken up an interest in the sport of boxing. The sport where two people put on gloves and get onto a 6 m square platform, enclosed by rope. As the bell sounds, over the course of 36 minutes, they start boxing each other with the ultimate goal of scoring a knockout. The smallest margin of error can change their lives, all the while millions of spectators are cheering them on.

Canelo vs. Golovkin
Canelo Álvarez and Gennady Golovkin, two of the best boxers in the world, when they clashed on September 16, 2017

While it doesn’t take long to see the sheer brutality of boxing, I came to realize more as I gained a better understanding of the sport. Here are three of those realizations, along with some parallels to society as a whole.

Cruelty Beyond the Actual Fights

Something which is plaguing boxing, and has been for decades, is systemic corruption.

If a fight goes the distance, the scorecards of three judges determine the victor. But judges score boxing, like figure skating and diving, in a subjective manner. Outside of obvious knockdowns, judges look for the boxer who best controls the action and acts as the aggressor. They also look for the boxer who lands the most clean and hard punches, and the boxer who is able to defend better. In other words, the scoring is in stark contrast to the black and white nature of tennis and golf.

Boxing also has a lack of regulation and oversight. There’s no central authority in the form of a national commission. The sport also lacks structure, as there are no tournaments, leagues or schedules (outside of amateur boxing).

In the midst of all this chaos, the power brokers are the promoters. They set up the deals and arrange the fights. They can also be the ones responsible for the travel, lodging and food costs of the judges and the referee. Promoters can also have direct ties to the manager of a boxer, the very person who should be representing the boxer’s best interests. In short, boxing has conflict of interest written all over it.

Roy Jones Jr. vs. Park Si-hun
Park Si-hun lifts the rightful winner Roy Jones Jr. into the air. Park retired from boxing after the Olympics, and Jones would go on to become one of the best boxers of his generation.

An infamous example of corruption in boxing, although this was amateur boxing, happened at the 1988 Summer Olympics in Seoul. In the finals, Roy Jones Jr. beat his opponent Park Si-hun in a one-sided affair. Yet, when the result was announced, the hands raised by the referee were Park Si-hun’s. Park had an embarrassed look on his face, and in a display of human decency, lifted Jones into the air.

In the recent superfight between Canelo Álvarez and Gennady Golovkin, one of the scorecards sparked controversy yet again. Amidst all the discussions following the fight, one that stood out to me was Teddy Atlas debating for an entire hour on ESPN. Never in my life have I seen someone speak with this much passion, while expressing their anger and disgust. And if you take the time to understand his background you’ll understand why. He’s a veteran trainer who loves the sport and has devoted his life towards it. He’s one of the few people who knows what boxers have to go through to be successful. He knows the sacrifices they have to make. He knows what they put on the line day in and day out. Yet, due to corruption, their hard earned accomplishments can be taken away from them in one fell swoop.

It makes you wonder. What hurts more? Taking all those punches leading up to that moment, or swallowing an unjust loss?

A personal takeaway from this is that life can sometimes be brutally unfair. And it doesn’t even have to be a consequence of corruption. We have to remind ourselves of this every time we’re too fixated on a certain goal or ambition in life. We have to ask ourselves if we’re also enjoying the actual journey, rather than just the thought of reaching the destination. Because one day, unforeseen things beyond our control can happen, preventing us from ever reaching that destination.

The Importance of Marketing

To be regarded as a great boxer, you have to prove you can beat other boxers that are perceived to be great. As boxing lacks structure, you cannot force an opponent to step inside the ring with you.

If you aren’t marketing yourself well as you rise through the ranks, a consequence will be that other good fighters will evade you. If your boxing skills are through the roof, but you cannot sell out arenas and generate pay-per-view revenue, it makes little sense for other promoters to risk their boxers on you. On the other hand, if you’ve built up your personal brand well, opponents will line up to fight you even though they have little chance of beating you.

Some boxers have a harder time than others. They lack natural charisma. Their fighting style is too technical, as opposed to being an aggressive knockout artist. They don’t come from a country where the entire nation will rally behind them.

Floyd Mayweather and Conor McGregor face-off
Two master marketers in Floyd Mayweather and Conor McGregor showing how it's done

The boxer who mastered the art of marketing was Floyd Mayweather, having generated $1.3 billion in revenue throughout his career. Through boastfulness and flaunting his wealth, he created a persona that people hated. As he was such an exceptional boxer, he dangled his undefeated record like a carrot on a stick. Casual fans were paying for the chance of seeing him finally lose, while hardcore fans marveled at his skills.

Earlier this year, Mayweather came out of retirement to fight Conor McGregor in a boxing match. As they both walked away with hundreds of millions of dollars after the fight, I can’t help but think about a subject I touched on before. While both men are entertainers and great in their own right, the fight sold as well as it did because people believed McGregor had a chance. The marketing campaign led people to believe this would be a competitive match, rather than a spectacle. It was successful, because the average person doesn’t realize that, despite boxing and MMA being combat sports, they are still worlds apart. Leading up to the fight, when high-profile boxers (without a vested interest in marketing it) were asked about who would win the fight, you could tell it annoyed them. They felt that the suggestion alone of McGregor having a chance was disrespectful towards the sport of boxing.

This is a feeling I can relate to every now and then when it comes to software. I feel like some people, who lack an understanding of software and what it takes to create great products, marginalize the very profession I care so much about.

The Inspirational Side of Boxing

Looking beyond the cruel surface of boxing, what I find is something inspirational. It astounds me that there are people out there with the competitive spirit to step into the ring and excel. That there are people out there born into poverty with all the odds stacked against them. But because they had that innate drive, they endured more hardships and ended up forging a better future for themselves and their family.

The fight is won or lost far away from witnesses - behind the lines, in the gym, and out there on the road, long before I dance under those lights.
- Muhammad Ali

In the world of boxing, my favorite quote is by the late Muhammad Ali. Today, due to the Internet and social media, we focus a lot on instant gratification. We read about accomplishments and watch highlights and award shows. We see couples in happy relationships. We see athletes break world records. We see actors put on masterful performances. We see entrepreneurs sell their startups for millions of dollars. What we don’t see, unless we look for it, are the tens of thousands of hours of work they’ve put in to get to where they are.

Recounting a Year of Overhauling an E-commerce Solution

New Relic chart
In 2017, our Magento application's response time was below 140 ms. Before the end of 2015, it was still hovering around 1000 ms.

So far, I consider what I did during my first year at Paradox Interactive to be my greatest accomplishment. During that timespan, I reduced our Magento application's response time from 1000 ms to 140 ms. I also increased its reliability, paid back some technical debt and took ownership of the entire stack. At the beginning of 2016, I deployed the biggest improvements. For that whole year, compared to the year before, the conversion rate of our Magento store increased by 59%. Revenue also doubled.

As I've moved away from Magento development since then, I thought I'd close out this chapter of my career by recounting two memorable challenges from that eventful year.

Integration Woes

Our e-commerce solution uses Adyen to handle payments. While we only sell digital products today, back then we also sold physical products in the form of merchandise. Our own API backend delivers the digital orders, while a solution called Shipwire fulfills the physical orders.

Adyen Critical Bugs

Adyen logo

The way we integrated with Adyen was through their Magento plugin, which wraps Adyen's API. The primary job of the plugin is to set orders to complete upon successful payment. However, every now and then we would come across orders that got stuck and never progressed to complete. The reason was a race condition in how the plugin handled callbacks from Adyen. If a callback said a payment was successful, the plugin would update the corresponding order object. As a callback is an HTTP request, spawning a new Apache process, there is a window of opportunity where the new process handles the callback while the original process is still updating the order object.

Adyen released a new version of their Magento plugin, fixing, among other things, this particular issue. As this version of the plugin seemed to contain large amounts of refactored code, I tested it thoroughly and discovered a critical bug: orders that only contained digital products would never progress to the complete state. While not evident at first, this was because the plugin didn't take into account that order objects can have an absent shipping address in Magento.

Another problem, relevant to us, was how the plugin addressed the race condition. Instead of processing callbacks immediately, the plugin stores them in the database. A cron job then runs every minute to process callback events older than 5 minutes, which added a delay to what we deliver to our customers. As I couldn't see a better quick solution, I patched the threshold down to 1 minute.

At a later time, we needed to upgrade our plugin again. While everything seemed fine, something odd occurred as orders poured in when we released an expansion for one of our games. For some reason, the number of stuck orders started piling up. Only after two hours of debugging did I understand what had happened:

if($order->getIsVirtual()) {
    $this->_debugData[$this->_count]['_setPaymentAuthorized virtual'] = 'Product is a virtual product';
    $virtual_status = $this->_getConfigData('payment_authorized_virtual');
    if($virtual_status != "") {
        $status = $virtual_status;

        // set the state to complete
        // (this is the culprit: the order object is a state machine, so
        // setting its state directly like this makes Magento throw)
        $order->setState(Mage_Sales_Model_Order::STATE_COMPLETE);
    }
}
Magento will throw an exception if you try to set an order's state directly to complete

When processing callbacks, and for orders containing only digital products, the plugin executes a line of code that sets the state of the corresponding order to complete. In Magento, the order object is a state machine. Directly changing the state, and to complete in particular, will throw an exception. This block of code also seemed unnecessary, as the order object is already complete before it executes.

The reason orders piled up was that the cron job could only process one order per minute. Each run loops through the queued callbacks and their corresponding orders, but crashes after the first iteration. I didn't spot this bug while testing because I never made enough orders in quick succession to notice something was wrong. It was also hard to immediately understand what was going wrong, as exceptions from cron jobs triggered by Magento don't end up where they usually go, but in a table called cron_schedule in the database.

While I find Adyen to be a superb payment provider, I learned something important. Coinciding with what I observed while working for a large e-commerce firm, e-commerce is still dominated by physical products. If you sell digital products, you have to be extra careful with plugins. They are poorly tested for digital products (as our case showed) and work under assumptions that may not hold for them. The 5-minute callback delay also illustrates this. If you sell physical products, adding a 5-minute delay before an integration can pick an order up for shipment doesn't have as adverse an effect on user experience as it does for digital products.

Shipwire Order Fetching Logic

Shipwire warehouse
Shipwire handles inventory and fulfills orders

The selling point of Shipwire is that they handle your physical inventory in their warehouses and fulfill orders for you. While it, like Adyen, offers an API, we were using a Magento integration they had built. You fill in the credentials of an API user of your store, allowing Shipwire to poll for unfulfilled orders every now and then.

On occasion, it would miss picking up orders. In contrast to Adyen's Magento plugin, the code Shipwire runs is invisible to us, making it hard to debug. To complicate things, Shipwire communicates through SOAP rather than a REST API, and you can't manually trigger a polling attempt.

In the end, I added a snippet of logging code to a method that all Magento API calls pass through. After examining which endpoints Shipwire called and with what payloads, I realized the flaw. As you'd expect, Shipwire fetches all paid-for but unshipped physical orders. But the request also applies a filter, fetching only the orders that have an updated_at timestamp later than the last order Shipwire picked up. While this filter is sensible, it doesn't take into consideration that newer orders can be ahead of older orders in their progression. Some forms of payment take longer than others, and customer service might update an order a day or two later.

As it was clear that Shipwire's support doesn't handle technical issues of such detail, I solved this problem by overriding the method that all Magento API calls pass through. The overriding code intercepts all requests from Shipwire that try to fetch orders, and subtracts 30 days from the updated_at filter.
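The override itself boiled down to rewriting the filter before the normal order-list logic runs. The sketch below is only an illustration: the function and array keys are placeholders rather than the exact code, but it captures the idea.

// Illustrative only: widen the lower bound of an updated_at filter by 30 days
// so orders that progressed "late" (slow payment methods, customer service
// edits) are still picked up by Shipwire's next poll.
function widenUpdatedAtFilter(array $filters, int $days = 30): array
{
    if (isset($filters['updated_at']['from'])) {
        $filters['updated_at']['from'] = date(
            'Y-m-d H:i:s',
            strtotime($filters['updated_at']['from']) - $days * 24 * 3600
        );
    }
    return $filters;
}

// A filter asking for orders updated after 2016-05-01 becomes one asking for
// orders updated after 2016-04-01.
$filters = widenUpdatedAtFilter(['updated_at' => ['from' => '2016-05-01 00:00:00']]);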

Another solution would have been to write our own integration directly against Shipwire's API. An opinion I've formed is that, as your e-commerce solution matures, you should strongly consider moving away from platform-specific plugins and integrations. You should instead write your own integrations against a solution's "core" API. The core API is by necessity much more tested and stable. While using platform-specific plugins lets you get started quickly, they tend to carry two major drawbacks: they are bloated as they need to cover a wide range of use cases, and they are developed by those who are knowledgeable about either the core API or the platform, but not both.

API Backend 504 Gateway Timeouts

This was without a doubt one of the most elusive bugs that I've encountered.

Our store uses our own backend API for a number of things, with the most important ones being account integration, order fetching, and delivering Steam product keys. On rare occasions, calls to our backend failed, which could result in a customer not getting their product keys. By adding better logging to our Magento codebase, I found out that these failures occurred for all endpoints. Each failure resulted in a response containing an empty body, with a header of "HTTP/1.1 504 GATEWAY_TIMEOUT".

Besides the difficulty of reproducing this, each request passes through a vast number of servers and services. Our backend is an extensive Amazon Web Services stack where requests go through NGINX, Elastic Load Balancer, Elastic Beanstalk, Apache and Tomcat before reaching our Scala codebase. The response from our backend then has to go through NGINX, Varnish and Apache on the Magento side. After ruling out Magento, my colleague who works with our backend did a series of investigations.

He tweaked timeout and KeepAlive settings, to no avail. He performed analyses on our logs and found that the number of 504 Gateway Timeouts correlated with the number of requests, but there was no relation to either latency or load.

In the end, my colleague discovered what had haunted us for almost a year. As I lack in-depth knowledge about our backend, here's how I understood it: our backend nodes have Apache in front of them, and Apache was configured to rotate its logs every minute. Whenever that happens, Apache reloads, thereby dropping all in-flight requests.

Managing a Managed Host

A consultancy company used to manage our e-commerce solution. They deployed our store on a managed host, operated by a hosting provider. This meant that neither the consultancy company nor we had root access to the server.

feelsbadman

If you're someone who has experience managing servers, this is a frustrating situation to be in. Part of that frustration was that it amounted to lots of communication time. We couldn't perform trivial tasks such as setting up New Relic, adding a virtual host or changing a configuration file without going through the hosting provider. The user we finally had them create for us had so few permissions at first that we couldn't even read our application's log files.

Another part of that frustration was that both the solution they were using and their sysadmins were lacking. They lacked transparency and weren't following best practices (to the limited extent of my knowledge). We had to hold their hands too often, and if a problem occurred they didn't attempt to understand the root cause and take measures to prevent it from happening again. To cut them some slack, the vast majority of those who use a managed host are non-technical. Their other clients are thus less likely to see their shortcomings, meaning they can get away with a poorer level of service.

For instance, I recall three incidents that highlighted the challenges:

Backups to the Same Disk

One time, I backed up our production database before a deploy, with the intention of removing the backup the next day. That night, I received a flood of alerts from New Relic. To my horror, I realized Magento was returning 503 errors because our server was out of disk space! While our hosting provider answered my email and freed up space, it was only the following morning that I realized what had happened: their solution performs nightly backups, but saves the backups to the same disk! The same backups were also causing our application to hang every midnight.

For this particular incident, I was also at fault, as I shouldn't have backed up the entire database. I should've just backed up the specific tables of interest. That way, the nightly backup wouldn't have used up as much additional space.

Varnish 503 Service Unavailable

Our hosting provider was using an unnecessarily complex server setup: HAProxy > Varnish > Apache. Varnish was not configured to do anything, and we didn't need load balancing as we were on a single, powerful and underutilized server.

On four occasions over the course of two months, all customers ended up getting 503 errors from Varnish when they tried to log in or make a purchase. This was odd, as it had never happened before. It was also hard for me to debug, as I had access to neither Varnish nor HAProxy. The little access I had to Apache was restricted to our DocumentRoot directories. To make matters more frustrating, every time I asked our hosting provider to troubleshoot, the problem somehow disappeared. They would then drop the investigation, leading to the same problem resurfacing a week or two later.

In a desperate attempt the fourth time it happened, I asked our hosting provider if they had checked /var/log/ on our server. It was then that they found "zend_mm_heap corrupted" at the end of the Apache error log, the key clue which solved the mystery: our hosting provider periodically upgrades packages on their managed hosts. This time, we ended up with a combination of PHP and OPcache versions that could cause segmentation faults. These faults had a tendency to trigger only after Apache had run non-stop for several days. Hence, whenever we contacted our hosting provider to troubleshoot, they would inadvertently fix the problem by making random tweaks to the configuration and restarting.

What surprised me the most was that they didn't even look in one of the first few places you'd look. Throughout the whole process, they also never raised the possibility that the package upgrades could've been behind this critical bug. Going back to the server setup, there would also have been less to be confused by if HAProxy and Varnish hadn't been in the picture at all.

HAProxy Misconfiguration

As a final example, there was also an incident when we asked them to swap our wildcard certificate for an EV certificate. When carrying out the changes, they messed up X-Forwarded-Proto in HAProxy so that it had a value of "https https", allegedly due to a bug in the control panel they were using. This caused our store to become unavailable as users ended up in a redirection loop. While mistakes do happen, this particular one took them 30 minutes to rectify. They simply hadn't backed up the configuration file, so they had trouble even spotting the problem.

The Successful Migration

During the second half of that first year, I had gained a good enough understanding of our e-commerce solution and pulled the trigger on migrating it. The goal was to gain better control, and not let a hosting provider be a source of distraction. It also gave us the opportunity to use PHP 7, which had just become available.

The migration project involved several phases: picking a hosting provider, setting up servers, testing our solution on PHP 7, writing bash scripts for the migration, and performing test migrations.

A couple of days before the migration, we lowered the TTL of our domain's A records. I deployed our codebase and moved over all the media assets. On migration day, I put both our old store and our new store in maintenance mode while our IT manager updated the A records. A bash script was then run to migrate the MySQL database as well as the Redis database. Once it completed, I took our new store out of maintenance mode. The downtime ended up being no more than 15 minutes. (Had I performed the migration today, I would've taken advantage of replication.)

A challenge with the migration was the amount of communication and coordination required. I decided on the exact date and time of the migration together with Marketing and Sales. This was then communicated to other parts of our organization, as well as to both our old and new hosting providers. I also tasked our old hosting provider with forwarding all traffic to the new server.

Honorable Mentions

Screenshot of in-game store
The minimalistic store with its base theme and made-up product catalog

Besides the integration and hosting provider woes, there were a few other memorable challenges.

The original codebase of our store wasn't in the best shape, which was something I improved over time. A prominent problem was that almost all the code used for the integrations was crammed inside a God class. While troubleshooting integration problems and implementing new features, I broke this class down into several classes, each with a single responsibility. I also reduced tight coupling and removed needless dependencies. For instance, one requirement is that if a customer changes their address during checkout, the new values need to be synced to our backend. Much of this requirement was implemented in the frontend by sprinkling some jQuery into the checkout templates, which causes unnecessary distraction whenever you redesign your checkout.

I also built a store view for Magento, intended for selling expansions and DLC inside our games through an in-game browser. The hard part was making the store fast and minimalistic. To do this, you have to have a good understanding of how Magento, and particularly the checkout, works. In addition, I created acceptance tests in Selenium covering the entire purchase flow. In the end, this store never launched as it clashed with Valve's interests more than we had anticipated. This was understandable, as Steam players purchasing through this store would deprive Valve of their 30% share.

Reflections

Looking back at the successful year, I feel an overwhelming amount of gratitude. Much of the success was made possible by what I learned at my previous job (a leading Magento consultancy company). My former colleagues inspired and challenged me to learn more about software development, and particularly about PHP, Magento and object-oriented programming. One colleague taught me something that will stick with me for a long time: you shouldn't just blindly learn how to do something. You need to go beyond that, and seek to understand how things work behind the layers of abstraction.

The successful year was also made possible by my manager and closest colleagues, who gave me a lot of freedom to improve our e-commerce solution. It also illustrates the importance of continuous product improvement. While we tend to get lost focusing on new features and the number of them, it's important not to lose sight of the core features of a product. For those core features, we need to endlessly ask ourselves if they can be improved and carry out these improvements.