Hopefully, I am back to writing again. I could not find time for anything other than the project I tried so hard to finish. This day hopefully brings that sweet little "project is finished" feeling that can be savored for a brief while before the next project begins. The "project is finished" feeling is arguably one of the few emotions a programmer is capable of expressing to the outside world. Just kidding. I can definitely laugh as well, especially in the most inappropriate situations.
Every project surfaces different unforeseen problems. Some problems come from the chosen tech stack and some from errors in the data. In any data manipulation application, the data must first be loaded into memory. To do so, data structures have to be created to accommodate it. This works well until it doesn't, which means: until there is a discrepancy in the data. Or, in other words, until the data no longer matches the expectations the data structure encodes.
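To make that concrete, here is a minimal sketch in Python. The Customer/order model is made up purely for illustration, not the project's real schema; the point is that the expectation ("every order belongs to exactly one existing customer") lives silently in the loading code and only breaks once a real row contradicts it.

```python
from dataclasses import dataclass, field

# Hypothetical entity for illustration: the structure encodes the
# expectation that every order references exactly one known customer.
@dataclass
class Customer:
    customer_id: int
    orders: list = field(default_factory=list)

def load_orders(rows, customers):
    """Load raw order rows into the in-memory structure.

    Fails as soon as a row contradicts the expected shape,
    e.g. an order referencing a customer that does not exist.
    """
    for row in rows:
        try:
            owner = customers[row["customer_id"]]
        except KeyError:
            # The discrepancy only surfaces here, at import time,
            # long after the data structure was designed.
            raise ValueError(f"unexpected relation in row {row!r}")
        owner.orders.append(row["order_id"])

customers = {1: Customer(1)}
load_orders(
    [{"order_id": 10, "customer_id": 1},
     {"order_id": 11, "customer_id": 2}],  # raises ValueError
    customers,
)
```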
In my situation, one small excess relation among the entities involved led to so much redesigning and rewriting that I am really glad it is over. When the customer confirmed for themselves that this structure is in fact part of their database, their surprise resembled "this should not even be possible". What's worse, now that they know about the problem, they are likely already working on changing the data to meet the expected criteria. Such a change would remove the entity relation that caused the delays in the first place, effectively rendering all the features implemented to work around the discrepancy meaningless.
In the end, it is all my fault for not examining the data thoroughly at the beginning, but what is the right thing to do? Obsess over the data and procrastinate on the product? I did a brief data examination and went on to coding. The project was meant as a proof-of-concept (PoC), so I might have discovered that its main goal could not be achieved in the imagined way, which would have rendered the error in the data irrelevant long before it was found. I counted on my ability to mold the code to import the provided data at some point in the development cycle, and that was exactly when I found the problem: after the PoC was definitively confirmed, but also after some code had to be thrown away.
We are now spoiled by rich fake data generation tools that allow us to pre-fill an application with diverse data and test as many edge cases as possible before going live. Yet there is apparently a flip side: because the generated data is very static, there are generally no unexpected relations among its entities. The final takeaway from this story is not to count too much on data generated via Faker or a similar tool and to try to use real data as soon as possible. It might prevent some headaches.
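A tiny Python sketch of what I mean (the customer/order schema here is invented for illustration): data generated with Faker only ever contains the relations you explicitly wire up yourself, which is exactly why it never surprises you the way a customer's real export can.

```python
from faker import Faker

fake = Faker()
Faker.seed(0)  # deterministic output, typical for test fixtures

# Every relation below is one we scripted ourselves, so the "excess"
# relations that lurk in real production data never show up here.
customers = [{"id": i, "name": fake.name()} for i in range(3)]
orders = [
    {"id": n, "customer_id": fake.random_int(min=0, max=2), "item": fake.word()}
    for n in range(10)
]

# Real data might link an order to two customers, or to none at all;
# this fixture, by construction, cannot.
print(customers[0], orders[0])
```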