Most software deals with imperfect data. Humanitarian software deals with data that's actively dangerous if it's wrong.
At UNOCHA, I spent two years building platforms that aggregated and visualized humanitarian data — funding flows, people affected by crises, Protection of Civilians data, response coverage. The numbers on the dashboards I built went directly into briefings for the Security Council, donor meetings, and operational decisions in the field.
If a chart shows 2 million people in need when the actual number is 4 million, that's not a bug. That's a funding gap that leaves two million people without assistance.
Data you can't trust but have to use
Here's the uncomfortable reality: in an active crisis, data is always incomplete. You're estimating population displacement in a region where the census data is ten years old. You're tracking food insecurity in areas where aid workers can't safely access. You're aggregating reports from dozens of organizations that each define "affected population" slightly differently.
The engineering instinct is to reject bad data: validate strictly, throw exceptions, require clean inputs. In humanitarian work, you can't do that. If you reject every imperfect data point, you're left with nothing. The approach has to be different: accept the data, flag the uncertainty, show confidence intervals, and let the humans make decisions with full awareness of the limitations.
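The pattern is simple enough to sketch. Here's a minimal, hypothetical illustration — none of these names come from the actual platforms — of what "accept, flag, propagate" looks like: every report is kept, uncertainty travels alongside the value, and caveats are accumulated rather than discarded.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: accept every report instead of rejecting imperfect
# ones, and carry uncertainty alongside the value so downstream views
# can surface it. All names here are illustrative.

@dataclass
class Estimate:
    value: Optional[float]   # best available figure; may be missing
    low: Optional[float]     # lower bound of confidence interval
    high: Optional[float]    # upper bound
    source: str              # who reported it
    caveats: list            # human-readable flags, never discarded

def ingest(raw_value, source, low=None, high=None):
    """Accept an imperfect data point rather than raising on it."""
    caveats = []
    if raw_value is None:
        caveats.append("value missing; bounds or caveats only")
    if low is None or high is None:
        caveats.append("no confidence interval provided")
    return Estimate(raw_value, low, high, source, caveats)

def aggregate(estimates):
    """Sum what we have; propagate bounds and collect every caveat."""
    known = [e for e in estimates if e.value is not None]
    total = sum(e.value for e in known)
    caveats = [c for e in estimates for c in e.caveats]
    if len(known) < len(estimates):
        caveats.append(
            f"{len(estimates) - len(known)} of {len(estimates)} sources had no figure"
        )
    low = sum(e.low for e in known) if all(e.low is not None for e in known) else None
    high = sum(e.high for e in known) if all(e.high is not None for e in known) else None
    return Estimate(total, low, high, "aggregate", caveats)
```

The point of the sketch is the caveats list: the aggregate is never just a number, and the dashboard layer can decide how loudly to display the limitations.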
The Protection of Civilians app
One of the projects I'm most glad I worked on was the Protection of Civilians mobile app and dashboards. Tracking civilian harm in conflict zones is grim work. The data is sensitive, the sources are often at personal risk, and the political implications of every number are enormous.
The technical challenge was real — offline-capable mobile data collection, secure transmission, aggregation across different conflict contexts, visualization for both operational and advocacy purposes. But the weight of it was the part that stayed with me. You test differently when you know the data represents actual people in actual danger.
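Offline-capable collection, in particular, comes down to one discipline: persist locally first, transmit opportunistically, and never lose a record to a failed upload. A rough sketch of that discipline — purely illustrative, not the app's actual code — might look like:

```python
import json
import os
import tempfile  # used in examples; any durable local path works

# Hypothetical sketch of offline-first data collection: each record is
# written to local storage immediately, then flushed to a server when a
# connection is available. Failed sends are kept for retry.

class OfflineQueue:
    def __init__(self, path):
        self.path = path

    def record(self, entry: dict):
        """Append to local storage immediately — never block on the network."""
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def flush(self, send) -> int:
        """Attempt upload via `send`; keep anything that fails for retry."""
        if not os.path.exists(self.path):
            return 0
        with open(self.path) as f:
            entries = [json.loads(line) for line in f if line.strip()]
        remaining, sent = [], 0
        for e in entries:
            try:
                send(e)
                sent += 1
            except OSError:
                remaining.append(e)  # transmission failed; retry later
        with open(self.path, "w") as f:
            for e in remaining:
                f.write(json.dumps(e) + "\n")
        return sent
```

In a real deployment the local file would be encrypted and the transport authenticated — the sketch only shows the queue-and-retry shape.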
What it taught me
I came out of UNOCHA with a much stronger sense of data responsibility. Every dataset has assumptions baked in. Every aggregation hides something. Every visualization makes choices about what to emphasize and what to suppress. Being aware of those choices — and making them deliberately rather than accidentally — is part of the engineering job, not just the analyst's job.