When a serious error has to be explained, this phrase emerges almost by default. Behind it often lies a structure in which responsibility is diluted and no relevant authority takes the blame. This raises an urgent question: how long are we going to allow “system failures” to remain a shield against accountability?
The reality is that IT errors do happen, but the repeated use of this excuse as a way out is a clear example of how institutions tend to avoid accountability. Below, we review some emblematic cases in which the excuse of a “system error” was not only the official response but also exposed deeper problems of transparency and accountability.
Examples of “system errors” that made history
In one of the most high-profile instances of a system failure, the 2020 Iowa Democratic caucus was marked by a days-long delay in reporting results, attributed to a “system error.” The app responsible for managing and reporting the results had been launched without adequate testing, causing major controversy. The failure left candidates and voters without information for days and drew criticism of the security and transparency of the electoral process in one of the most important political events in the United States. Despite investigations, responsibility was never fully assumed, and the official explanation boiled down to blaming the IT system.
In the field of public data, the Chilean Tax Agency faced a scandal in 2019 when personal data of thousands of taxpayers was leaked. The institution blamed the issue on an error in its security systems but failed to provide information on corrective measures or the control failures that allowed such a vulnerability. The leak generated deep distrust among citizens, especially due to the lack of clarity regarding who should take responsibility and what sanctions would apply. The excuse of a “system failure” was used as a means to avoid deeper scrutiny of the agency’s structure and protocols, leaving many questions unanswered.
Another notable and recurrent case is found in banks worldwide, where system failures have produced erroneous customer charges. From duplicated transactions to miscalculated interest, financial institutions have repeatedly blamed “system failures” without a clear explanation of the root cause. These failures, which in some cases froze accounts and left customers in difficult economic situations, generated significant frustration. Yet transparency was lacking, and banks rarely took additional steps to compensate for the damage caused.
These are just three examples of a long list of cases where the “system error” excuse has been an easy way out for institutions and companies around the world. The frequency of these incidents is such that many readers may recall similar instances from their own experience.
Why is it still so easy to hide behind a “system error”?
In each of these cases, the excuse of a “system error” reveals a troubling pattern: a lack of institutional responsibility. Many organizations prefer to protect their image by attributing mistakes to an abstract entity (the IT system) rather than acknowledging human or structural errors. At times, these IT failures reflect poor planning, insufficient budgets for IT infrastructure, or a work culture unwilling to point fingers at certain levels of the hierarchy.
Moreover, this practice points to a lack of transparency and often indicates a disregard for effective control and review policies. In large organizations, particularly in the public sector, the delegation of responsibility to automated systems, combined with weak oversight, creates an environment where system errors become the easy way out.
By using the excuse of a “system error,” attention is diverted from structural issues and the lack of resources allocated to training and improving systems. These errors are often symptoms of poor IT management, where quick and cheap solutions are prioritized over quality and security. This also implies a lack of clear supervision and maintenance protocols, leaving the proper functioning of systems that support critical processes to chance.
This attitude also contributes to a corporate culture that encourages impunity and hinders learning. Without a clear assignment of responsibility, errors are repeated without effective corrective measures being implemented, creating a cycle where the same problem may reappear with no consequences. Thus, errors become perpetuated, and the opportunity for improvement is lost in the convenience of not taking responsibility.
What can be done to put an end to this excuse?
Eliminating the abuse of the “system error” excuse requires deep changes in how organizations, especially public ones, manage these failures. One of the main measures to implement is a standardized response to errors. This means that, in the event of an incident, institutions should provide two kinds of explanation: a brief, clear one for the general public, describing what happened in understandable terms, and a more technical, detailed one for those who wish to dig into the issue. This dual approach would reduce information gaps and avoid speculation, contributing to transparency and trust.
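As a rough illustration of what this dual standard could look like in practice, the sketch below models an incident notice that carries both explanations side by side. It is a minimal sketch in Python; the class and field names are hypothetical, not drawn from any existing reporting standard.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentNotice:
    """Hypothetical dual-audience incident report: one plain-language
    summary for the general public, one technical account for specialists."""
    incident_id: str
    occurred_at: datetime
    # Brief, jargon-free explanation aimed at the general public.
    public_summary: str
    # Detailed technical explanation for those who want to dig deeper:
    # affected components, root cause, remediation.
    technical_detail: str
    corrective_measures: list[str] = field(default_factory=list)

# Illustrative values only.
notice = IncidentNotice(
    incident_id="2024-001",
    occurred_at=datetime(2024, 3, 15, 9, 30),
    public_summary="Online payments were unavailable for 2 hours; no data was lost.",
    technical_detail="Database connection pool exhausted after a configuration change.",
    corrective_measures=["Roll back pool settings", "Add load test to release checklist"],
)
```

Publishing both fields together, rather than choosing one audience, is the point: the short summary prevents speculation, while the technical detail makes the explanation auditable.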
Response standards in the private sector
For private companies, it is essential to adopt the response standard described above, detailing both the error and the measures taken to prevent its recurrence. In addition, a clear, structured compensation mechanism should exist for recurring system errors: if a particular type of error repeats several times within a given period, a predefined compensation scheme should activate automatically.
For instance, an airline that repeatedly suffers a system error in its reservation management should have a mechanism for immediate compensation of affected users, such as refunds, discounts on future flights, or upgrades to the services contracted (e.g., seats in a higher class). These mechanisms should be clearly established and published on the company’s website, so customers know their rights and the compensation they are entitled to if the error recurs.
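A minimal sketch of how such an automatic trigger might work, assuming a simple count-within-a-rolling-window rule; the error type, threshold, and compensation options below are illustrative, not taken from any real airline’s policy.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical policy: if the same type of error recurs this many times
# within the rolling window, compensation activates automatically.
THRESHOLD = 3
WINDOW = timedelta(days=30)
COMPENSATION = {
    "reservation_error": ["refund", "future-flight discount", "class upgrade"],
}

_occurrences: dict[str, list[datetime]] = defaultdict(list)

def record_error(error_type: str, when: datetime) -> list[str]:
    """Log one occurrence and return the compensation options owed to
    affected customers if the recurrence threshold was crossed."""
    _occurrences[error_type].append(when)
    # Keep only occurrences that fall inside the rolling window.
    recent = [t for t in _occurrences[error_type] if when - t <= WINDOW]
    _occurrences[error_type] = recent
    if len(recent) >= THRESHOLD:
        return COMPENSATION.get(error_type, ["refund"])
    return []

# The third failure within a month crosses the threshold.
for day in (1, 10, 20):
    owed = record_error("reservation_error", datetime(2024, 6, day))
print(owed)  # ['refund', 'future-flight discount', 'class upgrade']
```

The design choice that matters is that the trigger is predefined and mechanical: compensation does not depend on how loudly customers complain or on an ad hoc internal decision.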
Moreover, companies should ideally develop a preventive alert system to inform users in real time about technical issues, so that customers do not discover problems unexpectedly. This would not only improve the user experience but also help mitigate the error’s impact on the company’s reputation. Such policies could also include personalized compensation options, letting customers choose between monetary reimbursement, service vouchers, or discounts on future purchases.
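The alert side could be as simple as pushing a status update to every subscribed channel the moment monitoring detects a fault, so customers hear about the problem from the company first. The sketch below assumes hypothetical channel names and a placeholder delivery step; it is illustrative only.

```python
from datetime import datetime, timezone

# Hypothetical subscriber channels; in practice these would be email,
# SMS, push notifications, or a public status page.
SUBSCRIBERS = ["status-page", "email-list", "mobile-push"]

def publish_alert(component: str, impact: str, workaround: str | None = None) -> dict:
    """Build and fan out a proactive incident alert so customers learn
    about the problem before they run into it themselves."""
    alert = {
        "component": component,
        "impact": impact,
        "workaround": workaround or "none yet",
        "issued_at": datetime.now(timezone.utc).isoformat(),
    }
    for channel in SUBSCRIBERS:
        # Placeholder for a real delivery mechanism (SMTP, push API, etc.).
        print(f"[{channel}] {alert['component']}: {alert['impact']}")
    return alert

publish_alert(
    component="reservation system",
    impact="Bookings may fail to confirm; do not retry payment.",
    workaround="Book by phone until resolved.",
)
```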
This approach would not only improve transparency and customer trust but also force companies to take recurring errors seriously, establishing corrective and compensatory mechanisms to avoid accumulating complaints and conflicts.
Response standards in the public sector
In the case of public administrations and any organization receiving state funds, the demands should go beyond the standard established for the private sector. Just like companies, public institutions must have a clear mechanism to compensate citizens affected by system failures, but with greater rigor in transparency and accountability. In addition to providing a clear explanation and prompt compensation, it is crucial to publicly identify the department, service, or institution directly responsible, along with the entire chain of command.
These mechanisms must not only inform the public but, in severe cases, should also include an external and independent audit to analyze the root of the error and the measures needed to prevent its recurrence. The conclusions of these audits should be accessible to the public, fostering a culture of transparency that avoids cover-ups and the repetition of mistakes.
In addition to compensation and audits, public organizations must be proactive in preventing these errors. This means their quality control policies should focus on detecting and correcting potential failures before they impact users, ensuring system robustness from the implementation phase. The priority should be to prevent problems from escalating rather than simply managing the consequences after the damage is done.
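As a concrete (and deliberately simplified) reading of “detecting failures before they impact users,” the sketch below gates a release on a smoke test of critical endpoints; the example.gov URLs are placeholders, not real services.

```python
import urllib.request

# Hypothetical pre-release smoke test: the critical endpoints a public
# system must answer correctly before a change is allowed to go live.
CRITICAL_ENDPOINTS = [
    "https://example.gov/api/health",
    "https://example.gov/api/taxpayer/lookup?id=TEST",
]

def smoke_test(endpoints: list[str], timeout: float = 5.0) -> bool:
    """Return True only if every critical endpoint responds with HTTP 200.
    A failure here should block deployment, not be discovered by citizens."""
    ok = True
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    print(f"FAIL {url}: HTTP {resp.status}")
                    ok = False
        except OSError as exc:  # covers URLError, HTTPError, timeouts
            print(f"FAIL {url}: {exc}")
            ok = False
    return ok

if __name__ == "__main__":
    # Block the release if any critical path is broken.
    assert smoke_test(CRITICAL_ENDPOINTS), "deployment blocked: smoke test failed"
```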
Finally, to foster a true culture of responsibility, it is necessary to assign accountability every time such an error occurs. In the public sector, this implies greater strictness in demanding explanations from department heads, ensuring that not only the error but also the human decisions behind it are evaluated and sanctioned if necessary. In the private sector, these decisions are up to the company itself, but in the public domain, accountability must be inescapable and far more rigorous.
A combination of standardized responses, compensation mechanisms, transparency, independent external audits, and a preventive approach would help restore public trust while ensuring more robust and responsible processes.