This is especially problematic for a site like Twitter, which can see unexpected spikes in traffic and user interest, Krueger said. Krueger contrasts Twitter with online retail sites, where companies can prepare for major traffic events like Black Friday with some predictability. “When it comes to Twitter, they are likely to have a Black Friday on any given day at any time of day,” says Krueger. “On any given day, any number of news events can occur that can have a significant impact on the conversation.” That is much harder to handle after eliminating up to 80% of your SRE staff, a figure that Krueger says has circulated in the industry but that MIT Technology Review could not confirm. The engineer agrees that the percentage sounds “reasonable.”
The current Twitter engineer doesn’t see a way out of the problem other than reversing the layoffs (the company has reportedly tried to walk some of them back). “If we’re running at this breakneck pace, things are going to break,” he said. “There is no way around that. We are accumulating technical debt much faster than before, almost as quickly as we are accumulating financial debt.”
The list is long
He describes a bleak future in which problems pile up as the backlog of maintenance and repair tasks grows longer and longer. “Things will be broken. Things will be broken more often. Things will be broken for longer. Things will be broken in more serious ways,” he said. “Everything will compound until, eventually, the site is unusable.”
The engineer said Twitter’s decline into an unusable wreck would take some time, but the signs of rot were already visible. It starts with the little things: “Errors on whatever part of whatever client they’re using; whatever service in the backend they’re trying to use,” said the engineer. “They’ll be minor annoyances at first, but as backend fixes are delayed, things will build up until people eventually give up.”
Krueger says Twitter won’t blink out of existence all at once, but we’ll start to see a growing number of tweets that fail to load and accounts that drop in and out of service seemingly at random. “I would expect anything that is writing data on the backend to be subject to delays, timeouts, and more complex kinds of failure conditions,” says Krueger. “But those tend to be more insidious. And they also often take more work to track down and resolve. If you don’t have enough engineers, that’s going to be a significant problem.”
Manual retweets and lagging follower counts are signs that this is already happening. Twitter engineers designed fallbacks so that when parts of the platform go down, the functionality doesn’t go completely offline but instead serves cut-down versions; that’s what we’re seeing now, Krueger said.
Beyond the minor glitches, the Twitter engineer also believes significant outages are coming, thanks in part to Musk’s cost-cutting push to offload some of Twitter’s cloud servers and infrastructure spending in an effort to save up to $3 million a day, Reuters reports. That project, which came out of Musk’s war room, was called the “Deep Cuts Plan.” One of the Reuters sources called the plan “delusional,” while University of Surrey cybersecurity professor Alan Woodward said that “unless they have over-provisioned the existing system, the risk is reduced capability, and degraded usability seems to be a logical conclusion.”
Meanwhile, when things do break, there is less internal institutional knowledge available to quickly fix problems as they arise. “A lot of the people I saw leave after Friday had been there 9, 10, 11 years, which is ridiculous for a tech company,” the Twitter engineer said. When those individuals walked out of Twitter’s offices, decades of knowledge about how its systems worked went with them. (Twitter insiders, and observers on the sidelines, have previously described Twitter’s knowledge base as excessively concentrated in the minds of a few programmers, some of whom have been fired.)