Sign up for our popular daily email to catch all the latest EV news!
As the global electric vehicle market grows, the focus often falls on vehicle range and battery technology. However, the underlying charging infrastructure is just as critical. A failed charging attempt can be a minor inconvenience for a private car owner but catastrophic for a commercial fleet operator. So, how do you build a charging network across multiple countries that just works?
To find out, we spoke with Peter Brockhoff, CTO of EV charging solutions provider Floading, and Jamie Hawkins, a Senior IoT Specialist at Eseye, a company that provides the mission-critical connectivity for major charging networks like Shell Recharge and InstaVolt. They explain the challenges of scaling, why standard connectivity isn’t enough, and how their partnership was key to Floading achieving near-100% network uptime across Europe.

Why is reliability such a critical issue as the EV charging market expands?
Peter Brockhoff: The entire industry is reshaping global transportation, and the demand for reliable, accessible, and fast charging is immense. This is especially true in the commercial vehicle space, which is growing incredibly fast. We recently tested the new Mercedes-Benz eActros 600 truck, and it charged at 400 kilowatts on our chargers. These commercial operators are ordering new electric trucks in large numbers, and their business models depend entirely on reliability.
We were just discussing this with a big logistics company. They make twenty-three stops a day with a truck. If a charge fails at some point, they could miss one to three stops that day, and that devastates their business model. For the general public, the problem is also more acute than some might think. About 30-40% of people in urban areas can’t charge at home, so they rely on the public network. When studies show that one in five charging attempts at public sites fail, it erodes trust and slows adoption. Those failures aren’t at our stations, I should add, but it highlights the industry-wide challenge.
You describe Floading as a ‘technical CPO’. What does that mean and how does your approach differ?
Peter Brockhoff: A technical Charge Point Operator (CPO) means we don’t just own the assets; we install, maintain, and operate the entire technical ecosystem for our customers. Our promise is that charging should be like getting water from the tap: it’s always there when you need it, and you don’t have to worry about it. We have over a decade of experience, starting with charging for public buses before focusing on the more demanding truck-charging sector. If we can solve it for trucks, passenger cars are easy.
Our core difference is our obsession with data. We are a technical partner that gets our hands dirty. We analyze everything in real-time: the energy flow from the grid, the performance of the chargers, interoperability with every vehicle, and even the integration of battery storage and solar power. This data feeds into our portal, where we track what we call the ‘happy flow.’ If anything deviates from that perfect charging process, we want to know about it and fix it, often before it becomes a real problem. These small data glitches are often precursors to a bigger failure later on.
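One way to picture the ‘happy flow’ idea is to model the expected charging session as an ordered sequence of events and flag any session that deviates. This is only an illustrative sketch, not Floading’s actual portal logic, and the event names are invented:

```python
# Expected 'happy flow' of a charging session; event names are illustrative,
# not taken from Floading's system.
HAPPY_FLOW = ["plug_in", "authorized", "charging_started",
              "charging_stopped", "unplugged"]

def flow_deviations(session_events, expected=HAPPY_FLOW):
    """Return the expected steps that are missing (or out of order)
    in a session's event log, so small glitches surface before they
    become hard failures."""
    missing = []
    pos = 0  # greedy left-to-right subsequence match
    for step in expected:
        try:
            pos = session_events.index(step, pos) + 1
        except ValueError:
            missing.append(step)
    return missing

# A session that never reached 'charging_started':
print(flow_deviations(["plug_in", "authorized", "error_cp_state", "unplugged"]))
# -> ['charging_started', 'charging_stopped']
```

Sessions with a non-empty deviation list would be exactly the ones worth investigating before they turn into the "bigger failure later on" Peter describes.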
You faced significant challenges with connectivity as you started to scale. Could you describe those early problems?
Peter Brockhoff: Before we partnered with Eseye, connectivity was a huge bottleneck. We were experiencing significant downtime on our previous network, sometimes for one to three days. Our connectivity partner at the time didn’t fully understand our business; they didn’t grasp that for us, uptime is business-critical. We saw network selection issues and signaling storms that would cause our SIM cards to fail, meaning we had to physically send technicians out to sites to replace them. That’s an expensive and slow process. We measured our connectivity availability, and it was only at about 89% at times. In a system where overall availability is a multiplication of all factors, an 89% score for a core component is a huge problem.
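Peter’s point about availability multiplying across components can be shown with a quick calculation. The component values below are hypothetical, apart from the roughly 89% connectivity figure he cites:

```python
from functools import reduce

def system_availability(component_availabilities):
    """For components in series, overall availability is the product
    of the individual availabilities -- any weak link drags the
    whole system down."""
    return reduce(lambda a, b: a * b, component_availabilities, 1.0)

# Hypothetical charger stack: hardware, backend, grid supply,
# plus the ~89% connectivity figure Peter mentions.
components = {"hardware": 0.99, "backend": 0.995,
              "grid": 0.999, "connectivity": 0.89}

overall = system_availability(components.values())
print(f"Overall availability: {overall:.1%}")  # roughly 87.6%
```

Even with every other component above 99%, the 89% connectivity link caps the whole system below 88%.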
Jamie Hawkins adds: This really highlights a common issue. Peter calls connectivity the ‘spider in the web,’ and it’s the perfect analogy. Many businesses only think about this critical component late in the development life cycle. They don’t realise that robust IoT connectivity isn’t plug-and-play. It must be designed and tested specifically for the device and its use case from the very beginning. Otherwise, you end up with exactly these kinds of failures in the field, which are incredibly costly to fix.

Jamie, from a connectivity specialist’s perspective, why do standard IoT solutions often fail in these demanding applications?
Jamie Hawkins: Many people have the misconception that connectivity for an IoT device, like a charger, is as straightforward as connecting a mobile phone. You pop in a SIM card, and it just works. But these devices aren’t like our iPhones. A standard solution might not have the intelligence to automatically switch networks or profiles if one connection fails. This is what can lead to bricked SIM cards. For a business-critical application like Floading’s, you can’t leave that to chance.
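The fallback behaviour Jamie describes can be sketched as a retry loop over an ordered list of carrier profiles, with bounded attempts per network instead of hammering one carrier forever. This is a conceptual sketch only; the profile names and the `connect` callable are illustrative, not Eseye’s actual API:

```python
import time

def connect_with_fallback(connect, profiles, attempts_per_profile=3, backoff_s=5.0):
    """Try each carrier profile in priority order with bounded retries,
    rather than retrying one network indefinitely -- the pattern that
    leaves SIMs effectively bricked during a signalling storm.

    `connect` stands in for whatever modem-attach call the device
    exposes; it should return True on a successful attach."""
    for profile in profiles:
        for attempt in range(attempts_per_profile):
            if connect(profile):
                return profile
            time.sleep(backoff_s * (attempt + 1))  # linear backoff between retries
    return None  # every profile exhausted: stop, raise an alarm, wait for a backoff window

# Illustrative use: the first carrier is down, the second attaches.
chosen = connect_with_fallback(
    lambda p: p == "carrier_b",   # stand-in attach function for the demo
    ["carrier_a", "carrier_b"],
    backoff_s=0.0,                # no real sleeping in this demo
)
print(chosen)  # carrier_b
```

The key design point is the outer loop: a device that can only retry its current network has no way out when that network misbehaves.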
That’s why we take a tailored approach rather than a one-size-fits-all one. We start by understanding the customer’s specific use case and business requirements. With Floading, we conducted what we call a ‘device assessment’. It’s essentially a crash test, where we simulate network failures over the air to see how their chargers and our SIMs react, ensuring they are perfectly interoperable and will automatically resume connection under any circumstance.
How did the partnership with Eseye solve these connectivity issues and what was the impact on your network’s performance?
Peter Brockhoff: The results were immediate and transformative. Eseye was one of the first partners that actually understood our business and why connectivity mattered so much to us. Their solution had automated guards for the signaling storms we previously experienced, and they gave us best-practice advice we could pass on to our hardware suppliers to improve their equipment.
Within six months of making the switch, our connectivity reliability went from the 89% low we had measured to 99.9%. We suddenly weren’t firefighting connectivity issues anymore. That stability allowed us to push our hardware and software partners to meet the same high standards. With connectivity under control, we could confidently move forward with our European expansion, knowing our foundation was solid.
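The jump from 89% to 99.9% is easier to appreciate when converted into expected downtime per year, a standard back-of-the-envelope calculation:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def annual_downtime_hours(availability):
    """Convert an availability fraction into expected downtime hours per year."""
    return (1.0 - availability) * HOURS_PER_YEAR

print(f"89.0%: {annual_downtime_hours(0.89):.0f} h/year down")   # ~964 h, about 40 days
print(f"99.9%: {annual_downtime_hours(0.999):.1f} h/year down")  # ~8.8 h
```

In other words, the improvement took the connectivity layer from roughly 40 days of downtime a year to under nine hours.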
Finally, what key advice would you give to other charge point operators looking to build a scalable and reliable network?
Peter Brockhoff: First, build reliability in from the very start. Don’t wait to experience failures before you make it a priority. It has to be reliable all the time, which means having auto-healing capabilities.
Second, aim for zero maintenance. Every time a technician physically touches a device, it’s an opportunity for something to break. I never take my car to the garage right before a holiday for this reason! Use remote monitoring and build a digital twin of your network so you know what’s wrong with a charger before you even get a call.
Third, you can’t do it alone. Build an ecosystem of partners who understand your business and who you can learn from. And finally, monitor everything like a hawk. You have to constantly learn from your devices, see what’s going wrong, see the strange behavior, and use that data to push your partners and your own team to improve. Data has to come first, because data is what allows you to improve.