Connectology podcast

Listen or watch on your favourite platforms

Podcast: Hypercube’s AI perspective with Adam Sroka, CEO, Hypercube

Recorded: 30 March 2026

Running time: 43 minutes.

Summary:

Rachael Eynon is joined by Adam Sroka, CEO of Hypercube, a specialist data and AI consultancy focused on the energy sector, to explore how energy organisations can navigate AI adoption safely and practically.

Adam explores:

  • How the energy sector is roughly a decade behind leading tech industries in AI adoption — caution that exists for good reason given the complexity of multi-stakeholder energy systems
  • His “risk staircase” approach favours the lowest-risk use case that proves value and builds confidence before scaling up
  • AI as a quality check: using one model to critique or score the output of another is a practical pattern that could help organisations improve the reliability of AI-generated work without heavy overhead
  • AI’s role in Connections Reform: as grid connection processes grow more complex, there may be opportunities for AI to improve data accessibility, transparency, and decision-making across the connections landscape

Connectologist® Catherine Cleary also joined Adam on the Hypercube Energy Podcast, exploring grid connection reform, queue management, and the future of connections from an engineer’s perspective — you can listen to that conversation here: Grid reform, queue management and the future of connections

Transcript:

00:00:50 – 00:01:00 – Rachael Eynon

Hello and welcome to another episode of Roadnight Taylor’s Connectology® podcast. I’m Rachael Eynon, and today I’m really pleased to be joined by our guest, Adam Sroka, CEO of Hypercube.

00:01:01 – 00:01:04 – Adam Sroka

Thank you very much for having me, pleasure to be here.

00:01:05 – 00:01:19 – Rachael Eynon

Thanks for joining us, and this is a reciprocal podcast appearance, I believe. Our colleague Catherine joined you recently on your podcast, so maybe we’ll cover some similar ground today, but it’ll be a chance to hear your perspective.

00:01:20 – 00:01:31 – Adam Sroka

Yeah, that was a brilliant conversation, and I was made to feel like I don’t know very much by someone very, very clever, which is almost my favourite kind of conversation, so…

00:01:32 – 00:01:47 – Rachael Eynon

Great. So today we’ll be very much in your area of expertise. So, we’ll explore AI use in the energy sector, which is what Hypercube is all about. So, it’d be great if we could start with a bit of a, a bit of background about yourself, Adam, and then maybe you can tell us some more about Hypercube?

00:01:48 – 00:05:10 – Adam Sroka

Yeah, definitely life story time. So, I started, I was a physicist and I got into computational physics and discovered data science back when it was the sexiest job of the 21st century; it doesn’t feel very sexy today, so I don’t know if that was a bit of a myth, but I jumped from physics into data science and machine learning.

Hopped about a few industries, ended up in consulting and did some work in energy there, more traditional energy, oil and gas type stuff. Eventually became Head of Machine Learning for an energy tech startup down in Cambridge, which is where I really learned grid-scale batteries and forecasting and trades and PPAs. And those are crunchy, like physical, financial, contractual stuff that’s really quite hard to do. And my observation while I was there was that the energy sector, I always kind of ballpark it, is about a decade behind where Silicon Valley, sort of cutting-edge tech, or even FinTech and financial services are.

And it’s starting to think about things that are reasonably well solved in highly regulated environments, right? So, I thought, oh, I could help shift the needle towards climate change and net zero by helping accelerate some of that tech adoption and allow these organizations to get more out of the data and the tools and the technologies they already have.

So, Hypercube was formed with that ambition in mind. And I always laugh that every business book I’ve ever read says, oh, you should niche down and focus on a niche and blah, blah, blah. And I’m like, okay, why has no business I’ve ever worked at done that? Like, why is that? So, I thought I would try it, and originally, forecasting for power trading teams in Edinburgh was the thing. And the real ethos of Hypercube is, can I build domain expertise and technical expertise in the same place? Because a lot of tech consultancies are really super clever, fantastic database people, but they won’t know what a settlement period is or what a PPA is.

They won’t know what derating is, right? They don’t understand the market. So, you can spend three, four, six months helping them get up to speed. Our view is, because we hire from energy and we only really go after energy projects, we work in this space so that on day one we are really useful, to the point where now I’ve built five or six virtual power plants from the bottom up, like coded by me and my team. All I do is Teams meetings these days, and the podcast, but our team have done multiple of them, and they’re the kind of thing that you might build once, if at all. And so, we can step in and have really rich conversations with very technical people very, very quickly because we just understand your language.

We’ll never know energy or your business as well as you do. We do know energy and we really know the tech as well. And so, it brings a lot of value and yeah, going from strength to strength, and it’s a really interesting space, which we can get into. But we’re starting to see recurring pounds, which is why we are now on the road to building our AI platform and finding a really easy way to help organizations jump on the hype train and catch what’s going on in the AI space in a way that’s safe and secure and not just FOMO-driven development.

00:05:11 – 00:05:28 – Rachael Eynon

Yeah. Well, it sounds like you’ve kept up with the hype train then, starting in data science and moving into AI at the right time, and hopefully bringing the industry along with you. So that’s great, and I’m especially impressed that you had such a small niche to narrow down to; what was it you said, just in Edinburgh?

00:05:29 – 00:05:37 – Adam Sroka

Yeah, that’s the thing; there’s a lot of power trading and renewable energy in Edinburgh. Just on George Street pretty much was enough, wasn’t it?

00:05:38 – 00:05:58 – Rachael Eynon

Great, so yeah, we, we understand you’ve got these kinds of different strands of services that you offer to organizations in the energy industry.

So, it’d be good if we could explore those, and maybe give an example of some kind of project within each of them, I guess just so our listeners know what kind of AI specifically we’re talking about.

00:05:59 – 00:08:20 – Adam Sroka

Yeah, and so, this comes from where we started. So, we were a bootstrapped business. I always joke that I started this with the money left in a PayPal account from a CrossFit events business I’d started in February 2020. After all the refunds were done from the first event, I had £478 left, and I used that to start the business, right?

So, we are totally bootstrapped, and the easiest way to bootstrap a business is services. And over time we’ve settled into realizing that actually there are foundational pieces of kit and technology that we’ve de-risked, that we understand, that don’t leak IP, that don’t cause any risk or friction, that we can redeploy over and over again.

So, with some Scottish Enterprise money that we’ve had and our own profits, we’ve managed to build the seed of our platform, which is definitely the direction of travel for us. But traditionally, where we started was a mix of data strategy, basically answering, well, what should we do? Okay, the board says we need an AI, and no one really knows what that means, but someone in a fancy suit said so. Through to hands-on-keyboards work, building sort of foundational data platforms and data pipelines; like, an endpoint integration is an endpoint integration is an endpoint integration, right?

Why is everyone building them from scratch? And we like, we can just deploy that stuff and then through to some of the, the kind of fancy models and forecasting and things like that, they each have their place in the stack and they’re all essential, but you kind of need to build them up in the right layers.

And these days I do most of the strategy stuff because it’s where I’ve spent a lot of time thinking at this level. I’ve had a lot of experience fighting boards and doing the politics and the cultural kind of jujitsu that you need to do to make stuff happen in other people’s businesses. But we try to cover the full remit, and we’ve very much grown from pure AI to full tech services, and now we’re going full circle and thinking we can deploy this platform.

To give you like the Lego bricks and the building blocks and the kind of map as to how to compose the really fancy solutions in a way that we’ve de-risked and sort of help deploy with you.

00:08:21 – 00:08:55 – Rachael Eynon

Great. Yeah, it’d be good to come back to some of the more hands-on keyboard stuff later and, look more at some of the data that we’ve got available to us in the energy space.

But I guess how has that strategy piece, or those strands, evolved in the past few years? We’ve gone from seeing AI as quite a niche service in the energy industry, or, as you say, much more established in the financial industry, to it being way more embedded in our day-to-day lives, at work, at home, everywhere.

So, what, what are you seeing from clients in the energy industry and their attitudes?

00:08:56 – 00:13:06 – Adam Sroka

Yeah, and again, energy’s a really funny place because no one wants to be first at anything; like, nobody wants to be first, everyone’s terrified of that big risk, for good reasons. And I think a lot of techies are really bad for pooh-poohing an industry as old-fashioned.

But this is important stuff, right? That transformer has been in the field for 50 years for a reason. It’s not because we took some risks and tried some stuff; it’s that this takes a long time and you’ve got to bring a lot of people with you. Look at market reforms and stuff like that: you cannot please everybody. You will upset someone with any change you make, because it’s a big, complicated, multifaceted, multi-stakeholder system.

So, with all that in mind, you’ve got this dichotomy: everyone wants to leverage their data, accelerate, use the tooling, be innovative. But nobody wants to be first, and nobody wants to place a big bet or take a risk.

So I believe that is kind of made worse by the hype cycle. Like, you go on LinkedIn and apparently everyone has just automated away their whole company and all their SaaS platforms, and it’s all, like, whatever this latest security nightmare is, and everyone’s just sitting with their feet up watching agents do everything. And is that true?

I don’t know. Do we really want to let that loose on the system? Definitely not. So how do you go from one end of the spectrum to the other? And I always talk about, what’s the risk staircase that we could take? What’s the weeniest little thing we could do that is palatable, that isn’t going to blow something up or bankrupt the business, that proves value and gets us into that motion?

And the thing that comes up a lot in the strategies that I do is I talk about muscle memory being the engine for innovation. So, for people that are building teams and saying, oh, we want to build this tooling, or we want an AI to read these white papers and translate it into our business and la la la. With that stuff you will eventually bump into someone that says, well, this will just be a feature of ChatGPT in six months, and they’re usually right. That’s the direction of travel. What did they raise, like another 110 billion or something from SoftBank over the weekend? They’re just raising money like you’ve never seen before.

And we are now seeing the first wave of AI startups all disintegrate as Anthropic and OpenAI just engulf them, just take their use case and eliminate them, right? So that’s a real concern. You could spend fifty to a hundred thousand pounds on a bespoke piece of AI, take on all the risk, and then see it become a nine-pound-a-month subscription in six months, right?

So, what do you do? Well, the flip side of that is you can just wait forever, and you’ll definitely not be first, and definitely someone with a higher risk appetite will get out in front. But when the time comes for you to go, even if you realize you’re in that second half of the pack, then you’ve not built any of the cultural, human stuff about adoption.

Like, I say to people, what’s a token and how much does it cost? And how many tokens a year does that use case use? Unless you are building those things today, most of the people listening to this podcast won’t have a clue what I just said. That’s the world we’re in, right? Whereas if you are thinking, I’m going to deploy agentic systems and AI, blah, blah, blah, then literally cost per token is going to become a KPI of your whole business; that’s going to become your CFO’s main headache if you go down that route, right?

So, to get back to my main point: doing things today with a mind that, okay, they might get replaced, helps you reframe the problem. Because actually the cultural learning, the human side of it, how do we scope use cases? How do we get comfortable with it? What’s our lexicon within the business? How do we build guardrails? What platforms are we using? That is a muscle memory thing that, if you can get familiar with it, when a really important step change happens, you’re ready to go.
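The cost-per-token point can be made concrete with a back-of-the-envelope sketch. The numbers below are entirely illustrative (real per-million-token prices vary by model and vendor), but the arithmetic is the KPI Adam is describing:

```python
def annual_token_cost(tokens_per_run: int, runs_per_day: float,
                      price_per_million_tokens: float) -> float:
    """Rough yearly spend for one AI use case, in the same currency as the price."""
    tokens_per_year = tokens_per_run * runs_per_day * 365
    return tokens_per_year / 1_000_000 * price_per_million_tokens

# Hypothetical use case: a twice-daily report consuming ~20k tokens per run,
# at an assumed £5 per million tokens.
print(annual_token_cost(20_000, 2, 5.0))  # 73.0
```

Multiply that across dozens of use cases and thousands of users and it becomes clear why token spend ends up on a CFO's dashboard.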

00:13:07 – 00:13:20 – Rachael Eynon

So, what, what does that, that first step on the staircase look like then for a lot of the companies in the energy industry? Like how, how small are they starting with some of these products that you offer?

00:13:20 – 00:17:33 – Adam Sroka

I’m going to give a total consultant answer of, it depends, but it does. There are common themes, though, so I can ground this in real stuff.

One I love is, I call it bespoke research, and it was something I kind of mentioned before: so many people in the industry generate public-facing white papers and research as generalists. The market does this, and this is the cost of lithium, and all this stuff, right? And that’s good stuff. But what does it mean for me? Not for my business, not for my country; I mean for Adam Sroka: this is my role, these are the things I looked up last week, this is my headache of the week, blah, blah, blah. And actually, AI is really good at that, saying, okay, I know a lot about you, this white paper’s popped up, read these four sections, and they relate to this other one we read last week. And bump, that’s for you. That’s a nice way to do things, and it actually came out of a deployment that I did for a trading team up in Edinburgh: 19 traders, three desks, and they were doing shift handovers every 12 hours, right? So, someone sits down for an hour and goes, right, what happened for the last 12 hours? Tap, tap, tap, tap. Well, actually, AI’s really good at that: give it all of the data feeds for the last 12- or 16-hour window and pop, out comes the report. So, this person’s sitting down to a pre-generated first draft and thinking, actually, yeah, that is true, that’s not true, da da da, and I’m done. That saved them 45 minutes, and it stacks up when you’re doing it twice a day, every day. And you can see the auditability, and that whole headache of, actually, I wish I could ask a deeper question on paragraph two; well, now you can, because the whole thing was generated by an agent, which is quite a nice use case.
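The handover workflow Adam describes can be sketched as a small prompt-assembly step: gather the raw feed text for the window and wrap it in a single summarisation request. This is my own minimal sketch, not Hypercube's actual system; the function and field names are hypothetical, and the resulting string would be sent to whichever model API the team uses.

```python
from datetime import datetime, timedelta, timezone

def build_handover_prompt(feed_snapshots: dict, window_hours: int = 12,
                          end: datetime = None) -> str:
    """Bundle the last N hours of raw data-feed text into one summarisation prompt."""
    end = end or datetime.now(timezone.utc)
    start = end - timedelta(hours=window_hours)
    # One markdown-style section per feed, so the model can cite its sources.
    sections = "\n\n".join(f"## {name}\n{text}"
                           for name, text in feed_snapshots.items())
    return (
        f"Draft a trading-desk shift handover covering "
        f"{start:%Y-%m-%d %H:%M} to {end:%Y-%m-%d %H:%M} UTC.\n"
        "Summarise only what changed, flag anything the incoming trader must "
        "act on, and say which feed each point came from.\n\n" + sections
    )
```

Keeping the feeds as named sections is what gives the trader the auditability Adam mentions: every claim in the draft can be traced back to a feed.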

We’ve built some slightly more complicated, bigger pieces of the puzzle. The most mature, and I think my favourite one, is an AI asset manager, right? So, in grid-scale batteries especially, the software tooling is very immature and it’s still a bit wild west, because the whole industry is erupting so quickly. One of our asset manager friends was like, okay, we get 10,000 alerts a week from our portfolio: the fire suppression system on this site is off, and there’s an amber warning on this piece of kit, and blah, blah, blah. And that’s just the way it is. And this is across six or seven different logins they have to go through every day.

Then when a dip in availability happens, their contracts mean they can sometimes claim against the OEM for the loss of availability. But when it happens, they’ve then got to go and trawl all these alerts and ask, were any of these meaningful? Are these all noise? How do I do that? How do I then draft the report? And that generally took their team a few days of analyst work to do that sort of manual trawl. We built a system that plumbs all that together and has AI that’s kind of familiar with that job role and those little tasks, and then can do exactly the same thing: snip a 12-hour window and go, okay, these six or seven alerts always pop up across these three systems before this kind of event, it’s happened here and here in the portfolio, and last time you were successful; do you want me to draft the first report, the first email to the OEM? And then it’s not doing any of the deciding.

We have a whole value, like one of the values of Hypercube today is humans decide, machines scale. And I deeply believe for energy that’s really very important. We’ll get to fully autonomous, and we’ll all just kick back and put our feet up and drink cocktails one day, but for today we’re not there, right? So how do we do that acceleration piece with the human in the loop? I deeply believe that we can take away the work you have to do, to do the work. The copying numbers out of spreadsheets and the downloading, stuff like that, that’s all cruft, right? That’s low value-add; that’s not why you’ve got three degrees and you’ve been in energy for a decade. That’s just stuff you have to do. So, let’s do that first, even if it’s not the most impactful; it’s something. And then we build muscle memory and we start to stack use cases and see other people win, that we can start to copy, and slowly, slowly edge forward as an industry in a safe way that makes sense for everyone.
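The deterministic core of the alert-triage step above, snipping the window before an availability dip and counting recurring alerts, doesn't actually need an LLM at all. Here is a toy sketch of that windowing logic (my own illustration, with hypothetical field names, not the system Adam's team built); the AI layer would sit on top, turning the counts into a draft report:

```python
from collections import Counter
from datetime import datetime, timedelta

def precursor_alerts(alerts: list, event_time: datetime, window_hours: int = 12):
    """Return the alerts that fired in the window before an availability dip,
    plus a count of how often each (system, message) pair appeared."""
    start = event_time - timedelta(hours=window_hours)
    in_window = [a for a in alerts if start <= a["time"] < event_time]
    counts = Counter((a["system"], a["message"]) for a in in_window)
    return in_window, counts

# Hypothetical usage: which alerts preceded a dip at noon on 1 March?
event = datetime(2026, 3, 1, 12, 0)
alerts = [
    {"time": datetime(2026, 3, 1, 3, 0),  "system": "BMS",  "message": "amber warning"},
    {"time": datetime(2026, 3, 1, 11, 0), "system": "BMS",  "message": "amber warning"},
    {"time": datetime(2026, 2, 27, 9, 0), "system": "HVAC", "message": "fan fault"},
]
in_window, counts = precursor_alerts(alerts, event)
```

This split, plain code for the windowing and counting, a model only for the narrative, is one way to keep "humans decide, machines scale" auditable.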

00:18:44 – 00:19:28 – Rachael Eynon

Yeah, that’s really interesting. Is there potentially a bit of a balance to be had around, I guess, giving that engineer, or whoever’s normally sitting and analysing that data, like, how do we make sure that they know enough to be able to critique that data and really understand it and know the sources and where it’s come from, while also, as you say, improving the kind of work they do and making it much more value-add?

Like, are we at risk of tending towards somewhere where you can come in, I guess in theory, with no training and do that job? And how do we then trust the outputs and really understand the first principles behind some of that data?

00:19:29 – 00:23:24 – Adam Sroka

This is a huge question, not just for energy; everyone’s trying to suss this out at the moment, and there’s a few things that I’ve seen that speak to this. First and foremost, these things are great tools; they do wonders, they can really accelerate your output and help you rethink things. I use them a lot to critique my thinking and really argue against me, so that I can steelman the other argument, right? But if you switch your brain into airplane mode and think, that’ll be okay, job done, you are not going to get too far, because everyone else can do that, right?

And it only takes one person that isn’t doing that to get past you. A friend of mine was talking about how their kids were like, well, why should I learn anything? I can just ask ChatGPT anything. So, I’m not going to learn chemistry; there’s no point, I don’t need to learn it. And my answer was, yeah, but you’ll ask it rubbish questions; that’s it. I’ll ask it good questions, because I studied this stuff and I’ve lived and breathed it for years. So, when we compete in the market, I’ll get the job. If AI is going to obliterate all the jobs, it’s not going to be the person that didn’t learn anything that gets the few that are left.

And the same goes for all levels of technical capacity. Last night I watched Terence Tao, right? Terence Tao, one of the best mathematicians alive, a super genius, talking about how he’s using AI in his workflow to crack 50-year-old maths problems that are well beyond the kind of atmosphere that I’m in.

And even he’s saying, yeah, it’s good and it does some stuff, and there was this big leap, and then it stopped and didn’t really accelerate any further, because the latest frontier models don’t get us any further; they just get us into the corners that don’t work faster. And maybe the next round it’ll happen again.

But if the brightest people in the world are realizing this as well, we’ve a long, long way to go before everything’s cracked. So, I think, yeah, be knowledgeable about your bit and about what is and isn’t adding to the workflow. I think people should just go open book and allow these tools into interview circumstances, but be mindful of that and say, right, give me the answer with all the tools turned on.

Because thinking you can examine people in the same old way, my opinion is it just won’t work. That isn’t the future of what our industry is going to be like. Something I think is a bit of a stark warning: you might’ve heard of Shadow IT. Well, we’ve started talking about Shadow AI, and actually there’s this really scary thing going on now, especially in energy and highly regulated industries, where you work for a big corporate energy company, right, and you cannot use any AI; that’s the top-down line, you can’t use it because it’s very scary and we don’t know about it, okay?

So, everyone’s copying and pasting company data into personal ChatGPT accounts or Gemini accounts and stuff. And actually, the personal ones aren’t protected by the no-training-data piece, whereas the corporate ones are. And that’s happening absolutely everywhere.

And it’s things like, I was being daft. I went through my Sainsbury’s app and I was trying to get, like, what do I order all the time? I just wanted a recurring list of things, and I couldn’t get it out of the app. There’s no API, and I was getting demented with it. So I screen-recorded a video of me scrolling through two years’ worth of transactions, just flick, flick, flick, about 30 seconds, just whizzed past the screen, and uploaded that video to an AI; pop, there’s your list, there’s your frequency, that’s what you ordered. So, you don’t even need to copy and paste it; I can just film my screen with my phone. That air gap is gone, which is why the security stuff’s a nightmare with all of this.

So, if you are not in front of it, you are actually exposed to this colossal risk of it just happening unmanaged, which is what’s happened in IT forever anyway.

00:23:25 – 00:24:05 – Rachael Eynon

Yeah, yeah, I guess it comes back to your point then around getting that culture right first and foremost in a business, and getting people comfortable so they follow the right processes. I guess slightly back to that point around having the knowledge and expertise to question and challenge the data.

Is there anything that you can provide alongside those systems and services to help with that? Or is there an expectation that that needs to come separately? Like, for example, is there a way to set it up so that it’s showing its working and you can see the sources, where that’s come from?

00:24:06 – 00:29:11 – Adam Sroka

Yeah, there’s a few patterns. In the weeds of working with these tools, it’s a little bit harder in the chat interfaces and the non-techy, user-friendly versions of the tools; I think you can do it in some of them, but it’s harder. When you are working with the APIs and the settings under the hood, there’s a thing called temperature, right? And if you set it to one, it will just make stuff up, like total garbage, and if you set it to zero, it will pretty much just repeat things it’s seen. So, you can dial that up and down, from Eldritch horror all the way to, yeah, stuff that you’ve got in a textbook, right? So that’s quite a useful thing to get comfortable with.
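Under the hood, temperature rescales the model's next-token probabilities before sampling. The toy sketch below illustrates the mechanism with a plain softmax; it is not any vendor's actual implementation, and real APIs expose temperature simply as a request parameter (typically somewhere in the 0 to 2 range, depending on the provider):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Low temperature sharpens the distribution towards the top token;
    high temperature flattens it, so unlikely tokens get sampled more often."""
    t = max(temperature, 1e-6)               # guard against division by zero at T=0
    scaled = [l / t for l in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # model's raw scores for three tokens
print(softmax_with_temperature(logits, 0.1))  # near-deterministic: top token ~1.0
print(softmax_with_temperature(logits, 1.0))  # spread out: top token ~0.66
```

That is why low temperature feels like "repeat what you've seen" and high temperature feels like invention: the tail of the distribution gets progressively more probability mass.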

They do make stuff up, and they are random word machines; that’s it, they don’t really know what they’re talking about. There’s no real reasoning, but they’re very, very good at the illusion. So good, actually, that they’re incredibly useful, right? But there’s a horror story about a developer who found a whole open-source repository deployed to his production environment that had nothing to do with the business; it was totally tangential. I can’t remember the specifics, but it was this massive deployment just sitting in his prod environment in the cloud, and he was like, what’s going on, where’s this from? And he went through all the logs, and in the infrastructure-as-code setup, which had all the right permissions to deploy things, rightly so, as it should, an agent had hallucinated the repo name, and it just happened to be a real one on GitHub, and it just deployed the whole thing.

And that happened; it cost him money to deploy it and then tear it down. There was no way of catching that it was a real repo; you don’t think to put in a test saying only deploy things that are real or that I know about. But you can protect yourself from it.

But these things happen, right? A really good pattern when you’re building bigger systems, and it’s quite tricky again if you’re working just at the desktop, web-interface level, but you can do it, and I do it a lot, is what gets called LLM as a judge. And this is lovely, and you can just explain it to the LLM; the beauty of this stuff is you only have to explain one or two lines and it will know what you’re on about and it’ll help you.

So go into ChatGPT and say, I want to deploy an LLM-as-a-judge framework for whatever workflow. Say you are tendering or you are reviewing technical documents, right? You say, I want to deploy an LLM-as-a-judge pattern to help me; write me a prompt to act as a critic or an evaluator for this workflow. And it will give you a prompt. Then what you do is you put that in a GPT, or you save it somewhere. Then you do your work; say it’s reviewing a document, you say, right, ChatGPT, review this document and give me the normal output, pop. Then you feed the output and the new prompt into another agent or another fresh context, maybe even a different model.

I like to use multiple model providers; that’s how I tend to do it at Hypercube. So I’ll say, right, take it from ChatGPT, give it to Anthropic and say, right, just test the homework. And it’s really good, because you’re trying to give it a quantitative way to score its own homework, or the other agent’s homework; that’s a really common pattern. It’s really nice because, even if you’re doing it in a web browser, it’s copy paste, copy paste, and you’re done. So, it’s not that much more work, but you’ve had a 5,000-word document reviewed instantly, and it will catch some of the nonsense, especially if you teach it to be really critical. The beauty of building that into the engineering bit is that it scales infinitely, and that’s how, at mass scale, you catch loads of errors and things like that.
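Wired into a pipeline, the LLM-as-a-judge pattern needs two small plain-code pieces around the model calls: a prompt that asks for a machine-readable score, and a parser that pulls the score back out so the pipeline can gate on it. A minimal sketch, with an illustrative rubric of my own rather than Hypercube's:

```python
import re

def build_judge_prompt(task: str, answer: str) -> str:
    """Prompt for a second model (ideally a different provider) to grade the
    first model's output. The rubric wording here is illustrative only."""
    return (
        "You are a strict reviewer. Score the answer below from 0 to 10 for "
        "accuracy, completeness and clarity, then list concrete problems.\n"
        "Reply starting with a line like 'SCORE: 7'.\n\n"
        f"TASK:\n{task}\n\nANSWER:\n{answer}"
    )

def parse_score(judge_reply: str):
    """Pull the numeric score back out of the judge's reply, or None if absent."""
    m = re.search(r"SCORE:\s*(\d+)", judge_reply)
    return int(m.group(1)) if m else None
```

With that in place, a pipeline can, say, only ship outputs the judge scores above a threshold and route the rest back for revision, which is how the pattern catches errors at scale rather than one copy-paste at a time.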

Another thing I like to do is say to it, come up with a rubric; score everything out of 10, rank things by impact, right? And then say, come up with the top 50 improvements. Not one or two; come up with 50, a hundred improvements, right? And I do that because if you ask for three improvements, it’ll be like, oh, the formatting and the structure, and it’s all wishy-washy and high level. If you say a hundred, it really has to think hard to find a hundred things. Most of them will be rubbish, but the top 20 will be really good, and it will find things that are really specific, and you’ll be like, I do think that, actually; yes, that diagram does need a tweak, da, da, da.

And so I think there are little patterns like these, but it evolves all the time. And you’re probably aware from this podcast that I love talking, and actually tools like Wispr Flow, or the voice-to-text modes of these tools, like the car mode, are amazing, because if you’re like me and you waffle, you can speak at 400 words a minute for three or four minutes and just ramble your train of thought. You don’t even have to give these things good instructions, and they will catch what you’re trying to say, and you get bigger, more detailed prompts that maybe do more than you trying to type it out and think, oh, how do I say X, Y, Z? And all of this is just building up that usage and that way of working that works for you and your team.

00:29:12 – 00:30:38 – Rachael Eynon

Wow. Yeah, that kind of getting one system to mark another system’s homework is new to me; well, I’ve not come across that before, but I can see it making people more comfortable with adopting this in their work environment, and similarly with asking it to list a hundred improvements.

It’s the kind of blue-sky thinking that you would do without AI in an ideal world, if you had all the time, in your kind of back-to-basics whiteboard sessions, but it just allows you to speed that up.

I guess it’d be interesting to explore that energy data side in more detail. So, thinking for us in the connections world, we know there’s lots of data out there; some of it is publicly available, and the quality maybe isn’t great, either it’s poor quality or it’s not consistent across different companies, and in some cases that data’s just not publicly available. So, we’ve got data sources from NESO, from the transmission owners, from the DNOs, maybe about the projects that are in the connection queues, or their network reinforcements or maintenance plans. So, what do you think they could do, or what solutions might be of interest to them, to better facilitate innovation and data accessibility across the rest of the industry?

00:30:39 – 00:35:29 – Adam Sroka

Yeah, this is, I think, long, long overdue, but it is tricky. Luckily, it's been so slow that some really interesting new patterns in the data ecosphere have emerged that actually might be really useful. I think it was 2021 or 2022, there was a book by Zhamak Dehghani called Data Mesh; it's an O'Reilly book, and it's quite an ambitious view of how to build data sharing and federated data processes within a business.

She's very, very clever and it's an incredible book, but my personal opinion is that it's a little bit utopian; you would have to be an exceptional company with an exceptional data culture to pull it off, I think, and for a lot of businesses it would just be so difficult. It does happen in practice; some businesses do it, or do versions of it. The beauty of it is you don't have to do it across the whole business; you can do it in pockets and grow.

That spawned a few other things, or a few other things happened at the same time. There's data as product: the idea, borrowed from product management in tech, that your data could be a product with a data contract and a service level agreement, including staleness. You say, right, my data source will only ever be this stale and it will get refreshed like this; here's the contract, and it will always meet these requirements. Then consumers can set and forget. And data mesh is the idea that you take those data contracts and data products and give people isolated domains: you let the domain experts define the product, and from outside their boundary you never really have to speak to them, because you've got the contract and the data product, you compose solutions out of them, and then you can come up with new data products, and so on. A lot of that's framed in the data industry around doing it within your own business, but there's nothing stopping you doing it as a consortium of businesses.
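The data-contract idea Adam describes can be sketched in a few lines. This is a minimal illustration only; the product names, fields, and staleness numbers are invented for the example, not any real NESO or DNO schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)
class DataContract:
    """A minimal 'data as product' contract: who owns it, how stale it may get."""
    product: str
    owner: str
    refresh_interval: timedelta   # how often the producer promises to refresh
    max_staleness: timedelta      # the SLA consumers can rely on

    def is_fresh(self, last_refreshed: datetime, now: datetime) -> bool:
        """True if the producer is still meeting the staleness SLA."""
        return (now - last_refreshed) <= self.max_staleness

# Hypothetical contract for a connections-queue feed (names made up).
queue_feed = DataContract(
    product="connection-queue",
    owner="network-planning-team",
    refresh_interval=timedelta(days=1),
    max_staleness=timedelta(days=2),
)

now = datetime(2026, 3, 30, tzinfo=timezone.utc)
print(queue_feed.is_fresh(now - timedelta(hours=30), now))  # within the 2-day SLA
```

The point of the contract is exactly the "set and forget" Adam mentions: a consumer outside the domain boundary only needs the contract, not a conversation with the producing team.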

So I think if we saw a bit more adoption of some of the more modern data practices, which again are quite hard because it's all very new, that would be really nice, because then you could say to the NESOs, the Ofgems, the Elexons: okay, define it; here's a rough standard of what we want an API to look like, here's what a messaging interface would look like, and so on.

I was on stage at the Energy Storage Summit, and Roger Hollies from Arenko, who's great, was talking about something I love; again, utopian and probably impossible, but I love it: how market definitions are engineering documents written in legal speak.

So you take an engineering spec, you write it in legalese, and no one understands it; it becomes a 300-page thing that no one gets. And his argument was, why don't we just have markets as code? Why isn't there a platform, just like infrastructure as code, where you can spin markets up and spin them down, and everyone understands the code because it's open: you just read it, that's the market defined, and you build systems off the back of that. And I was like, I love that because I'm a nerd, but is it achievable for our industry? Maybe ambitious, but directionally I think it's the way we should be looking. As for what data sets, the more the merrier. Everyone's trying to get domestic usage, which is really hard. I think the stuff that Centre for Net Zero have done with Faraday is incredible; synthetic data based on real customer data is incredible for people that live in the domestic space. We haven't really got a great set of data for the high voltage or the low voltage network: how it behaves, what substations are up to.
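Purely to illustrate the "markets as code" idea, here is a toy settlement rule written as executable code rather than legalese. The rule and the numbers are invented for illustration, not any real market definition:

```python
from dataclasses import dataclass

@dataclass
class Settlement:
    """One settlement period for one party (all values illustrative)."""
    contracted_mwh: float
    delivered_mwh: float

def imbalance_charge(s: Settlement, imbalance_price: float) -> float:
    """Toy market rule: pay the imbalance price on any shortfall or spill.

    Because the rule is code, anyone can read it, test it, and build
    systems against it, which is the argument being made here.
    """
    imbalance = abs(s.delivered_mwh - s.contracted_mwh)
    return round(imbalance * imbalance_price, 2)

# Delivered 0.5 MWh short of contract at an £80/MWh imbalance price.
print(imbalance_charge(Settlement(contracted_mwh=10.0, delivered_mwh=9.5), 80.0))  # prints 40.0
```

A 300-page legal document can't be unit-tested; a rule expressed like this can be, which is what makes the idea attractive even if it's ambitious for the industry.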

I had some of the team from O' Mobile on my podcast, and they were saying that because their chargers are so over-specced, and I imagine this is true for a lot of the EV charge point manufacturers, they have a better model of the low voltage distribution network than anyone else, because they're just collecting everything at a daft rate and they can reverse engineer what's going on at the substation and things like that.

And I think, and I said this to Catherine when she was on with us, instead of asking how we do another smart meter rollout or something like that, ask: what data do we have, what's already out there, and how can we share it? Just listen to what we've got and reverse engineer models of the system from what's there, as opposed to doing it top down, because I think we've proven top down is a bit too hard.

00:35:30 – 00:36:37 – Rachael Eynon

Yeah. So I guess, in combination with that data contract, you'd get what you can in a usable format from the network companies and then, yeah, reverse engineer from there. Interesting.

Because I think something we're struggling with at the moment, and I think Catherine also covered this, is that some of the processes we're going through in relation to grid connections are more opaque than they used to be. It's a bit of a black box: you submit your application data, you get a connection date and a list of costs, but why someone's got to that decision isn't very clear. So I guess there is the option, potentially, to have more of the data that informs that available.

But then, coming back to some of the earlier points, do you still want an engineer there that you can query and say, why did you do this? Why did you make that assumption? So do you think there's a need for both in that case?

00:36:38 – 00:38:12 – Adam Sroka

Yeah, I think so. We did a pilot for a piece of work around a local developer that had to do quarterly meetings with the local council about what's going on. It's literally one person sat with a 300 to 500-page binder and councillors asking them questions, and it's just, ugh. Well, actually, you could load that 500-page binder into a chat interface, with each councillor chatting to an agent that has the latest update information in it. That's totally possible; we can do that now.

And we did a similar thing for bids; we bid on a lot of stuff, and we help people bid on a lot of stuff. So: come up with a fair rubric, score a set of documents and an application, but then justify the scores. You get a response pack and you might think, oh, I'm underwhelmed by this explanation; well, why can't you just go to a link? I actually got the idea for this from someone, and I love it. Someone sent me a CV for a job, and at the bottom of the CV it said, feel free to chat to my career history. I clicked a little GPT link, and they'd loaded loads of stuff about themselves into a ChatGPT, and I was going back and forth: why did Jim do this? Do they know anything about that? Well, that's not evidenced in their history. And I was like, what? I love it. Maybe a bit novel and geeky, but as an idea: load the context, the rubric, the scoring, all the workings out into something that I can later go and push on and question and poke.
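The rubric-plus-justification pattern can be made concrete without committing to any particular model API. This sketch just keeps the rubric as plain data and assembles the instruction a second "judging" model would be given; the criteria and weights are invented examples, not a real tender rubric:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: int       # relative importance out of the total
    guidance: str     # what the judging model should look for

# Invented example rubric for scoring a tender response.
RUBRIC = [
    Criterion("technical approach", 40, "Is the method feasible and clearly explained?"),
    Criterion("track record", 30, "Is relevant delivery experience evidenced?"),
    Criterion("value for money", 30, "Are costs justified against the scope?"),
]

def judge_prompt(document: str, rubric: list[Criterion]) -> str:
    """Build the instruction a second model would use to score and justify.

    Keeping the rubric as data means the same workings-out can later be
    loaded into a chat interface for bidders to question and poke at.
    """
    lines = ["Score the document against each criterion and justify each score:"]
    for c in rubric:
        lines.append(f"- {c.name} ({c.weight} points): {c.guidance}")
    lines.append("\nDocument:\n" + document)
    return "\n".join(lines)

prompt = judge_prompt("We propose a phased grid-connection study...", RUBRIC)
print(prompt.splitlines()[1])  # first rubric line
```

The same structure serves both uses Adam describes: one model scores against the rubric, and the rubric plus justifications later become the context an applicant can interrogate.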

Will it allow people to kind of train themselves on the best way to answer tenders? Yeah, maybe. But is that a bad thing?

00:38:13 – 00:38:28 – Rachael Eynon

Yeah. I wonder if we could work up something like that for our connection offers, where you can question why you've got your results, and then at least take that to the engineer to understand how they got there.

00:38:29 – 00:38:57 – Adam Sroka

And there you go, right? So that isn't conquer-the-galaxy AI to automate everyone's job. That's something I could probably build in a week or two with some historic data, something that gave you a test bed, and then you could put a real thing in the hands of real engineers and real people who could either go, ugh, I hate it, or, ooh, I wish it did this and I wish it did that. And then you're on the road; you've figured something out: is this good enough for us? And that's such an easier way. What's the risk in that? A bit of wasted time.

00:38:58 – 00:39:32 – Rachael Eynon

And then I guess, if we did have that ideal world where we've got all this data in the public domain and we can query it as we wish, how can we go about mitigating the risks of that data being misused? We've talked about it being a black box, and historically there's a reason for that; we might not want everything we could know about the grid out there for everyone to play with.

So, yeah, is there anything we could do to mitigate that?

00:39:33 – 00:41:34 – Adam Sroka

I think that's hard. I mean, without knowing what data's there, I just think that's really difficult; we have to be careful, we have to be reticent about what we share. If you're talking about really critical infrastructure, really dangerous stuff that you wouldn't want to get into the hands of malicious actors, then run it almost like a skunkworks-type internal effort: can we do this internally?

Can we simulate what attacks would look like? What's the nastiest thing someone could do with it? And they're saying, there's all the hype this week about Anthropic's model being too dangerous to launch. It's the same as all the others, right, but apparently it's so good at cybersecurity attack stuff that it will actually cause issues for some people.

Right, fine, if that's the case, then let's do these things in isolated environments; let's keep it contained and keep it sensible. I'm actually one of the few kind of AI people that says, don't do it, you aren't ready; let's do some other stuff first, let's do some guardrails, some foundational stuff. But if you think this is going to step-change the whole industry, then let's find safe ways to do it: on synthetic data, in isolated environments, things like that. There's a reason people have been working this way. I used to work at Thales; I did my doctorate at Thales, right?

And they were building parts of Trident there, and I'll probably get told off for saying that now, even though it's been built or whatever. But all that stuff, right? There are mechanisms for working on new stuff that's really dangerous if it gets into the wrong hands, so let's just use them and treat it carefully. I don't think we should chase innovation and AI at all costs; that would be really daft, which is why at Hypercube we're building our platform to go after the low-hanging fruit, the low-risk stuff, where you're not trying to control high voltage systems or anything like that.

00:41:35 – 00:41:52 – Rachael Eynon

Yeah, yeah. I think there are definitely some of those lower risk places, even within the industry, where the data might not be available just because somebody hasn't had the time to set that up and produce it in an accessible, standardised way. So yeah...

00:41:53 – 00:41:55 – Adam Sroka

Let someone else make those big mistakes first, right.

00:41:56 – 00:41:56 – Rachael Eynon

Yeah.

00:41:57 – 00:41:58 – Adam Sroka

And watch smarter.

00:41:58 – 00:42:10 – Rachael Eynon

Yeah, great. So I think we might be coming towards the end of our time, so before we go, is there anything you'd like to add, or a chance to plug yourself?

00:42:11 – 00:43:22 – Adam Sroka

Yeah, look, I would just say, as is probably apparent, I love talking about this stuff.

Don't be shy. If all anyone gets out of this is a name, that's fine; we're not overly mercenary. Just reach out if you want to have a chat about what AI means for you: we tried this thing, should we do Copilot, any of that. My company loves to try and get me out of the way as much as possible, so set up a call. Let's have a chat, especially if you're in Glasgow or Edinburgh; come see us and I can talk you through our opinions on it. I'm not right; I'm just very opinionated, and that's it. I've done a lot of this stuff. I'm incredibly opinionated, but I'm not right, and we don't know it all yet.

And the technology's changing at such a rate that I find it hard to keep up, and I live and breathe AI, right? So don't feel like you're behind; everyone's behind, it is hard. But now you've got a name. Come and see us, let's have a chat. If you want to see some of the fun toys we're building, where we actually feel like we've solved little pockets of use cases, let's have a talk about that.

But otherwise, I wish you the best on the journey with tech and AI, and who knows if we'll have a job in 12 months' time.

00:43:23 – 00:43:32 – Rachael Eynon

Great. Thanks very much. Thank you Adam. And thanks to our listeners as well, and we’ll see you again soon for another episode of the Connectology podcast.

Ready to talk?

Speak with us – have a call to find out if we fit.


Sign up to our newsletter to get free insights and know-how from our experts