Data and DevOps Digest, episode 3

Welcome to Data and DevOps Digest, brought to you by cloud consultancy Vivanti. A news and analysis podcast, we cover the trends, thought leadership and announcements happening in today’s DevOps space. Topics span from Continuous Integration and Continuous Delivery (CI/CD), Microservices and Infrastructure as Code, to Monitoring and Logging, App Replatforming and Pipeline Automation, Collaboration and more.

Episode three analyzes news related to six areas:

    1. Scope creep: Where do DevOps responsibilities start and stop in 2022?
    2. Security in DevOps: Are DevOps pros being scapegoated?
    3. Is DevOps bucking the tech market downturn?
    4. Transitioning from lift-and-shift to Cloud-Native
    5. Is the era of specialized development tools over?
    6. Are the big 3 cloud companies taking customers for granted?

Scope creep: Where do DevOps responsibilities start and stop in 2022?

Lachlan James  0:07
Hello and welcome to DevOps digest – a news and analysis podcast about everything DevOps, brought to you by cloud consultancy, Vivanti. I’m your host, Lachlan James. I’m joined by Vivanti Principal Consultant and DevOps expert, James Hunt.

Lachlan James  0:22
James: I’d like to start off by asking you one, maybe not-so-simple question: In a single sentence, describe where you believe DevOps responsibilities – in terms of the services they provide to the business – start and stop.

James Hunt  0:40
How long can this sentence be? Wait, wait, that wasn’t my sentence! Dang it. I’ll come in again [laughs].

James Hunt  0:48
DevOps is a practice and not a function. And to me, that means it starts and stops wherever the business function that’s adopting DevOps decides that it starts and stops. You might as well ask, ‘where does data-driven decision-making begin and end in the business?’. Some parts of the business need data: the direction of the business, what the market wants and needs, complex cost-centered systems involving people, programmes, etc. But some things don’t. And I think it’s the same with DevOps.

Lachlan James  1:16
So, I want to kick things off by posing that question to you, James, because there was an article that caught my attention over the past couple of weeks. It got me thinking about the responsibilities of DevOps professionals, and how that’s both expanding and evolving. The article – titled Where DevOps and Site Reliability Engineers Intersect and Diverge – appeared on InformationWeek.com on June 23.

Where DevOps and Site Reliability Engineers Intersect and Diverge

Lachlan James  1:49
The author Ganesh Datta, defines the roles and scope of DevOps responsibilities like this: 

He says “Anything that is pre-production is DevOps, while post-production work is SRE”. He extrapolates, saying that “While DevOps is primarily focused on enablement of application development and production, SREs are much more focused on the stability, or reliability, of the platform once it is in production”. As an example, he claims that the toolkits these two job roles use are “dissimilar”, saying that “DevOps teams are more focused on IT workflow and automation tools like Jenkins, Chef, Puppet and Harness”, while “SREs are focused more on monitoring, via Datadog, Prometheus, and similar platforms”.

Lachlan James  2:33
What was interesting for me, personally, is that I found this to be a really odd delineation, because I think good DevOps professionals are extremely interested in what happens post-production. That’s how they improve subsequent builds and deployments. In fact, Forbes published an interesting article in the past two weeks, which shared the top 15 Agile principles that 15 different experts believe every DevOps team should practice. And continuously reviewing post-deployment outcomes, to improve future delivery processes, was the second piece of advice offered on that list. So, James, am I just getting caught up in, you know, job title semantics here? Or is there an important point to make as well? What are your thoughts?

15 Tech Experts Share Agile Principles Every DevOps Team Should Follow

James Hunt  3:25  
Going back to the original article, I think it’s important to point out that Ganesh, the author, is the founder and CTO of a company called Cortex, which is currently going to market with a specific tool related to service management, visibility and SLA compliance in production. So it makes some sense to me that he would want to differentiate everything before from everything after. But, to my mind, this is both incorrect and dangerous.

James Hunt  3:51  
It’s incorrect because the split is precisely what we left behind when we started embracing DevOps culture and practice. If you recall, in the bad old days, the developers would toil away inside their IDEs, crafting code in isolation or inside small team contexts, and then throw a massive number of changes ‘over the wall’ to the ops team to let them figure out production. 

James Hunt  4:12  
When I did release engineering as a day job – pre-DevOps, mind you – we would spend upwards of 72 hours deploying code, finding previously unseen issues resulting from that late-in-the-game integration strategy, and then chasing the devs for fixes. Rinse and repeat, and you very quickly start looking for something better. That ‘something better’ was DevOps.

James Hunt  4:34  
When Etsy – that’s the name people may remember from the shared marketplace craft website – when Etsy adopted DevOps practices quite publicly in the 2012 to 2014 timeframe, they did so by bringing their production environments closer to the developers and accelerating those sync points between the latest available code and the latest deployed code.

How Etsy Deploys More Than 50 Times a Day

James Hunt  4:56  
The danger to me, in what he put forth in his article, is that separating out the work that’s done to enable developers, prior to going into production, from the work that’s done to keep the services spinning post-production, necessarily cuts those developers out. And I think that’s a really bad regression, personally.

Lachlan James  5:18  
No, that’s a pretty interesting take. And generally, I mean, these articles that we’ve just talked about here got me thinking about the bounds of DevOps responsibilities a little bit more holistically. And as you probably expect, James, if you Google, ‘where do DevOps responsibilities start and stop?’ – which is literally what I punched in after I was reading some of these articles – you get a variety…

James Hunt  5:39  
A massive variety.

Lachlan James  5:41  
Yeah, you get a swathe of answers. It’s quite, it’s quite funny, maybe disturbing, but there’s a lot of opinions out there about where this sits. And ultimately, most threw some shade on the InformationWeek write-up – a little bit like what you did there… 

James Hunt  5:58  
Good on them [laughing]. 

Lachlan James  5:59  
Exactly. But no, because of the cyclical nature of DevOps, you know, particularly the responsibility of building and maintaining good CI/CD pipelines. That was a common first point that was made. And to do that, again, you’ve got to think about these things from a cyclical perspective. 

Lachlan James  6:22 
So devops.com, they recently pointed out that, you know, the boundaries of responsibility remain very much up for debate, while citing, I guess, a general broadening of responsibilities from CI/CD and application testing to observability and application security.

TechStrong Con - Downturn Brings Additional Sense of DevOps Urgency

And, ClickIT summarized that cyclical nature of DevOps responsibility well, with a nice overview diagram, which they termed the DevOps value flow.

DevOps Team - Roles and Responsibilities for 2022
Common responsibilities of DevOps Teams

In a similar vein, Tiny Stacks emphasize the critical role DevOps should play in automating the full release pipeline, as well as continuously monitoring builds and deployments.

What Does a DevOps Engineer Do?

So James, is DevOps hard to define because it’s more of a philosophical approach, first and foremost, as opposed to a practice area, second?

James Hunt  7:09  
DevOps is actually easy to define. It’s a culture of practice that brings software engineering methodologies – abstraction and repeatability, mostly – to what is traditionally a one-off activity, like server provisioning and service deployment. What’s hard to do is to point to any concrete thing and say, ‘there’s the DevOps, right there’. Or to look at a team and say something like: ‘They’re 86.3% DevOps, working towards 90% by the end of the next quarter. We are on schedule!’. Everybody in this industry wants so desperately to apply a singular metric or KPI to DevOps-iness and not have to face the reality that DevOps is a thing you must evaluate in the context of some other success criteria. It may be reduced errors in production, faster delivery of features to market, greater customer satisfaction. But these are all highly context-dependent assessments. And what’s good for one team or organization may not be great for another.

Lachlan James  8:06  
Yeah, and I think that’s probably a pretty decent summation of where it’s at. I mean, let’s face it, whenever you’re explaining something, it’s convenient if you can whittle it down to a point or a metric or a thing, because otherwise it ends up being a long explanation. So but yeah, sure, I get that…

James Hunt  8:21  
That’s why our episodes are always so long [laughing].

Lachlan James  8:24  
That’s, that’s right. Exactly. So you know, one of us has to be quiet — probably me to be fair.

Do DevOps have an obligation to ‘give back’ to the Open Source community?

Lachlan James  8:32  
The Tiny Stacks write-up also said, you know, good DevOps teams and engineers should function as an organization’s ‘go-to Git gurus’. And this seems pretty fair, and it got me thinking about a related issue. So, like application engineers, DevOps engineers are also relying more and more on open source technologies. So James, do DevOps professionals also have a responsibility to report errors about, and contribute to, the continuous improvement of open source software? Because you hear that idea regularly connected to dev teams. But what about DevOps teams?

James Hunt  9:06  
I do love talking about obligations in the open source realm; it’s a topic very near and dear to my heart. Do DevOps professionals, or people practicing DevOps culture, have an obligation to report bugs and contribute patches to fix those bugs? Absolutely not. There are software licenses out there, like the GPL – the GNU General Public License – that are explicitly about protecting the ability to make changes to that software, primarily because Stallman and Co. were worried about losing access to the software that they were regularly modifying at the MIT AI Lab way back in the 70s and 80s, when the commercialization of artificial intelligence happened the first time. If professionals practicing DevOps want to contribute back, I think that’s great. I also think that’s a choice that a subset of those practitioners are not going to be particularly good at, and therefore I don’t think they should pursue it. It’s hard work that requires a certain amount of persistence and perseverance, and it has very little direct or tangible impact on keeping their services up and running.

James Hunt  10:10  
You mentioned that that idea is regularly socialized in development circles. And I believe that’s because, for developers – pre-DevOps – their primary stock-in-trade was code: consuming it and outputting it. And if, when you output it, you push it back into the upstream you consume, you make your life easier. But I don’t think that’s really the case with DevOps practitioners: It’s more about bringing the tools to bear and understanding them, more than it is actually building out net-new.

Lachlan James  10:37  
That’s an interesting way to think about that. And it’s kind of a little bit of a new perspective for me as well. But it’s interesting because we, you know, we both work with software developers in different ways, but it’s a regular thing that comes up in conversation, the more and more pervasive open source technologies become. So it’s an interesting split there.

Security in DevOps: Are DevOps pros being scapegoated?

Lachlan James  10:54  
The other thing that I came across a lot over the last couple of weeks, sort of changing gears a little bit, was the idea of security in DevOps. There’s been a heap of interesting articles published about this topic recently. And before I dive into some of this content, I just have one question for you, James: Do you feel like you’ve been framed?

James Hunt  11:17  
Well, I did once commission an oil portrait of myself, in the style of the 16th century Flemish Masters, to hang atop the mantle. But, other than that, no.

Lachlan James  11:26  
Ah James, witty as always [laughing]. Aside from the oil painting, which sounds delightful — I’d like to see it at some point — according to TheNewStack, you and the other DevOps professionals tuning in, should feel a little bit like scapegoats when it comes to software development.

Cybersec Threat Hunter to DevOps - You Have Been Framed

Lachlan James  11:46  
So Joe Fay wrote up a piece based on an interview with Tom Van DeWiele, the Principal Technology and Threat Researcher for Finnish security firm WithSecure. So, essentially, Van DeWiele argues that cloud native development, the pervasiveness of open source software, and the general expansion of the cloud native ecosystem present cyber attackers with a far larger attack surface than ever before. And as tech teams have grown and developed alongside those trends, it’s the DevOps role that has been left ‘carrying the can’ when it comes to security. And this viewpoint seems to ring true when we think about, you know, broader-scale proof points.

So for example, in episode one of DevOps Digest, we analyzed findings from Secure Code Warrior’s State of Developer-Driven Security Survey. The main takeaway was that devs don’t seem to take sufficient ownership of mitigating security vulnerabilities. In that survey, for example, only 14% viewed application security as a top priority when writing code, while 67% of devs admitted knowingly shipping vulnerabilities in their code.

Secure Code Warrior Survey

Lachlan James  12:57  
So James, what needs to change to ensure, I guess, a fairer distribution of responsibility for software security — throughout both the development and deployment phases of software delivery?

James Hunt  13:11  
I mean, simply put, security needs to be hip. Failing that, it needs to provide some tangible or visible reduction in some bad side effect or outcome. Let’s talk about testing – the cod liver oil, the medicine, of development. Testing was sort of hip in the beginning of TDD and BDD: Test-Driven Development and Behaviour-Driven Development. The developers who wrote tests first, and then implemented down to the level of required functionality, were seen as smarter, as visionaries, as devs who were more ‘with it’. But unit and integration testing stayed past the fashionable phase, because people were able to track how many production outages they had due to, quote, ‘silly bugs’: Things that happened that you could have caught with the unit tests, or could have easily caught if you had just been looking for them. Or if the QA person hadn’t forgotten a test, or if all that had been automated somehow. Those instances, those outages – the number of those went down once a healthy regression test suite was put in place.
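The test-first approach James describes can be sketched in a few lines. This is a hypothetical example (the discount function and its tests are illustrative, not from the episode): the tests pin down required behaviour, so a ‘silly bug’ regression fails the build instead of causing a production outage.

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount, clamping the percentage to a sane range."""
    if price < 0:
        raise ValueError("price must be non-negative")
    percent = max(0.0, min(100.0, percent))  # clamp silly inputs
    return round(price * (1 - percent / 100), 2)

# Tests written in the TDD style: they state the required behaviour,
# including the easy-to-miss edge cases, and run on every commit in CI.
def test_normal_discount():
    assert apply_discount(100.0, 25.0) == 75.0

def test_discount_is_clamped():
    # A 150% "discount" should floor at free, never pay the customer.
    assert apply_discount(100.0, 150.0) == 0.0

def test_negative_price_rejected():
    try:
        apply_discount(-1.0, 10.0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The point of the sketch is James’s metric: with the edge cases encoded as a regression suite, the count of outages from ‘silly bugs’ is something you can actually watch go down.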

James Hunt  14:18  
To my mind, the biggest issue with security is that it’s very hard to quantify existential threats to a business. Were there any breaches this week? How many ransomware attacks crippled the business last year versus this year? Those are very difficult KPIs to capture, and the events behind them can actually be business-ending. We’re not talking about a single outage that affected some customers. We’re talking about loss of all data because somebody had a remote code execution, or a vulnerability in a published API. It’s very hard to get those year-over-year, month-over-month.

Lachlan James  14:50  
Yeah, and I guess, whatever’s going on there – and I think that does go a long way to answering that sort of question – it seems like security is a growing concern that developers and DevOps teams have to address, as cloud and open source trends facilitate a broader array of attack vectors for hackers. And in fact, DevOpsOnline reported that, according to a recent survey by Deep Instinct, 45% of senior cybersecurity professionals are now so stressed that they’ve considered quitting the industry in 2022.

CYBERSECURITY PROFESSIONALS WANT TO LEAVE INDUSTRY DUE TO INCREASE STRESS AND EXPECTATIONS

And on top of that, IoT World Today reported on new data gathered by security service provider Kaspersky. The survey suggested that IoT cyber attacks more than doubled year-on-year during the first half of 2021, with IoT devices breached over 1.5 billion times, up from 639 million breaches throughout the entirety of 2020. So it seems clear that, you know, software security professionals, DevOps teams and developers all need to prioritize cybersecurity to ensure better collective outcomes for the industry.

IoT Cyberattacks Escalate in 2021 According to Kaspersky

Lachlan James  16:03  
And all this got me thinking about the latest iteration of the OWASP Top 10, James. So OWASP, the Open Web Application Security Project, recently updated their top 10 list of the most critical security risks to web applications. And, quite a lot has changed since the 2017 iteration, much of which is indicative of the security trends we’ve been talking about today.

Top 10 Web Application Security Risks

Lachlan James  16:37  
So James, my first question is pretty simple: For those just getting started with DevOps, or professionals not working in the weeds of the industry every day, when OWASP refers to ‘moving left’ with regards to cyber security, what do they mean? The term is cited a couple of times in the latest top 10.

James Hunt  16:56  
I love the phrase ‘move left’, as I am left-handed and believe all things – from politics to software security practices – should move more to the left. In the general sense, ‘moving left’ as a phrase means to incorporate some process earlier in the overall process of building something. For example, to move user experience left means you begin to think about what the UX – and, by extension, things like aesthetics, accessibility, user interface affordances, etc. – will be when you build the MVP, not after. And this is designed so that you take those into account sooner, and you don’t spend more time doing the wrong thing and then have to rework – which is a big problem in security that DevOps is currently grappling with. Incidentally, the whole concept of DevOps itself is a move left; what we’re moving to the left is the concern for production reality. Getting the developers to be able to run the entire application stack locally is moving left on deployment. Test-Driven Development is a ‘move left’ on code correctness verification: it now happens earlier in the development process, rather than when the users check for correctness. With security, to ‘move left’ is to take those security practices – like scans, static code analysis, fuzzing, red team pen testing – and move them closer to the actual act of writing software itself. Right now, we do these things after a deployment, whether that’s to production or into some other security testing or QA-style environment. But this is long past when the developers themselves are actively thinking about what it is they’re doing. Anything you find at that point has to be remediated and go back through that development cycle. And that’s very slow, and very disruptive, to the natural velocity of an Agile team.
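A toy illustration of the ‘move left’ idea James describes: a check that would traditionally run after deployment (a security scan) runs instead at commit time, in the developer’s own loop. Everything here is hypothetical and much simplified compared to a real scanner; the patterns and file names are made up for illustration.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only -- real tools ship far richer rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key id shape
    re.compile(r"(?i)(password|api_key)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable findings for one source file."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append(f"{path}:{lineno}: possible hardcoded secret")
    return findings

def main(paths: list[str]) -> int:
    findings = [f for p in paths for f in scan_file(Path(p))]
    for finding in findings:
        print(finding)
    # A non-zero exit code fails the commit hook or CI stage, so the
    # problem surfaces while the developer is still thinking about the code.
    return 1 if findings else 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```

Wired into a pre-commit hook or an early CI stage, the same class of finding that a post-deployment audit would surface weeks later instead blocks the change immediately, which is the whole economic argument for moving left.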

Lachlan James  18:47  
Yeah, absolutely. Much better explained than I could have ever done. Thank you. But I thought it was good to actually set that up from the outset, because I think it’s an interesting philosophy, if nothing else, and that seems to be the direction that a lot of the industry is going in, for some really good reasons. So the other thing that caught my eye was that ‘Broken Access Control’ moved from fifth position in the 2017 list to the top slot today. So James, what’s encompassed by this term, and what’s behind its rise to the most critical security risk to web apps?

James Hunt  19:21  
In the web app space, Broken Access Control is when the application should have asked for, or verified, some authorization or some entitlement for a user, but didn’t. Usually this is a forgotten check. The security-minded individuals will insist on a ‘deny by default’ posture: Unless an account has been granted a permission to do something explicitly, that activity should be prohibited. Unfortunately, that’s not how software programmes actually work. They prefer to execute, to do. Making them not do requires explicit code to check the ACL before doing. An attacker only needs to find one of those forgotten checks, or mis-applied middleware layers, or confused privilege names, to breach the systems. As for its meteoric rise from fifth to the top slot, I actually attribute this – at least from the web app perspective – to a maturation of security practices in both design and implementation. As we’ll see when we talk about the sliding trends in the next segment, mostly we’re getting better at security. But the things that are not intrinsic to how platforms work – things like access control, even having access control in the first place – those tend to rise to the top of ‘things we’re still doing badly’.
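The ‘deny by default’ posture James describes can be sketched in a few lines. This is an illustrative sketch with hypothetical names (the decorator, permission strings and handlers are invented for this example): the check lives in one shared place, so there is no per-endpoint check for a developer to forget.

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when an account lacks an explicit grant for an action."""

class User:
    def __init__(self, name, permissions=None):
        self.name = name
        # Deny by default: an empty grant set means "can do nothing".
        self.permissions = set(permissions or [])

def requires(permission):
    """Decorator enforcing the ACL check before the handler runs.

    Because every handler must declare the permission it needs, the
    attacker's 'one forgotten check' disappears: an undeclared handler
    simply isn't reachable through this path.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if permission not in user.permissions:
                raise Forbidden(f"{user.name} lacks '{permission}'")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@requires("invoices:delete")
def delete_invoice(user, invoice_id):
    return f"invoice {invoice_id} deleted"

admin = User("alice", ["invoices:delete"])
viewer = User("bob")  # no grants, so every action is refused by default
```

Calling `delete_invoice(admin, 42)` succeeds, while `delete_invoice(viewer, 42)` raises `Forbidden`; the prohibited path is the default, which is exactly the posture the category rewards.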

Lachlan James  20:37  
Yeah, and connected to that, indeed, we have sliding trends. So, authentication failures have fallen from the number two to the number seven slot – when we compare 2017 to the most recent iteration of this top 10 list. So what’s going on here, James?

James Hunt  20:56  
Two things: As the general web-using population gets more accustomed to logging into things with federated identity providers – whether that’s their Gmail credentials, their Twitter account, etc. – more solutions start to rely on OAuth too, which is a system of essentially punting on having to build authentication, challenge-response, password verification and resets, and all that stuff, and instead using pre-built and validated SDKs – Software Development Kits – and software libraries. This eliminates the inclination of software teams to roll their own authentication frameworks. More people using standard, or de facto standard, OAuth libraries means more bugs get fixed to the benefit of a larger population. You may not run into a bug in your programme, but somebody else will; they’ll get it fixed, you’ll patch that back into yours, and all of a sudden your security posture increases by virtue of just going with the flow.

James Hunt  21:51  
The second thing I think is at play here is the rise of authentication enablers. MFA is a big one. Consumers have been bitten by stolen accounts enough times now that something like Google Authenticator, or even SMS 2FA, isn’t seen as exotic or a bridge too far. And the other authentication enabler that springs to mind is password management. Ten years ago, people thought you were just a bit paranoid if you created a different password for every single service. In 2022, that’s now the default, with things like LastPass and 1Password.

Lachlan James  22:23  
Yeah, absolutely. I’m a 1Password user myself, so I can verify that – and can also verify it’s actually been a great practice for me to get into the habit of, because I am one of those people: Same password for everything. Terrible idea. So I wholeheartedly agree. So, with your DevOps hat firmly in place, James, what other trends stood out to you in the latest OWASP Top 10?

James Hunt  22:52  
Most of it made sense. Most of the categories made sense, given where people are spending their time implementing new parts of applications. But the two new entries – the ones that didn’t just change name or change rank – those are the ones that intrigued me the most. Server-Side Request Forgery, or SSRF, comes in at number 10. And the report mentions that the data shows – and I quote – “a relatively low incidence rate with above average testing coverage, along with above average ratings for exploit and impact potential”. Basically, the InfoSec community is sounding the alarm on SSRF, but the data that OWASP has access to, that they pulled into this report, isn’t substantive enough to really justify a higher ranking. I expect to see it actually climb the rankings over the coming years. Insecure Design, which comes in at number four, was even more fascinating. Because it’s not a technology thing. It’s not like SQL injection. It’s not a contract problem, like identity and authentication issues. Insecure Design is a hard look at whether we’re building a secure thing, not just building it securely. It’s a bit like installing a screen door on a submarine. With enough sealant, you can make that work. But the real question you should be asking yourself is: ‘What should we be doing instead of this?’. Something better from the start, without having to put all the mitigations in place.

Lachlan James  24:20  
Yep, no, that makes sense. I love a good analogy as well. The screen door on the submarine. No, very good. That makes sense. Yeah, think about whether that’s the right approach for the job and the mission at hand. 

Is DevOps bucking the tech market downturn?


Lachlan James  24:32  

So moving on to the next area that I want to have a quick chat about today: Basically, there was a lot of reporting on market trends — there kind of always is — but the last couple of weeks, particularly. So I guess it seems possible that DevOps is bucking the tech market downturn; or at least [seems possible] from some of the reporting that’s going on, over the last couple of weeks. Maybe it’s optimistic, but let’s have a chat and unpack some of this. 

So, as an example, the Linux Foundation released its 10th Annual Open Source Jobs Report the other week, based on 2,200 respondents. Two of the top three most sought-after skill sets directly related to DevOps. So, 69% of employers said they are looking for IT professionals who have cloud and container technology expertise – kind of right up DevOps’ alley. And 57% [were] also looking for straight-up DevOps skill sets as well. Further, 79% of IT professionals themselves said that it was either very important or extremely important to be familiar with DevOps.

The 10th Annual Open Source Jobs Report

Lachlan James  25:38  
And to support this notion of DevOps potentially flying in the face of a global tech downturn, seasoned tech journalist Mike Vizard covered an interesting session from TechStrong’s virtual conference, TechStrong Con. So, titled “DevOps: Has the Bubble Burst?”, the panel suggested that organizations will need to continue to invest in DevOps automation and cybersecurity – there’s that word again! – to ensure their long-term survival. And there were a couple of really interesting suggestions from the panelists here. So one of them, Thomas Krane, the Managing Director of Insight Partners, suggested that many DevOps companies are yet to see a decline in demand because organizations are investing in automation in order to reduce their technical debt costs. Kind of makes some sense.

DevOps - Has the Bubble Burst?

Lachlan James  26:29  
So James: Do you think this survey data from The Linux Foundation, and Krane’s comments, are some cause for optimism here?

James Hunt  26:37  
Before I answer that, I do want to key in on that last sentence: “Investing in automation in order to reduce their technical debt costs”. Keep that in mind, because that’s going to be central to the rest of my answers. 

James Hunt  26:50  
It’s important to keep in mind that The Linux Foundation focuses primarily on open source hiring, which is a bit of an odd thing to focus on. But it’s The Linux Foundation; they’re very open source friendly, they’re very open source minded. Reading the report in full, it seems that when they say that, they mean hiring people into technical individual contributor roles, specifically for their familiarity with open source technologies and tools. So the optimist in me says, ‘yeah, they’re hiring more and more people, we’re gonna beat the tech downturn, we can turn this thing around’. But one of the stats in that report is that 93% of hiring managers aren’t finding people with the right skill set. To me, that means that the practitioners out there are not bending their focus in the right direction. If you know JavaScript, that counts as an ‘open source skill’ – with the air quotes; node.js is open source, after all. So of course, if you know JavaScript, you know an open source tool. But that does me no good when I need someone with a solid background in tweaking container security and performance. So I think what might be happening is, instead of hiring a senior person to do that, I as a hiring manager may have to split that position into two non-seniors. And boom: We have an increase in hiring rates. But I’m still not quite satisfied with what I’m finding in the market. And I don’t know that you’re going to see a massive rebound in employment rates, because of that underlying mismatch of the skills being sought and the skills being offered.

Lachlan James  28:20  
So, kind of more people just getting what they can, in terms of available talent in the market, right?

James Hunt  28:25  
Open source is a very broad topic. It’s a bit like saying, we need people who understand money. Like okay, do you need people who understand loans? Do you need people who understand banking and finance? Do you need people who understand balancing a chequebook? Like, there’s a lot of things. There’s a lot of nuance inside that.

Lachlan James  28:41  
That’s right: We’re talking about cars. I’m only doing the ones with four wheels. Well, that is a lot of them, for sure. Yeah. No, absolutely. That does make some sense. So I think there’s not a misalignment of terms there, but when you actually read into the detail and think about how that plays out in the real world, we kind of walk away going: maybe, maybe some cause for optimism, maybe…

Lachlan James  29:06  
Anecdotal evidence out there at the moment also seems to offer a little bit of additional cause for optimism. Just in the last couple of weeks, there’s been news about relatively sizable funding rounds for DevOps startups. So there was $55 million for Gearset, and a $35 million Series B for DevOps acceleration platform Incredibuild. So James, as venture capitalists seem to be turning off the tap elsewhere, why does there still appear to be a little cause for optimism in the DevOps space? Is it because investors suspect that DevOps-related spending is actually going to help organizations impacted by the tech downturn weather the storm?

Software DevOps platform Gearset raises $55M
Incredibuild raises $35 million Series B for DevOps acceleration platform

James Hunt  29:47  
At this point, we’ve moved into the part of the episode where Lachlan is optimistic and I get to be thoroughly pessimistic and continually burst his bubble [laughing]. Because frankly, I think Gearset and Incredibuild are both outliers. Gearset is aligned with Salesforce, technology-wise. And it’s not a bad bet to go in on any platform that extends and enhances a big player like Salesforce. Incredibuild has two very recent-ish partnerships inked with both Amazon and Microsoft. As an investor, I’d throw money at them just to make sure I was there before Google jumped on the bandwagon and the stock price went up again.

James Hunt  30:23  
More critically, and more holistically across the industry, both of these are DevOps platform plays. And a recession on the horizon means layoffs. Indeed, lots of cash-burning, VC-backed startups have already been laying folks off. Check out the website layoffs.fyi if you want to scope out the damage there. Outside of a recession, you make more money as a company by growing consumption: more customers, more spend, more dollars rolling in the door. Once you get into a recession, it’s very hard to grow the revenue side of the profit equation. So you start looking to cut costs. 

James Hunt  30:58  
A particularly cynical view of DevOps practices, and the automation that invariably tags along, is that it reduces the need for headcount. Literally automating the job away and cutting back on the single biggest expense in any business: people. And with that in mind, I think that’s why you’re going to see a real, not a resurgence, but at least a continued influx of money into any technology company whose products or offerings or services allow businesses to potentially cut their costs. Whether that’s by reducing cloud spend, reducing headcount, or maybe not growing as fast with demand. And I think that’s where, sadly, DevOps is really going to shine during a recession or downturn.

Lachlan James  31:43  
Yeah, I think that’s pretty understandable. I mean, anything to do with process or task automation and people — and helping to streamline any of that — unfortunately results in reduction in headcount or just saves people a lot of time.

James Hunt  31:55  
And a little further on that: I personally have never found automation in the DevOps space to actually reduce the need for people. Because that’s what everybody’s afraid of, especially when you have a consultant come in and say: ‘Oh, we can automate this, we gotta make that streamlined’. [And your natural response is;] ‘Oh, I won’t have a job anymore’. In reality, there are so many other things that need to be fixed and made better and improved, that really the automation of DevOps, if you keep going, is a massive multiplier. But, my fear is that during a recession, companies will stop at the ‘well, we’ve automated so we can keep going and coast until the money numbers look better again’.

Transitioning from lift-and-shift to Cloud-Native

Lachlan James  32:30  
Yeah, I think that does make some sense. So; leaving that aside for a moment and moving on to another topic… 

James Hunt  32:39  
You’re gonna lift and shift into another topic, Lachlan?

Lachlan James  32:42  
I am; I was gonna avoid doing the bad pun. But listen, someone has to do the jokes. 

James Hunt  32:48  
I’m here for the bad dad jokes. That’s why I’m here. 

Lachlan James  32:51  
That’s it. So we are indeed lifting and shifting into a new topic. So I came across a great article by Alexander Gallagher on devops.com. And I actually encourage everyone to go and have a read of it. I thought it was a really interesting take on some things that are happening in the industry. So go and have a look at that irrespective. 

Lachlan James  33:09  
It explores the different stages of cloud adoption, as more organizations transition from I guess, traditional ‘lift and shift’ approaches to kind of [initially] get there, to cloud-native — doing things for the cloud, from the ground up.

Moving From Lift-and-Shift to Cloud-Native

So Gallagher cites research from analyst firms Forrester and Gartner. And bear with me, this has a point to it… Forrester forecasts that 2022 “will see big organizations move decisively away from lift-and-shift approaches to the cloud, embracing cloud-native technologies instead”. And similarly, Gartner states that more than 85% of enterprises “will embrace a cloud-first principle by 2025 and will not be able to fully execute on their digital strategies without the use of cloud-native architectures and technologies”. But the fact of the matter is that many organizations – particularly large enterprises – are still playing the ‘lift and shift’ game pretty hard, as they still have lots to do in order to transition non-cloud-native apps, systems and workloads, and get to that ultimate end-state these analyst firms are talking about.

Predictions 2022 - Cloud Computing Reloaded
Gartner Says Cloud Will Be the Centerpiece of New Digital Experiences

Lachlan James  34:19  
So for example, a recent article for TechWire pointed out that: “mainframes are still used by 71% of Fortune 500 companies, handle 90% of credit card transactions, run 68% of production workloads, and continue to support 44 of the 50 largest banks and the top 10 insurance companies”. So, you know, it’s pretty obvious that mainframes are still very much a thing.

Extending mainframe investments with modern software development and DevOps solutions

Lachlan James  34:48  
James: Why do industry analysts have this propensity to evangelize shiny new things and gloss over the fundamental, unsexy issues, which many organizations are still grappling with on their digital transformation journey? 

James Hunt  35:05  
Because nobody wants to read about mainframes. I mean, present company excluded; I’d love to read about mainframes. But fundamental issues are hard to cut down to under 3,000 words, right? About 10 minutes’ reading time, on average. And on top of that, no one really wants to talk about their mainframes. We have a youth culture problem in the technology space, as much as we do anywhere else. Every morning, for example, I get a spam message from Medium, the blog publishing platform. I signed up for it; I totally understand where it comes from. But it’s loaded up with all the best tech writing of the day — all of it on Medium, of course. According to that daily round-up, no one ever programmes in C. No one writes or runs Lisp. The only operating system is Linux. And the only way to run an application is via a FaaS serverless offering, like AWS Lambda or Google Cloud Run. Simply put, the industry has an attention deficit, and industry analysts looking to publish need eyeballs, so that they can keep driving engagement and viewership. I don’t know how to fix that. But I think that’s why you see an outsized emphasis on all things new, and all things future and forward-looking, versus a long, hard look at the underlying problems that still plague a lot of our digital organizations. 

Lachlan James  36:22  
Yeah, absolutely. I just find that it’s one of those really odd things, because I’ve followed and worked with analyst firms a lot in the type of roles that I perform at tech companies. And I just always find it bizarre. Because the majority are still working their way through issues which — if you just read analysts’ publications and blogs — you’d think were past and done with. So if their mission is to help organizations and tech companies understand how to embrace new technologies and actually benefit from them, you kind of feel like they’re doing them a disservice, right? It feels like the balance is off.

James Hunt  37:01  
Yeah. Another way to look at that is this: Everybody’s looking at the same future, but everybody comes from very different pasts. So if you’re an industry analyst, and you’re trying to get more people to understand what it is you’re trying to talk about, talking about the thing that will happen soon, is a lot more widely applicable than ‘here’s how this bank dealt with the fact that they bought System 360 in the 70s’.

Lachlan James  37:26  
Right, because MY bank’s not doing that. That doesn’t relate to me. No, I get it. People are converging, from many different points, towards some of those same new technologies. So it’s easier to talk about where you’re heading, because it can be relatable to a far wider net of people. So that does make a little bit of sense. But again, I think it’s one of those really strange situations. 

Lachlan James  37:55  
So I guess, in acknowledging the fact that there still are a lot of people using mainframes and playing the lift and shift game… Back to that article by Gallagher: he offers five bits of advice for companies still doing lots of that lifting and shifting, to help them prepare for the cloud-first future we’re talking about. I’ll go through the five things, and then get your thoughts on them, James. So he talks about: 

    1. Embracing containerization, as it brings organizations closer to the cloud-native mindset of “write once, deploy anywhere” — though I feel like you might tear that apart in a moment, James.
    2. Deploying tactics that reduce demand for in-house personnel when transitioning to the cloud, such as outsourcing, and technology choices like serverless computing or managed data center services.
    3. Leveraging automation: through containerization, model-driven approaches that help automate life cycle management, and cloud-native-compatible observability tools that automatically monitor microservices.
    4. Ensuring good software provenance, to support the new update strategies that containerization brings.
    5. And lastly, thinking practically: he says that embracing cloud-native wholesale might not be the best approach for every organization. Consider the benefits of hybrid cloud, and which apps can be cloud-native versus which should remain in the data center. 

Lachlan James  39:26  
So some interesting advice there, James. Some of it’s pretty reasonable. So what do you think about Gallagher’s list? Anything to add here?

James Hunt  39:34  
Well, I do know that, going forward, anytime I have a list of things — and I feel it’s a little short — I’m going to add ‘think practically’ to the end. Because I think that’s universally applicable in all cases: Whatever you’re doing, don’t just blindly do it. But..

Lachlan James  39:49  
Well my next one was gonna be ‘think crazy’ [laughing].

James Hunt  39:54  
The ‘write once, deploy anywhere’ thing sticks out at me, because that’s not really what containerization is about. Anyone on Apple Silicon, running Docker Desktop, knows this intimately. As does anyone who operates on a non-x86 platform — like a Raspberry Pi, or other ARM boxes — because the containers that exist today, on places like Docker Hub, are often built for the architecture that the developer used. And usually that’s x86. Usually, that’s what you’re deploying in production, so usually we don’t notice. In fact, it wasn’t until the Apple M1s hit the scene that I think a large portion of hip, trendy developers noticed that, ‘Hey, there’s actually a platform under this container thing’. You should still definitely move to containers, because as a packaging format it’s second to none: dependencies are included and, more importantly, isolated, and the whole thing is executable. So I agree with the idea that you should embrace containerization. I’m just not 100% with Gallagher on the ‘why’.
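To make the architecture point concrete, here is a minimal sketch of how this shows up in practice using Docker’s own tooling. The image name `myorg/app` is a hypothetical placeholder, and `docker buildx` assumes a reasonably recent Docker install:

```shell
# Show the host CPU architecture (e.g. x86_64 on Intel, arm64/aarch64 on Apple Silicon)
uname -m

# Inspect which architecture a pulled image was actually built for;
# a mismatch with the host is exactly what bites M1 users
docker image inspect myorg/app --format '{{.Architecture}}'

# Publish a multi-arch image so both Intel and ARM hosts can pull a native build
docker buildx build --platform linux/amd64,linux/arm64 -t myorg/app --push .
```

If an image only exists for `linux/amd64`, Docker Desktop on Apple Silicon has to fall back to emulation (or fails outright) — the ‘platform under this container thing’ James is describing.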

Lachlan James  41:00  
Right; good point, badly articulated.

James Hunt  41:02  
I do also disagree with reducing reliance on in-house expertise. Or rather, I think I would frame it substantially differently. You should pick technologies that fit well with the skills your existing team has today, and with their aspirations of where they want to go. And it does have to be both. If your ops people, for instance, love Postgres, and they want to gain a better understanding of how to tune it, how to scale it, etc., then as a business, picking, say, Amazon RDS with MySQL as the backing service really doesn’t do you any good. Technically, yes, Amazon manages RDS. But anyone who’s used a managed service from a cloud provider knows that they’re managed, to a point. And where that point stops, your people have to pick it up. So you’re still, kind of, on the hook for all of the technologies that you’re outsourcing to vendors and cloud providers. And it makes sense to not necessarily run away from in-house experience, but to augment it. 

James Hunt  42:02  
For me, ‘lift and shift’ is about reducing the noise of managing out-of-band systems — anything that doesn’t go with the flow of how you want to deploy your whole fleet of software services. For most people, that’s going to be containerization, atop managed Kubernetes. But the point is not so much the containers as it is the uniformity. Think of trying to run financial analysis inside a business where, let’s say, 95% of the company uses Excel spreadsheets, and 5% uses Apple Numbers. Apple Numbers is a great little piece of home software; it doesn’t run enterprise finances very well. And having to constantly cater to that difference, that impedance mismatch between those two systems, will cause a massive spend in time and money. So you’re much better off getting the 5% to join the 95%. And to me, that’s lift and shift. A lift-and-shift strategy that gets you all of the inconsistency of your on-prem data center — but now it’s metered usage on someone else’s hardware — to my mind, that’s a failure.

Rapid-fire round: Is the era of specialized development tools over?


Lachlan James  43:13  
Yep, yep. Makes plenty of sense, James — especially when you say it, and I don’t. So, moving on to the last section I want to talk about today: it’s time for the ‘Rapid Fire Round’ to round things out. Two stories caught my attention in this regard, and they pose a controversial question I’d love your take on.

Lachlan James  43:36  
So James, what are we talking about? Writing for TechWire, Nima Badiey claimed that, and I quote, “the era of specialized development tools is over”. To support this notion, he argues that, in contrast to the specialized development tools needed to support legacy systems like mainframes, and I quote: “new developers entering the workforce are learning and honing their skills on modern generalizable git-based systems, new open source runtimes and modern DevOps practices”.

The era of specialized development tools is over

So James, does he have a point? So is the era of specialized developer tools over? Or, because it’s a contributed piece from the current VP of Alliances at GitLab, do we take it all with an enormous grain of salt?

James Hunt  44:36  
Well, prior to his position at GitLab, Nima was actually inside of Pivotal, where he did a lot of work with Cloud Foundry, and you’ll see a lot of these same motifs pop up there: you know, push the code and don’t worry how it’s deployed; as long as you’re doing the ‘right thing’ in Ruby or Go or Java or JavaScript, you’ll be fine.

James Hunt  44:58  
So, no. JavaScript is probably one of the ‘generalizable open source runtimes’ he’s talking about, and it’s only general in the sense that you can pick and choose your editor. In every other way, it’s a highly opinionated, highly fragmented ecosystem. We have two package managers, multiple distribution libraries, not to mention the tonnes of reactive front-end frameworks, transpilers, polyfill implementations, etc. So I think it’s a bit disingenuous to say that devs are cutting their teeth on the same tech now — that everybody does things the same way, so the specialized dev tools and specialized bespoke automation pipelines are a thing of the past.

Lachlan James  45:39  
Absolutely. Hey, listen: I think if you’re working for a company, and you’re building a platform approach, and your enemy is a bunch of different technologies, you’re going to argue they’re becoming obsolete and everyone’s moving towards a single way of doing it on one platform. Sure. No, I totally agree with that one.


Rapid-fire round: Are the big 3 cloud companies taking customers for granted?


Lachlan James  45:54  
So next up on the Rapid Fire Round: TechStrong Research recently released a report titled ‘DevOps and the Cloud: New Ways to Pay for the Public Cloud’. It surveyed 458 development professionals about their cloud usage. While it unsurprisingly found 93% of respondents are already using AWS, Azure or GCP, it also revealed that nearly two-thirds are considering buying from an alternative cloud vendor outside the big three. And 28% are already using alternative providers to augment services. The biggest drivers respondents cited for using alternative providers are reducing reliance on a single provider (55%), improved price/performance (38%) and recent cloud outages (31%).

DevOps and the Public Cloud - New Ways to Pay for the Public Cloud

So James: What’s behind this trend? Are the three big cloud companies simply taking their customers for granted?

James Hunt  46:49  
I don’t think it’s that… I had a joke prepped about the cyclical nature of the music industry, and the fact that alternative rock is now going to be back, and I can get my acid-washed jeans out and listen to grunge again. But I think, ultimately, what’s going on here is that no one ever got fired for buying IBM. But that didn’t make people want to put all their business on IBM’s books. GCP, Azure, AWS: those are the safe bets. If you walk into a cocktail party with a bunch of developers — those exist, I’m assuming — and you say, ‘Hey, we’re thinking about using AWS’, no one’s gonna be shocked and want to know more. 

The so-called ‘alternative’ cloud providers — like DigitalOcean and Linode — actually offer some pretty compelling pre-built offerings that take a lot of the executive decision-making overhead off the table. My biggest complaint with Amazon is that they basically give you all the tools and none of the fasteners: you have to put it all together. You can, though, build some amazing stuff. Google’s a little bit better; Azure is a little bit worse. But take a look at something like LKE — the Linode Kubernetes Engine. It’s particularly nice. If you need a Kubernetes cluster, here it is. You don’t get to muck about with the internals; you don’t get to choose what flavor of operating system; you can choose the version, but you can’t choose the CNI; you can’t choose really much of anything. And it just works. And they spend an inordinate amount of time making sure that their LKE offering, or their managed database offering — whatever offering these alternative cloud providers are giving out — is polished, because they have to compete with the big guys. And I think that’s really the appeal of those alt clouds: they’re cheaper, they’re easier to understand, and sometimes they’re faster and better all around.
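As a rough illustration of how little there is to decide, here is what standing up an LKE cluster can look like with the `linode-cli` tool. The label, region, node type and Kubernetes version are example values, the flag spellings come from the linode-cli LKE plugin and may vary by version, so treat this as a sketch rather than gospel:

```shell
# Create a managed Kubernetes cluster with a single pool of three nodes
# (assumes linode-cli is installed and configured with an API token)
linode-cli lke cluster-create \
  --label demo-cluster \
  --region us-east \
  --k8s_version 1.28 \
  --node_pools.type g6-standard-2 \
  --node_pools.count 3

# The API returns the kubeconfig base64-encoded; decode it and point kubectl at it.
# Replace <cluster-id> with the id from the create call; depending on CLI version
# you may need to strip a header line from the output first.
linode-cli lke kubeconfig-view <cluster-id> --text | base64 -d > kubeconfig.yaml
KUBECONFIG=kubeconfig.yaml kubectl get nodes
```

Compare that to assembling a comparable EKS cluster by hand (VPC, IAM roles, node groups, CNI choices) and the ‘tools but no fasteners’ point becomes pretty tangible.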

Lachlan James  48:30  
So, the answer is: Big three, they’re still doing okay, nothing to worry about here?

James Hunt  48:34  
I think they’re doing fine. And I don’t know of anyone who’s decided to throw their lot in exclusively with the alternative clouds. I know a couple of people who have gone native Linode. I don’t spend that much time in the DigitalOcean community to know if there are any product offerings or service companies who are proudly 100% DO. But I think a lot of companies are using it as a hedge. As you said, the ‘let’s reduce reliance on a single provider’ strategy. That stat was actually a bit lower than I would have expected, especially since the overall figure was, what, 28% already using alternatives? But yeah, the fact that these are not the big three is both a blessing and a curse to the big three.

Lachlan James  49:23  
Yeah, absolutely. I mean, I think it makes sense for people to go and supplement what they’re doing. They’re hedging their bets a little bit, getting some specific services that work really well for some particular contexts. I think that makes a lot of sense. As you said, things that are easy to unwrap and unpack and use.

Until next time

Lachlan James  49:38  
Alright, James. So I think for this episode of DevOps Digest, that probably means we’re out of time. I want to thank you for your interesting commentary, as always. And, of course, everybody for tuning-in. 

Lachlan James  49:49  
So to receive regular DevOps news and analysis, you can subscribe to our YouTube channel, or go to vivanti.com/contact-us and sign up to our mailing list. So everyone, it’s been a pleasure to have your company, and bye for now.

If You Liked That, You’ll Love These:

Building a new kind of consultancy

At Vivanti, we’re building a new type of cloud consultancy. One based on trust and empowerment. We are looking for savvy, technical people who want to forge their own path in the industry.