
This Week in Enterprise Tech Episode 561 Transcript

Please be advised this transcript is AI-generated and may not be word for word. Time codes refer to the approximate times in the ad-supported version of the show.

Lou Maresca (00:00:00):
On This Week in Enterprise Tech, Brian Chee, Curtis Franklin, and I unpack the latest shakeup in cybersecurity with the phase-out of older TLS protocols, but that's definitely not without side effects. We'll talk about it. Plus our guest, Predibase's co-founder and CEO Dev Rishi, will dig into the movement of large language models as they approach production environments and how privacy is part of that decision making. You definitely shouldn't miss it. TWiET on the set!

TWiT Intro (00:00:27):
Podcasts you love from people [00:00:30] you trust. This is TWiT.

Lou Maresca (00:00:41):
This is TWiET, This Week in Enterprise Tech, episode 561, recorded September 15th, 2023: That Cloud Looks Like a Llama. This episode of This Week in Enterprise Tech is brought to you by Nureva. An industry first, Nureva's new Pro Series, the HDL310 for large rooms [00:01:00] and the HDL410 for extra large rooms, gives you uncompromised audio and systems that are incredibly simple to set up, manage, and deploy at scale. Learn more at nureva.com/twit. And by Discourse, the online home for your community. Discourse makes it easy to have meaningful conversations and collaborate anytime, anywhere. Visit discourse.org/twit to get one month free on all self-serve plans. And by Thinkst Canary: [00:01:30] thousands of irritating false alarms help no one. Get the single alert that matters. For 10% off and a 60-day money back guarantee, go to canary.tools/twit and enter the code TWIT in the "How did you hear about us" box. Welcome to TWiET, This Week in Enterprise Tech, the show that is dedicated to you, the enterprise professional, the IT pro, and that geek who just wants to know how this world's connected. I'm your host, Lou Maresca, your guide through the big world of the enterprise. But I'm not going to guide you by myself. I need to bring in the professionals and [00:02:00] the experts, starting with our very own Mr. Curtis Franklin, principal analyst at Omdia, and he always has the pulse of the enterprise. Welcome back to the show, Curtis. What's been keeping you busy this week?

Curtis Franklin (00:02:10):
Oh, just been taking pulses left and right, talking to a bunch of folks, writing a lot, doing my standard bit of analysis. The good news is that in much of the security world, it's been a fairly standard week. Although we [00:02:30] did have, as so often happens, one really public cybersecurity event. The good folks at MGM Resorts found themselves having to go back to just about using pen and paper to keep track of people who were checking in and trying to make reservations. It's a good example of why you have to be careful no [00:03:00] matter how large or small you are. And I think it's also interesting that it shows us how we like to break things down into IT, the computers in the back room, and OT. In a home it's IoT, but in business it's OT, operational technology. And this was a case where an attack on the IT side whacked the OT, because things like all those smart door locks [00:03:30] and the various bits and pieces of things that help people check in, the little kiosks, the places around their retail, all of that was hammered because of, as they were liking to call it, an "IT event."

(00:03:52):
Going to be a lot of lessons to be learned here, and we will find out a lot more about it, because now of course we have disclosure requirements from [00:04:00] the Securities and Exchange Commission. My heart goes out to the people in IT over there. They have not had a fun week at all. By comparison, the rest of us have had a week that was pretty much beefcake.

Lou Maresca (00:04:17):
Indeed it has. Indeed it has. Thank you, Curtis. Interested to get into that. Well, I also have to welcome back our very own network and gadget guy, Mr. Brian Chee. Cheebert, I hear that you've been visiting some rockets this week, huh?

Brian Chee (00:04:28):
Oh yes, indeed. [00:04:30] The nice thing is that the Kennedy Space Center is only about an hour away from where we live, so I was able to go and visit. We actually have annual passes so we can see them now. One of the other things I've been doing that's more enterprise in nature is that the makerspace that Curt and I are both part of had an interesting problem. We had a fairly good sized lightning [00:05:00] storm in the area, because central Florida is the lightning capital of the United States, and sadly a spike went through a UPS and wiped out the power supply for the primary router for the makerspace. So one of the things I'm doing is I'm actually recycling some old gear. Belkin and a few other people actually make DC uninterruptible power supplies. Now just [00:05:30] really, really quick, there are two major categories of uninterruptible power supplies.

(00:05:35):
There are standby and there are online. Standby means there's a switching time, so that means if you're running a really fast machine, it could actually switch and lose more than a few clock cycles and possibly have problems. Online means the power is going through a rectifier into [00:06:00] the batteries, back out through an inverter, and then runs the actual equipment. So one, it smooths out and cleans up power in a pretty dramatic way, but it also means that when power goes away, there's no switching time. Well, the cool thing about DC uninterruptible power supplies is that there is no switching time. It is technically an online device. It's always charging the batteries, and the batteries are what's outputting, [00:06:30] in this case, 12 volts to the router. Now they do exist, you can still buy them. The Belkin unit unfortunately is end of life, but there are now lithium-ion based DC UPS units that have multiple power outlets, five volt, 12 volt, and I think 24 volt in a lot of them, and depending on how much power you pull, it lasts X number of minutes, hours, or what [00:07:00] have you.

(00:07:01):
They're very, very popular for security cameras, but they're also really great for cable modems and things like that. So because it's not going back and forth, it is in theory a little more efficient, and I'm kind of sitting on it and watching to see how well it really does work. I put a fresh battery in, it uses a lead-acid battery, and we'll see how well it works, [00:07:30] and I've got some doodads on there. I've got to go in and periodically test to see the power quality, but it's going to be interesting, and it's possibly one of those technologies that went end of life too soon.
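For readers who want to estimate how long a setup like this rides out an outage, here's a rough back-of-the-envelope sketch in Python. The 7 Ah battery capacity and 1.5 A router load are illustrative assumptions, not specs from the unit Brian describes.

```python
# Rough runtime estimate for a small DC UPS feeding a 12 V router.
# The figures below (7 Ah battery, 1.5 A load) are placeholders.

def runtime_hours(capacity_ah: float, load_amps: float,
                  usable_fraction: float = 0.5) -> float:
    """Estimate runtime in hours. Lead-acid batteries shouldn't be
    drawn far below ~50% depth of discharge, hence the default."""
    return capacity_ah * usable_fraction / load_amps

if __name__ == "__main__":
    hours = runtime_hours(capacity_ah=7.0, load_amps=1.5)
    print(f"Estimated runtime: {hours:.1f} hours")  # ~2.3 hours
```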

Lou Maresca (00:07:51):
Yeah, too bad. Just too bad. Well, thanks Brian. Good to hear. Well folks, we've had a busy week in the enterprise. Today we'll unpack the latest shakeup in cybersecurity with the phase-out of older [00:08:00] TLS protocols. The question is, is your data safe? Plus our guest talks about a breakthrough in AI development from startup Predibase, aiming to simplify machine learning for developers globally. Don't miss out on the deep dive with Predibase's co-founder and CEO Dev Rishi on the movement of large language models to actual production environments. The question is, how is privacy part of that decision making? We'll get into that. Lots to talk about, so definitely stick around. But first, it's been a busy week in the news of the enterprise, so let's jump into this week's news blips. Organizations [00:08:30] worldwide now have a new tool in their toolbox for digital transformations with the latest partnership and integration news.

(00:08:36):
According to this article from CIO.com, Oracle and Microsoft are enhancing their collaboration to facilitate businesses' journey to the cloud, bolstering cloud-native development and AI experimentation. Now Microsoft's Satya Nadella and Oracle's Larry Ellison unveiled Oracle Database@Azure, a synergized offering designed to simplify the migration of data center workloads from [00:09:00] on-premises infrastructure to the cloud while opening fresh avenues in artificial intelligence. Now Oracle is strategically placing its formidable database hardware, including Oracle Exadata, and software in Microsoft Azure data centers. This move grants clients direct access to Oracle database services on Oracle Cloud Infrastructure through Azure, promising superior performance, security, and reliability. Now Oracle Database@Azure aims to accelerate the migration of the majority of data that remains on premises, a shift [00:09:30] seen as imminent by the industry leaders that are out there. Now the synergy between the database and AI is expected to fundamentally reshape business processes here, offering low latency access to data, a critical element in AI functionality such as fine-tuning and pre-training models.

(00:09:47):
Oracle will manage OCI services directly within Microsoft's global data centers, initiating in North America and Europe, to ensure a single low latency operating environment. Moreover, it supports a range of Oracle services, including the Autonomous [00:10:00] Database and Real Application Clusters, providing customers with rapid response and resolution for mission critical workloads. This initiative not only simplifies deployment and management but also empowers clients to develop new cloud-native applications using OCI and Azure technologies, including Azure's AI offerings, all through a seamless connection provisioned from the Azure portal, a promise of multi-cloud systems that integrate the best of Oracle and Microsoft technologies. [00:10:30] This actually might be the necessary step forward for the industry to start moving and experimenting in AI.

Curtis Franklin (00:10:38):
Well, according to the UK's National Cyber Security Centre, social engineering has replaced ransomware as the key leading edge technique used in cyber extortion schemes in 2023. In a speech at 44CON in London this week, NCSC's operations director Paul Chichester told [00:11:00] the audience that ransomware remains a major concern for businesses because the number of incidents continues to increase. But with that said, many attackers, and a growing number of them, are skipping the encryption step of malware. They just steal data, put it on a dark web leak site, and then solicit payment in exchange for taking it down before they press go. This is part of a continuing evolution in criminal [00:11:30] practice: from encrypting data to put it out of reach, to encrypting data after exfiltrating a copy that they threaten to make public, to simply stealing the data and threatening to make it public unless a ransom is paid.

(00:11:42):
This most recent development means that the traditional suggestions to protect against ransomware (remain current on patches, have a great backup and recovery scheme, things like that) no longer work. Now the emphasis returns to the old school things [00:12:00] of keeping attackers away from data in the first place, using tools like user behavior analytics and multifactor authentication at the top of the list. Now it's interesting to look at the size of the ransom demanded under the new scheme. Where we're used to seeing ransomware demands that mesh nicely with the coverage for most cyber insurance programs, the pure fraud play tends to come with a price that's just a bit under what the victim would pay to the relevant [00:12:30] regulator if the loss of personal data was discovered. This makes it seem like a payoff, and a payoff that is the cheaper option. Unfortunately for the victim, that's not the case. Once the data is stolen, the fines can come due regardless of whether the data is ever made public. So all this is just another stage in the everlasting arms race between criminals and civilians, a race that doesn't look like it's going to slow down [00:13:00] anytime soon.

Brian Chee (00:13:04):
So we've heard the doom and gloom: oh, we're going to use up the world's supply of lithium. Well, this ExtremeTech.com article means maybe that's not going to happen quite so soon. The headline is "World's largest lithium deposit found along the Nevada-Oregon border." So if you believe their back of the envelope estimation, [00:13:30] this is a very, very significant deposit of lithium. Anouk Borst, a geologist not involved in the project, told the Royal Society of Chemistry it could change the dynamics of lithium globally in terms of price, security of supply, and geopolitics. Choosing to industrialize the area isn't so simple, though. The McDermitt Caldera is culturally significant to several indigenous groups, including the Fort [00:14:00] McDermitt Paiute and Shoshone and Burns Paiute tribes. The caldera holds many first foods, medicines, and hunting grounds for tribal people both past and present, according to the People of Red Mountain, a committee representing all three tribes, who with others wrote in a recent statement that the global search for lithium has become a form of green colonialism.

(00:14:26):
The people most connected to the land suffer while those severed [00:14:30] from it benefit. This isn't the first time indigenous tribes have protested mining at the caldera. Members of the Fort McDermitt Paiute and Shoshone and Burns Paiute tribes say the Bureau of Land Management neglected to provide ample time for community input when it approved Lithium Nevada's work at Thacker Pass, where 31 Paiute were massacred by government soldiers in 1865. Turning the caldera's untouched region, still used for tribal purposes, [00:15:00] into lithium mines would add to this cultural distress and disrupt local ecosystems home to multiple endangered plant and animal species. Hey, so I'm sorry, this is not the first area of cultural significance where a mineral of value has been found. Lithium just happens to fuel the tidal wave of high density energy storage for vehicles, home computers, and many other things. However, extracting lithium is a [00:15:30] massively dirty process that could potentially destroy areas important to tribal peoples.

Lou Maresca (00:15:38):
This week from Cybersecurity Insiders is another spotlight on innovation aiming to revolutionize security operations centers, the nerve centers of cybersecurity. Now as we know, these hubs grapple daily with an onslaught of alerts, with human analysts often becoming the bottleneck in a system inundated with more threats than can actually be addressed in a timely way. A new service from Radiant Security is [00:16:00] leveraging AI to bolster the efficiency and effectiveness of SOCs. Now dubbed the AI copilot (sounds familiar, right?), this platform integrates seamlessly into SOC workflows, promising to enhance productivity, identify previously missed threats, and accelerate response times. Now here's how it works. Radiant's AI copilot automates the entire security triage and investigation process, grounded in a rich dataset including insights from the MITRE ATT&CK framework. It conducts a dynamic Q&A process to analyze each alert, generating a customized response plan, [00:16:30] which analysts can then fine tune and execute at various levels of automation according to the organization's preference and situation.

(00:16:37):
Now, outperforming human analysis, the system boasts a high 90% accuracy rate. That's pretty good accuracy. It operates around the clock, guaranteeing an in-depth analysis of every alert and potentially uncovering threats that might otherwise go unnoticed. But what really stands out is its transformative potential for the role of the analyst: it promises to elevate junior analysts into top [00:17:00] contributors by automating triage and offering step-by-step guidance. Radiant even sees it as more than a product and more of a commitment to transforming SOC management, empowering teams to respond more efficiently and effectively to threats, which could be a game changer in the complex landscape of cybersecurity. Well, folks, that does it for the blips. Next up, the News Bites. But before we get to the News Bites, we do have to thank a really great sponsor of This Week in Enterprise Tech, and that's Nureva. Nureva's meeting room audio technology [00:17:30] has a history of wowing IT pros.

(00:17:33):
In fact, Duquesne University has over a hundred Nureva devices installed, and one of their senior technical analysts recently said, "I can't say enough about how impressed I am. Audio has been my life's work for 30 years and I'm amazed at what the Nureva mic and speaker bar can do." Nureva has made another leap forward with the introduction of their Pro Series, featuring the HDL310 for large rooms and the HDL410 for extra large rooms. Now [00:18:00] for the first time, you can actually get pro audio performance and plug-and-play simplicity in the same system. Before the Nureva Pro Series, multi-component pro AV systems were the only way to get pro audio performance in large and extra large rooms. Nureva continues to amaze IT pros with these Pro Series devices. Now in their online demo highlights, the Nureva audio expert can be heard clearly from under a table, behind a pillar, or any other obstruction that's out there.

(00:18:28):
It's pickup performance that many conventional [00:18:30] systems can't match. Let's talk coverage here. The HDL410 covers rooms up to 35 feet by 55 feet with just two mic and speaker bars. Imagine equipping an extra large meeting room or a lecture hall with just two wall-mounted devices. You can even use them individually in a divisible room. Cool. The HDL410 also features a unified coverage map, which processes mic pickup from two devices simultaneously to create a giant single [00:19:00] mic array. The HDL310 covers spaces up to 30 feet by 30 feet with just one mic and speaker bar. Nureva is also about simplicity. The HDL310 takes about 30 minutes to install, and the HDL410 takes about 60 minutes. With continuous auto calibration, Nureva audio automatically and continuously adapts to the changes in a room's acoustic profile. And with Nureva Console, their cloud-based device management platform, it takes the pain out of tasks [00:19:30] like firmware updates, checking device status, changing settings, and much more. Bottom line: with the Pro Series, Nureva makes it simple to quickly and cost effectively equip more of your spaces for remote collaboration. Learn more at nureva.com/twit. That's N-U-R-E-V-A dot com slash twit, and we thank Nureva for their support of This Week in Enterprise Tech. Well folks, it's time for the News Bites. Now this week we're going to talk a little [00:20:00] bit about moving protocols forward and how it impacts organizations. Let me throw it over to Curtis.

Curtis Franklin (00:20:07):
Thanks, Lou, appreciate that. Well, we all know that one of the things that contributes to security breaches is hanging on to old technology too long, specifically technology which, when it was introduced, was plenty secure, but over the course of time [00:20:30] has been subject to a lot of different techniques for getting through its various layers of security. Well, that is true of Transport Layer Security. Transport Layer Security, or TLS, is the successor to Secure Sockets Layer, SSL. If you're wondering why you've heard of these, they're the basic security [00:21:00] schemes that are used in things like secure websites. They're foundational security for the transactions between end users and servers through the internet. Well, when it first came out we had TLS version 1.0 and 1.1, and they worked for a long time. They did fine, but they've been replaced. [00:21:30] We are now up to TLS version 1.3, at least most of us are, and therein lies the problem.

(00:21:42):
Microsoft and Google, in order to promote better security, are saying that they are going to be much more aggressive about deprecating the systems, the applications, [00:22:00] all of the various bits and pieces that run 1.0 and 1.1, especially those that are able to run 1.0 and 1.1 by default. Now in a lot of cases, in order to maintain backwards compatibility, they will allow people to enable 1.0 or, generally, 1.1 compatibility, but they've got to go in and be very explicit about [00:22:30] making that so. Why? Well, it's to encourage people, and specifically, let's be honest, to encourage organizations, to move to the more secure, more up-to-date 1.3 or at the very least 1.2. The problem for a lot of companies is that they have things that are [00:23:00] sitting on their network with 1.0 or 1.1 baked in, and getting from there to the place where everyone wants them to be is going to be something of a challenge. And I know, Brian, that you have thought about this in terms of just how [00:23:30] much fun and how easy it's going to be for companies to go in and do sweeping changes to their infrastructure to make things 1.2 and 1.3 compatible. I mean, piece of cake.
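As a quick way to see where your own endpoints stand, here's a minimal sketch using Python's standard ssl module that reports which TLS version a modern client negotiates with a server. The hostname is a placeholder; point it at your own hosts.

```python
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect with default (modern) client settings and report the
    protocol version the server agreed to, e.g. 'TLSv1.3'."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()

print(negotiated_tls_version("example.com"))  # placeholder host
```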

Brian Chee (00:23:44):
Right? Yeah. I'm actually going to narrow this argument down a little further. I continue to do a lot of work in the communications industry, but a lot of my work is with charities. And what a lot of people [00:24:00] like doing is getting tax credits or tax deductions by donating their old equipment to charities and taking the tax write-off. Sounds great, until you've got some older equipment that hasn't had its firmware upgraded yet and doesn't support the newer web browsers or the newer encryption. So now all of a sudden, unless you've got someone [00:24:30] with a lot of talent on staff who can go and try and hack this all together, you're left with things that can't be configured at all, so you might as well just throw them away. Which is not good for the environment, it's not good for the charity, and it's not good for the people that are donating, because, for instance, a lot of school districts have now said, no, we're not going to take donations of older equipment, because it just ends up in the dumpster.

(00:25:00):
[00:25:00] We can't upgrade it, we can't support it. So my complaint is, there used to be, especially on Chrome, when you try to go and open up a browser to, say, a piece of equipment, you could, without going through any dialog boxes, just literally type in a sentence. It used to be "I acknowledge this is unsafe," [00:25:30] and it would let you in without having to go through and radically modify the browser settings. Doesn't work anymore. It's not there, it's not acknowledged. Nobody has it. Bummer. So my answer is, I actually keep some old Acer bookshelf machines that are running Windows XP, and I don't upgrade them. I specifically don't even patch them. I don't connect [00:26:00] them unless I'm trying to get into this old gear. So to the folks at Microsoft, Google, and so forth: this is a great goal. Getting everybody to use better encryption that is less likely to get hacked and so forth is a wonderful goal, but you've forgotten about the charities, you've forgotten about the schools. We still [00:26:30] need a way to be able to log into this older equipment and try to upload new firmware to it. Please don't forget us.

Curtis Franklin (00:26:43):
Well, that deals with a lot of the, let's call it, organizations that may not be at the top of vendors' lists when they're thinking about the customers [00:27:00] that are going to propel all of this. Lou, you tend to be on the cutting edge of things. I mean, do you see an issue even for companies that are more current, that don't depend, say, on donations of old equipment? Are there issues there at this fairly draconian cutoff [00:27:30] from the older versions?

Lou Maresca (00:27:33):
Yeah, I mean obviously the newer version has some pretty cool features, like perfect forward secrecy ciphers and some fancy stuff like that to help, but there could be blind spots. I think those are the big parts of it, and the blind spots come from the fact that a lot of your networking appliances and your servers are still using these older protocols. And in fact, a lot of your services that are analyzing performance or watching traffic, that kind of thing, [00:28:00] are also using older protocols. So that means that those tools, unless they're updating themselves to the latest protocol, are going to stop working, which means you basically have a blind spot until they get updated. So that's one thing. I think another thing is that TLS 1.3, from my understanding from reading a lot of the documentation, removed support for some of the older cryptographic algorithms that are out there, which are obviously less secure and less performant.

(00:28:28):
And the problem with removing stuff [00:28:30] is, anytime you write an API or something like that, you always know that when removing an API you need to give people enough time, because if you don't, then obviously they're forced to go do an upgrade, and forcing people to go do stuff sometimes is very expensive. In this case, removing algorithms that, at the end of the day, people might be dependent on for their services, their appliances, their hardware, means that those will stop working, or they'll upgrade other services to 1.3 and expect things to be backwards compatible. [00:29:00] So I think there's that problem as well. And also there's a bunch of new features that come with 1.3, and of course being on the bleeding edge is not without issues. There's a new feature out there called 0-RTT resumption, which is basically a transport-layer capability that allows a client to resume interrupted TLS connections with a server without performing a full handshake after the fact. That means it helps improve performance and you get less data loss there, but unfortunately [00:29:30] it actually increases your security risk too.

(00:29:33):
There are actually some security issues that go along with that, and you have to secure it differently. Anytime you force things forward, you always cause these, I'd say, N-minus-one backwards compatibility scenarios, but you also run into the future: you have to prepare yourself for the new technology. There could be holes there as well.
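To hunt down the blind spots Lou describes, a sketch like the following can flag hosts that still accept the deprecated protocols. One caveat: newer OpenSSL builds may refuse to offer TLS 1.0/1.1 at all, in which case the handshake fails on the client side and the check reports false.

```python
import socket
import ssl

def accepts_legacy_tls(host: str, port: int = 443) -> bool:
    """Audit sketch: offer only TLS 1.0/1.1 and see if the server
    accepts. Certificate checks are disabled because this is an
    audit probe, never a pattern for real traffic."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    context.minimum_version = ssl.TLSVersion.TLSv1
    context.maximum_version = ssl.TLSVersion.TLSv1_1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # server agreed to a legacy protocol
    except (ssl.SSLError, OSError):
        return False

print(accepts_legacy_tls("example.com"))  # placeholder host
```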

Curtis Franklin (00:29:54):
Well, I think this is going to be one of those things, and this isn't brand new. I mean, [00:30:00] it is an evolution, it's a migration that we started talking about what, two years ago? Brian, let me ask you real quick: why are we still talking about this? Why is it taking so long to make what sounds to most people like it should be a fairly basic change?

Brian Chee (00:30:28):
A lot of organizations just [00:30:30] don't have the money for a forklift upgrade. That's the bottom line. The pandemic did a real number on a lot of people's budgets, the economy has not exactly been super healthy, and there's still a lot of older gear. I would challenge a lot of our viewers: go around your company. I'll lay odds you have an old HP Jet[00:31:00]Direct laying around. Those don't support TLS 1.3, I'll tell you right now. There are a lot of things like that. Webcams, or actually surveillance cameras, are also very subject to that issue. I actually ran into it with healthcare equipment at the gym at the University of Hawaii. I actually had [00:31:30] to take their physical therapy equipment offline, because it didn't even support SSL, it didn't support encryption. But the physical therapy equipment was so expensive that the athletic department didn't have a prayer of replacing it anytime soon.

(00:31:52):
So it's going to be around for a long time, as long as there's not a big injection of cash. [00:32:00] I don't know, how about I go and wave a magic wand and say, President Biden, how about we all say there's a tax deduction or a tax credit for dumping old stuff? It's not going to cost the United States government that much money, but it'll be one heck of a shot in the arm for the IT industry. People will start [00:32:30] trading out the old gear. And one of the questions from Loquacious: it would be nice if companies donated better equipment so this isn't necessary, meaning we don't have older gear. Well, there was a program. Bill Clinton actually signed an executive order for donating two-year-or-younger equipment: you could take two years of depreciation on your IT gear, and [00:33:00] if you donated it to a Carnegie One research institution before it hit the third year, you got tax credits. There was also a follow-on to do the same thing for K through 12 schools. Both executive orders are gone, so maybe President Biden might want to go and revisit that executive order, because it wasn't a bad executive order.

Curtis Franklin (00:33:30):
[00:33:30] It's going to be interesting to hear how different companies approach this as we get deeper and deeper, because I think it's pretty much guaranteed that no one is going to backslide on this. We're not going to suddenly have all the major vendors of the world say, you know what? We were wrong. Let's just go back to TLS v1.0 and call it a day. That's not going to happen. So we're [00:34:00] going to keep migrating, we're going to keep evolving, and guaranteed, we're going to keep having organizations that run headlong into this with older equipment. So that's where we are today. I look forward to seeing where we are, say, this time next year.

Lou Maresca (00:34:22):
Thank you, Curtis. Well, next up we have our guest, but before we get to our guest, we do have another great sponsor of This Week in Enterprise Tech, and [00:34:30] that's Discourse, the online home for your community. For over a decade, Discourse has made it their mission to make the internet a better place for online communities. By harnessing the power of discussion, real-time chat, and AI, Discourse makes it easy to have meaningful conversations and collaborate with your community anytime, anywhere. Would you like to create a community? Visit discourse.org/twit to get one month free on all self-serve plans. Trusted by some of the largest companies in the world, Discourse is open source [00:35:00] and powers more than 20,000 online communities. Whether you're just starting out or want to take your community to the next level, there's a plan for you. That's right: a basic plan for a private, invite-only community; a standard plan if you want unlimited members and a public presence; and a business plan for active customer support communities.

(00:35:18):
Jonathan, developer advocacy lead at Twitch, says, "Discourse is the most amazing thing we've ever used. We have never experienced software so reliable ever." One of the biggest [00:35:30] advantages to creating your own community with Discourse is that you own your own data. You'll always have access to all of your conversations, and Discourse will never sell your data to advertisers. Discourse gives you everything you need in one place. Make Discourse the online home for your community. Visit discourse.org/twit to get one month free on all self-serve plans. That's discourse.org/twit, and we thank Discourse for their support of This [00:36:00] Week in Enterprise Tech. Well folks, it's my favorite part of the show. We actually get to bring in a guest to drop some knowledge on the TWiET riot. Today we have Dev Rishi. He's co-founder and CEO of Predibase. Welcome to the show, Dev.

Dev Rishi (00:36:14):
Thanks very much, Lou. Really happy to be here.

Lou Maresca (00:36:16):
We're excited to have you here. We'll talk about some fun topics today, but before we get to all that: our audience is a huge spectrum of experiences, whether they're just starting out or all the way up to the CEOs and CTOs of the world, and some of them love to hear people's origin stories. Can you take us through your journey through [00:36:30] tech and what brought you to Predibase?

Dev Rishi (00:36:31):
Yeah, definitely. I know Brian asked me to keep it concise. I'm going to do my best. My background actually started with studying computer science and machine learning in school over a decade ago. I did my master's focused on computer science and statistics, and then, I like to say, I sold out from my engineering background and became a product manager. And so I became a PM at Google across a number of different teams, and I saw how Google Research productionized machine learning inside of our products, [00:37:00] like the Google Assistant. And then ultimately I actually spent most of my time on Google Cloud's external machine learning platform. Today it's called Vertex AI; at the time, we were trying to figure out the name for it. While I was there, I was also the first PM for Kaggle, which is a data science and machine learning community.

(00:37:18):
If you're not familiar with Kaggle: when I joined a number of years ago, we were at just about a million users in the community, which was amazing to me, because I didn't know there were a million people out there that knew the first thing about data science, machine [00:37:30] learning, statistics, Python, pandas. But it turned out that was the case. By the time I left, about two and a half, three years ago, we were over 5 million users, and today it's actually about 14 million, which I think just speaks to the growth of interest at the individual level in being able to use ML. And so I saw that on the community side at the same time that I was at GCP seeing organizations struggle to actually put machine learning in production. And it was this [00:38:00] dichotomy of a lot of interest in ML without a lot of the ability for organizations to productionize and operationalize it outside of the Silicon Valley tech companies. And so in 2020 I met my co-founders, Piero, Travis, and Chris. We were all excited about this new way to make machine learning more accessible to organizations. It was built on top of an open source project one of my co-founders had authored. So we started Predibase as a company to make it easier for engineers to build models about two and a half years ago, and we're excited [00:38:30] to be on that mission today.

Lou Maresca (00:38:32):
Fantastic. Now with any ease of developing things, obviously people sometimes have to take a step back. They're worried about what's going to happen once you move something from a testing or experimentation environment into production. What are you seeing from some of the organizations out there, or even the developers out there, once they start on a product like Predibase or any other product that's using new technologies, like some of the generative technologies that are out there? What are their reservations? What's [00:39:00] preventing them from moving forward?

Dev Rishi (00:39:01):
Definitely. So we actually did a survey to better understand this recently. We surveyed about 150 professionals that were either data scientists, execs, or engineers at their companies, and this was across US tech companies, healthcare, financial services, and others. And one of the really surprising things to me, well, there was a two-part thing that was actually surprising. One of them was how strong the interest was in LLMs and AI. Out of all the companies that we surveyed, 85% of them said they either [00:39:30] already were experimenting with large language models or had immediate plans to do so inside the next 12 months, to be able to start kicking off a work stream on this. Only 15% said that they didn't have immediate plans. At the same time, only 13% of the ones that we surveyed actually had a single large language model in production.

(00:39:49):
And so I think we've seen this just incredible dichotomy between the level of interest, and the fact that people are starting to experiment with them today, and how difficult it can be to get into [00:40:00] production. Now we tried to understand what some of the blockers were, and I think you've heard some of them in the past, right? The LLMs hallucinate, they say things that we might not have expected. But the most surprising one was, at least in our survey, three quarters of the organizations we surveyed said they just couldn't use commercial LLMs, the walled gardens behind something like an OpenAI API or any of these foundation model APIs. And so to me that was actually one of the most fascinating results of the work that we did.

Lou Maresca (00:40:28):
Now let me ask you a question. When you say they couldn't [00:40:30] use them, is it because of compliance reasons? Is it because the data is too sensitive and they don't want to move it outside their private boundaries? What is the main reason that you're hearing?

Dev Rishi (00:40:39):
The number one reason that we heard was privacy. And it's funny, privacy really, I think if you dive deeper and start to speak to some of these organizations, means two things. The first means data provenance and governance: what do you mean my data is going to go outside of my firewalls, or outside of my VPC, to some third-party API? But the second is actually ownership, [00:41:00] which is, I think if companies actually believe, which a lot of the organizations that we speak with do, that these models are ultimately going to be their IP, their competitive differentiation and advantage, then they need some ownership over the models that they're actually going to be relying on. They don't trust this idea of weights and models being hidden away externally instead of being something that they have direct access to. And so this idea of privacy was the number one cited reason, but I really think that goes [00:41:30] hand in hand with ownership. To me this is very similar to what we saw in some of the early stages of cloud adoption back in 2006, where we saw the staying power of these types of concerns across organizations and the need to be able to own some of this and have that agency in-house.

Lou Maresca (00:41:46):
Now we had Salesforce on in previous weeks, and obviously they talked about all the millions of different AI models and so on that they're putting into production. But the problem is they're not seeing fast adoption, and the one thing is obviously privacy issues. [00:42:00] How do you see organizations looking beyond this? What do they have to get over, or what do they have to do, to feel better about this?

Dev Rishi (00:42:09):
Yeah, the really nice thing is that the history of machine learning has already, in part, been really well versed in how to solve this problem. So I think that the main thing organizations need to do is be more oriented around open source and ownership. The reason I think those two things go hand in hand is [00:42:30] I think more and more organizations are going to realize that using a really large external commercial LLM, something like GPT-4 or others that customers might be integrating today, is kind of like using a cannon when you need a scalpel. Instead, what you really need are much smaller, fine-tuned large language models that you actually host internally inside of your VPC. Today, the difficulty for any organization who wanted to host a very large language model, 175 billion parameters or even larger, [00:43:00] like GPT-3 and others, is that just getting the actual GPU compute to host it will be really expensive and cost prohibitive. And candidly, good luck getting access to the A100s in order to be able to actually even do that.

(00:43:14):
And so what I think organizations need to shift their mental model to is: I don't need a cannon to solve all of these problems; for some of them I need a scalpel. And that really means fine-tuned open source large language models. We've seen things like BERT, which is about a 300 million [00:43:30] parameter model, so many orders of magnitude smaller, really be the workhorse of natural language tasks at enterprise organizations in the past. And this extends from 300 million to the recent Llama 2 7-billion and 70-billion parameter variants that Meta released as well. And I think even OpenAI has spoken to how fine-tuning smaller models is competitive with, or even better than, a very general purpose model at solving specific tasks. My favorite quote from a customer, just to wrap this up: general intelligence is [00:44:00] great, but I don't need my point-of-sale system to recite French poetry. And so being able to have something that's a little bit more task specific, I think, is really where organizations will go.
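As one illustration of the "scalpel" approach, here's a hedged sketch of parameter-efficient fine-tuning with LoRA via the Hugging Face transformers and peft libraries. The model name, target modules, and hyperparameters are illustrative assumptions, not a description of Predibase's internals.

```python
# Sketch: attach LoRA adapters to a small open-source causal LM so
# only a tiny fraction of weights are trained for a specific task.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumes you have access to the weights
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% trainable
# ...then fine-tune with your usual Trainer loop on task-specific data.
```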

Lou Maresca (00:44:08):
We talk a lot about how a lot of these generative technologies are like little sideshows, and people are impressed in the meantime, but they don't know how to commercialize them. I think that's pretty interesting. Now, I've talked to a lot of organizations, especially startups, who are commercializing a lot of this, and what I'm seeing, in fact I've talked to one on the pharmacy data analytics [00:44:30] side, is where, like you were saying, they spin up VPCs, virtual private clouds, that have the infrastructure that can run these models, and they inject the models in there. That's where it's basically their own walled garden around private data sets that come from other companies, pharma companies, so they can then do the analytics, produce the insights, give them that data, and then of course shut these environments down and basically remove all outside visibility into it. Is that model [00:45:00] similar to Predibase's? Is that the model you're seeing going forward in order for people to feel more secure and happier that their data's not going anywhere but what they have control over?

Dev Rishi (00:45:09):
That's our model, and that's the only model where, when we've spoken to customers, we get the head nod from the CIO in the room or anyone else who's reliant on compliance and privacy. So just to dissect that a little bit more: what platform providers like Predibase really provide in that world is the infrastructure that deploys inside your own virtual private cloud [00:45:30] and is able to spin up any of the large language models that you want. And one of the benefits you get here is not just the privacy but the ownership and choice. With a platform like Predibase, you can spin up any open source model. If you want to use Llama 2 70-billion, you can. If you want to use different variants of these, you can, and you can experiment across them. So we provide the infrastructure, you spin that up, it's serverless, so it'll essentially run for the point at which it's being consumed and then turn down afterwards, and it just stays co-located where your data is.

(00:45:58):
So if you're in AWS [00:46:00] and you have your large language model spun up on a GPU cluster there, managed by an infrastructure provider like Predibase, and your data in Redshift, those things can be co-located so that there's minimal friction in being able to use that directly, and you use that data integrated with your large language model. And we can talk about ways that you can customize an LLM to your data, there are a few techniques, but you can use that data for the analysis that you need and then spin down the service, so it's not actually costing you any money or getting more exposure after you're done.

Lou Maresca (00:46:30):
[00:46:30] So two more questions, and then I want to bring my co-hosts back in here. I think the first thing is, most AI and machine learning has been cost prohibitive, meaning most companies can't afford to get started on it, and if they do, it ends up scaling exponentially for them and costing too much to maintain. What is the getting-started cost for an organization? Is it low enough that people can start funneling a bunch of data in and seeing insights right away, and [00:47:00] they just pay for compute? Or is it kind of like a subscription model where you pay for data storage and all that stuff? What model does it follow?

Dev Rishi (00:47:09):
Yeah, that's a great question, and I really like it, because I think that AI has been cost prohibitive for many organizations for two reasons. The first was the technical talent gap in some regard, which is, do I need to hire a PhD? And then the second was the actual underlying infrastructure. Inside of our platform, what we're really looking for is, do you have what we call an ML-curious [00:47:30] engineer? That means two things. The first is an engineer who's willing to read technical documentation and get up to speed, and the second is that they have some interest in understanding how these ML systems get architected. But that's more or less it. From there, the only things that we really charge for are the actual underlying compute that you're using. And so basically, as you use more, that's kind of how the consumption really gets driven. Now, in terms of what that actually looks like for that engineer:

(00:47:59):
Our [00:48:00] entire system is predicated on this idea of taking a declarative approach. A declarative approach means the engineer specifies the what, and the system figures out the how. And so they specify their entire pipelines using these configurations, simple files that they can just extend over time. And the key idea behind that is that they can control what they want: they can specify the piece of the pipeline they want in a single line of config, and then all the rest of the boilerplate, and all the rest of the adaptations that need to be done, are filled in with our best practices. So that's how we've generally approached the platform, [00:48:30] and we're actually built on top of open source foundations. So if you want to see a few examples of how people can get started on this, Ludwig AI is a great initial source for allowing engineers to onboard onto it immediately.
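Here's a minimal sketch of what that declarative style looks like with the open source Ludwig project Dev mentions. The dataset and column names are hypothetical placeholders; the config is the "what," and Ludwig fills in the "how" (preprocessing, architecture, training loop).

```python
from ludwig.api import LudwigModel

# One text input, one category output; everything else defaults to
# Ludwig's best practices unless you override it in the config.
config = {
    "input_features": [{"name": "ticket_text", "type": "text"}],
    "output_features": [{"name": "category", "type": "category"}],
}

model = LudwigModel(config)
train_stats, _, _ = model.train(dataset="tickets.csv")   # hypothetical file
predictions, _ = model.predict(dataset="tickets.csv")
print(predictions.head())
```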

Lou Maresca (00:48:43):
Fantastic. Well, I do want to bring my co-hosts back in, but before we do, I do want to thank another great sponsor of This Week in Enterprise Tech, and that's Thinkst Canary. Most companies discover they've been breached way too late. Take these scenarios. How would your company actually fare here? Last month an attacker compromised one [00:49:00] of your users and has been reading the company chat ever since, searching for keywords and embarrassing data. Would you know? Your lead developer was targeted and compromised at the local Starbucks. Would you even notice? You could. That's right. With Canarytokens, drop fake AWS API keys on each and every enterprise laptop. Attackers compromising your users must use them, and when they do, they reveal themselves. Simply put, Canarytokens are tiny tripwires you can drop into hundreds [00:49:30] of places. They follow the Thinkst Canary philosophy: ridiculously easy to deploy and ridiculously high signal quality.

(00:49:37):
Just three minutes for setup, no ongoing overhead, nearly zero false positives. You can detect attackers long before they actually dig in. Each customer gets their own hosted management console, which allows you to configure settings, manage your Thinkst Canaries, and handle events. There's little room for doubt: if someone browses a file share and opens a sensitive-looking document on your Canary, you'll immediately be alerted [00:50:00] to the actual problem. It's rare to find a security product that people can tolerate, and it's near impossible to find one that customers actually love. Hardware, VM, and cloud-based Canaries are deployed and loved on all seven continents. Go to canary.tools/love and find a selection of unsolicited tweets and emails full of customer love for Thinkst Canary. Visit canary.tools/twit, and for just $7,500 per year, you'll get Canaries, your own hosted console, upgrades, [00:50:30] support, and maintenance. And if you use code TWIT in the "How did you hear about us" box, you'll get 10% off the price for life.

(00:50:38):
Thinkst Canary adds incomparable value, but if you're unhappy, you can always return your Canaries with their two month money back guarantee for a full refund. However, during all the years TWiT has partnered with Thinkst Canary, the refund guarantee has never been claimed. Visit canary.tools/twit, enter the code TWIT in the "How did you hear about us" box, and we thank Thinkst [00:51:00] Canary for their support of This Week in Enterprise Tech. Well folks, I've been talking with Dev Rishi, co-founder and CEO of Predibase, a lot about AI and bringing machine learning models to production, something organizations are having a challenge doing. I do want to bring my co-hosts back in. Who should we go to first? Brian?

Brian Chee (00:51:21):
Well, I'm going to ask a research question. I come from a research background, and one of the challenges about 15 [00:51:30] years ago was that I needed to go and find out when Chinese factory ships were showing up at Kure Atoll, which is the very, very northern part of the Hawaiian island chain. And the problem was communications. I needed something to be able to tell me when these factory ships showed up, but my communications were insanely expensive. What I wanted was a model. So here's how it pertains [00:52:00] to what we're talking about. Large language models have been shrinking. We all cross our fingers that they'll keep shrinking. Do you think they're going to shrink enough that we can start using them in IoT, so I can load models into the edge, so that instead of having to say, well, we have a ship of this size, blah, blah, blah, I just say no factory ship? Are we going to get there? Are we even heading in that [00:52:30] direction?

Dev Rishi (00:52:31):
I think the answer is definitely, and in general, there are three different techniques that I think people use for shrinking deep learning models. The first is pruning: they take all the layers that exist in these models and go ahead and strip some out. The second is quantization, which basically refers to the level of granularity that these models are going to have their data points saved in. And then the third is distillation, which is, we're going to take this really large language model [00:53:00] and teach a smaller student model how to mimic it. What's been most fascinating to me is the rate, using these three different techniques, at which large language models have already shrunk: Llama 2 7-billion being competitive with Llama 2 70-billion, or GPT-3's 175 billion, in the course of what was released really just a few months ago.

(00:53:26):
And being able to go ahead and start to be operationalized. So I think [00:53:30] what you're going to see is large language models shrink down, as well as edge and IoT infrastructure and inference be optimized for running models that are on the order of tens of millions to hundreds of millions of parameters. And I think that's going to be the world in which you're going to be able to run some of these IoT and edge use cases. It reflects a trend. We've seen a two-part trend in a weird way, Brian. We saw small models get created, think [00:54:00] about your recurrent neural networks and others that had a limited number of layers, all the way to the large language model end, where you started to see new capabilities really start to get demonstrated. And now we're really basically scaling those back and seeing how much we can get away with. And the trend, I think, has been really positive, and it doesn't seem like we're anywhere close to being at the edge of our optimizations. The very last thing I'll say on this point: even internally in our testing, we're able to serve a Llama 2 7-billion model on a single T[00:54:30]4 GPU, which is one of the most commoditized, cheapest GPU instances you can get in AWS. And so I'm very optimistic for what we'll be able to do on-device in a year.
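Of the three shrinking levers Dev lists, quantization is the easiest to show in a few lines. Here's a generic PyTorch sketch of dynamic int8 quantization on a toy model; production LLMs typically use more specialized 4- or 8-bit schemes, so treat this as an illustration of the idea, not the exact recipe.

```python
import io
import torch
import torch.nn as nn

# Toy model standing in for a much larger network.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization: Linear weights stored as int8, activations
# quantized on the fly at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32 size: {size_mb(model):.2f} MB")
print(f"int8 size: {size_mb(quantized):.2f} MB")  # roughly 4x smaller
```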

Brian Chee (00:54:39):
Well, speaking of GPUs and processors, I happen to have an Intel neural processor. It's actually a USB device, and the original tutorial I got was, okay, first off, spin up Debian and do things. GPUs were at that time [00:55:00] going crazy because everybody was trying to do Bitcoin mining; you couldn't get GPUs. Interestingly enough, neural processors, and sadly I'm under nondisclosure, so I can't say whose, are starting to appear on single board computers for specific reasons. Is a GPU really necessary, or are neural processors enough, or are they just apples and oranges when it comes to large language models?

Dev Rishi (00:55:30):
[00:55:30] I think the difficulty you get is that there are advantages to GPUs, and specifically, I'll say for now, NVIDIA GPUs, in the large language model world, for two reasons. One of them is the underlying chipset and the way those are optimized for the types of matrix multiplications you need in these LLMs. But the second, which I wouldn't underestimate, is actually the software stack that's been built on top of these GPUs, and with NVIDIA I'm talking especially about CUDA. And so one of the difficulties, I think, is not only being able [00:56:00] to run these models on top of the underlying hardware, but also the ability for the software to be something that a developer, let's say the average developer at a tech company or at another organization, would actually be able to use to start to run inference and be able to train. And so I think that there will be innovations that happen in the hardware space going forward, and I actually think that you'll see other types of processors become [00:56:30] increasingly competitive with GPUs on a purely performance basis. When I was at Google, we saw this with TPUs, tensor processing units, which take a different architecture from your conventional GPUs and the A100s, and really had promising performance results. But there's an element of usability in the software stack that ended up being kind of the overriding concern for what got used commercially.

Curtis Franklin (00:56:54):
Well, you've been talking about the limits that can be placed [00:57:00] on LLMs and have them still be LLMs. Do you think that we're going to be able to convince enough executives, enough people who make purchasing decisions, that you can have these limits and still have useful AI with good guardrails? I mean, there are a whole bunch of issues here. [00:57:30] I think that it's probably the way that LLMs have delivered their results, in a very human-like language, that has spooked the living daylights out of people. Where do you see that coming down? First of all, do you think my analysis is correct there, and do you think there's a good, useful way to fix it?

Dev Rishi (00:57:52):
I think your analysis is correct, and I think the way that I think about the useful way to fix it is really in the orientation of the use cases that execs think about [00:58:00] solving at their organization today. So one of our board members is from an investment firm in Palo Alto called Greylock and sits in on a lot of other enterprise organizations' CIO meetings. And one of the most consistent things that they've said over the last two quarters is that at every C-level meeting, at every board meeting that they've been a part of, "What is our AI strategy?" has been something that the CEO, the CIO, the CTO have all been asking. [00:58:30] And so the great news in my mind, really, there, Curtis, is that we actually do have a lot of that top-level exec buy-in that this is something that we need to be able to do.

(00:58:39):
And, I think this might be a little pessimistic, but there's a little bit of existential fear that if company A uses an LLM and company B doesn't, company B is no longer going to be competitive over five to ten years if company A is automating some of its tasks. So I think there's a lot of that push. Where I see it fall down a lot is [00:59:00] actually just picking the wrong use cases for LLMs right off the bat. And if there is a single piece of advice that I would give to execs that are evaluating LLMs for their internal use cases, it would be to start simple. And so what I mean by the wrong use cases is, I've heard of a few organizations that saw the generative capabilities that these have, and one of the types of questions that they want to ask is, how should I maximize my revenue at my organization?

(00:59:27):
The truth is, that's [00:59:30] a hard question for a CEO to answer, and a single model probably does not have those answers either. Instead, I would orient people to start thinking about use cases where, number one, non-determinism, the fact that the model might put out something that isn't perfect, is okay, and number two, you have the data to actually build a competitive moat and edge. And so some of the use cases where we see a lot of success, I would say, are around personalization and recommendation systems. It's okay, within [01:00:00] some reason, if I show you, let's say, three different products and one of them wasn't the one that you were going to click. And similarly, classification tasks that are trying to automate things like customer support and others, where you have a human in the loop and the LLM is really just making them a lot faster. And so with the exec-level conversations, what I really think I would emphasize is: start on the simpler end. Don't think of an LLM as general artificial intelligence. Think of it as a really efficient [01:00:30] way to solve one task-specific problem that involves natural language.

Curtis Franklin (01:00:35):
I love what you're saying there, and it fits nicely with what I hear a lot in my segment of the market, which is security, where people are talking about LLMs as human assistants, something that increases the effectiveness of a human operator rather than something designed to replace a human operator.

(01:01:00):
[01:01:00] And I've got to ask, because I'm going to use a current event to frame a question. We've got a couple of major strikes going on in Hollywood, where one of the major issues for writers and actors is that producers apparently believe that artificial intelligence can replace [01:01:30] the humans in those roles. And I've heard, in other circumstances, say with a merger, a CEO saying, well, we're going to get efficiency because we're going to replace 50% of our administrative workers with AI in the next two or three years. Where do you come down on the realistic prospects of AI [01:02:00] wholesale replacing large swaths of the workforce? Is that executive getting way ahead of themselves, or is that something we really do need to be thinking hard about right now?

Dev Rishi (01:02:15):
When you say wholesale replacement, you impose a very high bar for what the models need to be able to do in order to be successful. And by the way, I think we saw this with self-driving. Didn't self-driving feel like it was right around the corner in 2015? But it turns [01:02:30] out you need to get every condition right: how it drives in snow, what it's going to do with a bicyclist. And that's critical if you're going to take the driver out of the driver's seat. Instead, I think the much more likely outcome, and the organizations that are going to be successful over the next 12, 24, and 36 months are the ones that recognize this, is that these models augment each user to be a lot more efficient. I wouldn't trust an LLM to write me a full script, but I would trust a scriptwriter with an LLM to write me a script two times [01:03:00] faster than one without it.

(01:03:02):
And so that's where I really think you're going to see these types of LLMs pushing the boundaries of productivity. Then the organization will need to make the determination: if my scriptwriters are two times faster, and these other workers are a lot more efficient, do I want to make more of the same goods and services, or am I happy at the current level of output? But ultimately, I think these models' fundamental weakness [01:03:30] is really their non-determinism. They're probabilistic. And the best approach we have to hedge that out today is to have a human in the loop. I think that will continue to be the case over the next 12 to 24 months, and I'm saying that as someone who's very bullish on this technology. I think it is very much a human-in-the-loop model.
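The human-in-the-loop hedge Dev describes is often implemented as a simple confidence-threshold routing rule: accept the model's answer when it is confident, and queue the item for a person when it is not. Here is a minimal sketch; the stub model, the 0.85 threshold, and the review queue are all illustrative assumptions rather than any specific product API.

```python
# A minimal human-in-the-loop routing sketch. The model stub, the threshold,
# and the review queue are illustrative assumptions, not a real product API.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # assumed threshold; tune per task and risk tolerance

@dataclass
class Prediction:
    label: str
    confidence: float

def stub_model(text: str) -> Prediction:
    """Stand-in for a real LLM classifier call."""
    return Prediction(label="billing", confidence=0.62)

# Items a human reviewer must confirm: (original text, suggested label).
review_queue: list[tuple[str, str]] = []

def route(text: str) -> str | None:
    """Auto-accept confident predictions; queue the rest for human review."""
    pred = stub_model(text)
    if pred.confidence >= CONFIDENCE_FLOOR:
        return pred.label                    # model handles it outright
    review_queue.append((text, pred.label))  # human in the loop
    return None

route("My invoice total looks wrong this month.")
print(review_queue)  # the low-confidence item now awaits a human decision
```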

Lou Maresca (01:03:46):
I see. Well, thank you, Dev. Really interesting topic, and we really appreciate you being here. Unfortunately, we're running low on time, so we want to give you a chance to tell the folks at home where they can get more information about Predibase and how they can get started.

Dev Rishi (01:03:58):
Definitely. So [01:04:00] please check us out at predibase.com. The easiest way to get started with us is just to go to our free trial. You'll be able to use it for free, test it out, and see if it actually lives up to everything you'd like to see. We're the easiest platform, in our view, for building and serving your own machine learning models inside your cloud. And if you want to check out our open-source foundations, they're at ludwig.ai, which you can run anytime on your own as well.

Lou Maresca (01:04:22):
Fantastic. Thank you again. Well, folks, you've done it again. You've sat through another episode of the best enterprise and IT podcast in the universe, so definitely tune your podcatcher [01:04:30] to TWiET. I want to thank everyone who makes this show possible, especially my wonderful co-host, our very own Mr. Brian Chee. Cheebert, what's going on for you in the coming weeks? Where can people find you?

Brian Chee (01:04:40):
I am truly hoping to get more feedback from our viewers on how well you liked that hands-on segment we did last week on out-of-band management. I threw out a few potential topics on Twitter, and I'd love to hear more from you folks [01:05:00] about what kinds of hands-on things you'd like. I was throwing out ideas: maybe we do point-to-point wireless bypasses, wireless lasers and things like that, how to do long-haul copper, things like that. What do you want? If we don't have the expertise in house, okay, we're going to go find a buddy or a friend who has that expertise, and we'll see if we can get it covered. Anyway, [01:05:30] we'd love to hear from you. I'm ADVNETLAB, advanced net lab, on Twitter. You're also more than welcome to throw email at cheebert, spelled C H E E B E R T, at twit.tv. You can also send email to twiet@twit.tv, which will hit all the hosts. You folks are the ones who are going to tell us what you want. Our numbers, we could do better, and we'd like to do [01:06:00] better, but they're not going to get better unless you folks tell us what you really and truly want, and we'll see if we can go and make your day.

Lou Maresca (01:06:11):
Thank you, Cheebert. We appreciate you being here. Of course, we also have to thank our very own Mr. Curtis Franklin. Curtis, thank you so much for being here. What's going on for you in the coming weeks? Where can people find your stuff?

Curtis Franklin (01:06:21):
Well, I've got a number of things that I'm writing about. You're going to see at least one piece from me on Dark Reading next week, so [01:06:30] be looking for that. You might also see me doing a little bit of video work over on LinkedIn; I enjoyed that, and I want to get back to it. But the big thing is I'm already starting to think about what the trends for 2024 are going to be, and, wait, let me think, huh, I'll bet artificial intelligence will be in at least one of them. There's your crazy preview. [01:07:00] But I'm beginning to think about that, and this is one where I'd love for people to let me know, folks from the TWiT riot, so please feel free to hook up with me on X, KG4GWA; on Mastodon, kg4gwa at mastodon.sdf.org; on LinkedIn, Curtis Franklin. Just find me, I'm on all the social medias. I'm just one of these [01:07:30] wild and crazy influencer kind of people, although I promise you'll never hear me telling you to emulate my hairstyle. This one is mine and mine alone, so you can't have it. But I would love to hear from you if you're listening, and I look forward to seeing everyone again this time next week.

Lou Maresca (01:07:52):
Thank you, Curtis. We appreciate you being here. Well, folks, I also have to thank you as well. You're the person who drops in each and every week to get your enterprise and [01:08:00] IT goodness from us, and we want to make it easy for you to listen and catch up on your enterprise and IT news. So go to our show page right now, twit.tv/twiet. There you'll find all of our amazing back episodes, of course, all the show notes and the guest information, but more importantly, right next to those videos you'll get those helpful Subscribe and Download links. Get the audio or video of your choice on any one of your devices, on any of the podcast applications out there. Subscribe, support the show, download each week, and get your information as you [01:08:30] need it. It's really a great way to stay on top of your enterprise and IT news.

(01:08:32):
Definitely do that. Of course, you may have also heard we have an ad-free podcast service as well, if you don't want to hear those ads. It also has a bonus TWiT+ feed that you can't get anywhere else, and it's only $7 a month. That's right, a lot of great things come with that amazing Club TWiT, and one of them is access to the members-only Discord server. You can chat with hosts and producers, and there are lots of separate side discussions, special events, and live chat during the show. Definitely join Club TWiT and be part of that movement. [01:09:00] Go to twit.tv/clubtwit. Of course, we also offer corporate group plans, which make it easy for you and your whole organization to get access to our ad-free tech podcasts. Plans start with five members at a discounted rate of $6 each per month, and you can add as many seats as you like.

(01:09:17):
It's really a great way for all your IT departments, your developers, your tech teams, and your sales teams to stay up to date with access to all of our podcasts. Just like the regular membership, you can join the Discord server and hear the TWiT+ bonus [01:09:30] feed as well. So definitely get Club TWiT. Of course, there are also family plans, which I use: it's $12 a month, you get two seats, it's $6 for each additional seat after that, and you get all the advantages of the single plan as well. Lots of options out there, so definitely join Club TWiT at twit.tv/clubtwit. Now, after you subscribe and download some of the episodes, you can impress your friends, your family members, and your coworkers with the gift of TWiET. I'll tell you one thing: we have a lot of fun on this show, and I guarantee they will have fun listening as well.

(01:09:59):
So definitely have them [01:10:00] subscribe and enjoy the show. Now, after you subscribe, if you're available on Fridays, that's right, right now, each Friday at 1:30 PM Pacific time, we do this show live. You can go look at all the different streams that are out there: just go to live.twit.tv in your browser and choose the stream of your choice. You can come see how the pizza's made, all the behind-the-scenes, all the fun and banter before and after the show. So come watch the show live if you've already subscribed and downloaded. And of course, if you're going to watch the show live, you can also jump into our amazing IRC channel. There are some [01:10:30] amazing characters in there. The way you get there is to go to irc.twit.tv, and there you'll jump right into the TWiT Live channel and see some great characters each and every week.

(01:10:40):
You have TechDino, ChickenHead, Keith512, Laqua, Mike, all the characters in there each and every week. So definitely join them, have fun, and be part of the show. Now, I want you to hit me up every week. I'm on twitter.com/LouMM, or x.com, I guess you could say. Of course, [01:11:00] I'm also on Threads, LouMM over there, where I post all my enterprise tidbits. I'm also on Mastodon, LouMM at twit.social. I want you to direct message me, even on LinkedIn. I get a lot of messages; in fact, I got about 30 messages last week on LinkedIn just about the show, things people want us to cover, or even how to get started in the industry. Lots of great stuff, so definitely reach out to me.

(01:11:22):
I get to those each and every week. Plus, if you want to know what I do during my normal work week, definitely check out developer.microsoft.com/office. [01:11:30] There we post all the latest and greatest ways for you to customize your Office experience to make it more productive for you. In fact, if you have Microsoft 365 installed on your machine right now, check out Excel: there's an amazing Automate tab in Excel now, if you haven't noticed already. You can actually record macros, edit the code, and in fact rerun them by scheduling them, or with whatever trigger you want, in Power Automate. So there are a lot of powerful automation capabilities built right [01:12:00] into the standard stuff. I want to thank everyone who makes this show possible, especially Leo and Lisa. They continue to support This Week in Enterprise Tech each and every week, and we couldn't do the show without them.

(01:12:11):
So thank you for their support over the years. Of course, I also want to thank Mr. Brian Chee one more time. He's not only our co-host, he's also our tireless producer; he does all the bookings and the planning before and after the show. So Cheebert, thank you for your support over the years. And of course, before we sign out, I want to thank our editor for today, because they cut out all of our mistakes and make us look good. So thank you. [01:12:30] Plus, I want to thank our technical director today, Mr. Ant Pruitt. He does a lot of great things on TWiT, and I'm always curious to hear what's going on over there. So Ant, what's going on for you this week on TWiT?

Ant Pruitt (01:12:44):
Well, thank you, Mr. Lou. This week has been a little bit quieter for me, just for a few days, and I appreciate that, getting a little bit of breathing room. So I've been working on some stuff for Club TWiT, but quite frankly, sir, right now I'm just trying [01:13:00] to gear up for more high school football with my boy in his senior year. Looking forward to going to check him out tonight. And then I'm also looking forward to trying to make sure I can master this haircut. I thought I had it down pat, but according to Mr. Curt there, apparently I ain't doing it right. So we shall see. But yeah, appreciate the support, sir.

Lou Maresca (01:13:25):
Thank you, Ant. Well, until next time, I'm Lou Maresca, just reminding you: if you want to know what's [01:13:30] going on in the enterprise, just keep TWiET!
 
