r/cscareerquestions May 22 '24

AI will replace us all, but in the meantime my company has blocked all use of AI tools (copilot, chatgpt, etc).

I work for a 5000+ employee company. We are banned from using any AI tooling for anything company related.

At the same time, the executives are telling the world that we are AI-ready in marketing and client offerings.

Has anybody else had to deal with near total blackout of AI tooling?

626 Upvotes

289 comments sorted by

148

u/Samultio May 22 '24

Similar position; a C-level was very surprised when they visited and found out no one was using AI. Like, what did they expect?


65

u/Material_Policy6327 May 22 '24

Yeah the AI hype is overblown. I work on AI systems and yeah it’s been nice funding wise but holy shit does business not understand anything about it.

9

u/DirectorBusiness5512 May 23 '24

Maybe their intelligence is artificial

3

u/RaamShack May 23 '24

Average experience with business

11

u/[deleted] May 22 '24

Crafting media narratives that serve his interests is what Altman does best. Remember when the media was running stories about Airbnb killing the hotel industry while they were part of Altman's Y Combinator? That was what Altman wanted everyone to think: that Airbnb was the future and staying at a hotel was stupid. The reality? These services did have a modest negative impact on hotel revenues, but nowhere near what was portrayed by a media that just loves a good tech-upstart story.

8

u/terrany May 22 '24

Just like how they expect everyone else to be "middle class" like them, our exec asked us what we do when we maintain our boats/where we park them.

537

u/niveknyc SWE 14 YOE May 22 '24

You are the AI

61

u/Kryddersild May 22 '24

You can call me AI

13

u/Seth_Nielsen May 22 '24

My long lost pal?

10

u/RushN24 May 22 '24

I will call you Betty

2

u/WrastleGuy Jun 19 '24

Then it’s settled, you can be my bodyguard 

2

u/RushN24 Jun 19 '24

Thanks pal! I thought you were long lost.

12

u/chaoism Software Engineer, 10yoe May 22 '24

Allen Iverson is so iconic. I model my game after him

4

u/Comfortable_Storage4 May 22 '24

We talking about practice

3

u/returnFutureVoid May 22 '24

No AI. We’re talking about AI.


24

u/WishboneDaddy May 22 '24

Average Indian? I have been called worse. 🤣


4

u/JennyJtom May 22 '24

Lol Amazon tried that already.


339

u/PM_ME_SOME_ANY_THING May 22 '24

My company never picked it up. No big deal. I was doing fine without it. Didn’t they say people were putting proprietary stuff into ChatGPT which was then getting stolen?

153

u/DynamicHunter Junior Developer May 22 '24

It hasn't gotten stolen (yet); the issue is that it holds the sensitive company data people input, which can then be used for training, and companies don't want that. That's why a lot of companies have their own versions of tools like Copilot that protect the company's private info instead of using the public-facing models.

52

u/PandaCodeRed May 22 '24

I also assume it is for legal reasons; you can't really claim the data is confidential or a trade secret if the company is disclosing it to a third party without an agreement governing confidentiality and use of that data.


13

u/darthwalsh May 22 '24

Yeah, similar. Our internal portal for "chatGPT" is using some private OpenAI instance on Azure, presumably with a contract that they are not training on our data.
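For anyone curious what routing through a private instance looks like, here's a rough sketch of the request shape for an Azure OpenAI deployment (the endpoint, deployment name, and API version below are hypothetical placeholders); the point is that traffic goes to your own tenant's endpoint rather than the public ChatGPT service:

```python
import json

def build_azure_chat_request(endpoint: str, deployment: str,
                             api_version: str, prompt: str):
    """Build (url, body) for a chat completion against a private
    Azure OpenAI deployment. All names here are placeholders."""
    url = (f"{endpoint}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    body = json.dumps({"messages": [{"role": "user", "content": prompt}]})
    return url, body

# Example: requests never leave your tenant's endpoint.
url, body = build_azure_chat_request(
    "https://my-corp.openai.azure.com", "gpt-4o-internal",
    "2024-02-01", "Summarize this ticket")
```

Whether the provider trains on those requests is then a matter of the contract, not the API shape.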

7

u/ImNotALLM May 22 '24

Fyi for anyone curious, there's a toggle in the settings of chatgpt to opt out of using your chats as training data.


4

u/[deleted] May 23 '24

GitHub copilot (for Business) to my knowledge does not retain any code after it has returned a response.

19

u/CurtisLinithicum May 22 '24

...which is precisely why it was banned at my work, although being corp, it's generally high on proprietary/private stuff.

8

u/bwatsnet May 22 '24

I'd bet people still use it, and get promoted over people who don't

4

u/ImpoliteSstamina May 22 '24

I work with multiple people who've been dumb enough to post proprietary stuff on public forums, and we only found out because they then bragged in meetings about using "community resources" to solve problems.

We're not banned from AI tools but I wouldn't blame management if they did.

12

u/[deleted] May 22 '24 edited May 22 '24

I think that was a concern, but I don't know if I have heard accounts of proprietary software being found in ChatGPT… yet

20

u/MyRedditAccount1000 May 22 '24

8

u/Western_Objective209 May 22 '24

So what actually happened? It looks like an employee just pasted some company info into chatgpt?

14

u/tebasj May 22 '24

It looks like an employee just pasted some company info into chatgpt?

yes and now chatgpt databases have private samsung info in them.


2

u/[deleted] May 22 '24

Oh right I forgot about that


6

u/ImSoCul Senior Spaghetti Factory Chef May 22 '24

Most larger/enterprise-level companies aren't using naked ChatGPT. I think people are also overly paranoid about proprietary code. Yes, you should keep proprietary code protected to cover your ass, but I'm pretty sure another company could have full access to our codebase and still not be able to spin up the proper infrastructure to support it, let alone replicate our business. As was famously said, "Code is a liability, not an asset."

8

u/267aa37673a9fa659490 May 22 '24

It's a tool; how you use it makes all the difference. There are plenty of AI tools that only have access to what you explicitly feed them.

You can put proprietary stuff into Stack Overflow too if you want, but you don't see that being banned.

9

u/naillstaybad May 22 '24

They are trying to limit risks, I guess; my company also bans things like online JSON prettifiers lol

2

u/[deleted] May 22 '24

If it is known you would get in trouble 


5

u/gk_instakilogram May 22 '24

How is it different from putting proprietary stuff into GitHub, GitLab, or Atlassian? Or any other cloud tool, for that matter?

7

u/SanityInAnarchy May 22 '24

I guess it depends what they're doing with it. It's one thing to trust Github to just store your source code and give it back to you when you ask for it. It's a bit different if they're also training AI models on your source code and then let other companies use those models, and you have to trust those models not to accidentally leak stuff when they can barely even pass the apple test.

That said, I don't think they're doing this. IIRC they say they only train on publicly-available repos -- it may access your private repo when talking to you, but your repo isn't part of the general training corpus.

And there's good reason for them not to train on private data, because it used to be easy to convince Copilot to output its training data. That's a problem even if it's only trained on open source -- at least it's not a leak, but you don't want to accidentally be copying some open source code and violating its license. In their defense, they've basically promised to defend you in court if this comes up again.

6

u/[deleted] May 22 '24

Well, you can host your own GitHub instance with an enterprise account.

1

u/gk_instakilogram May 22 '24

Most enterprises use the cloud and do not host their own instances. Also think about things like Azure, AWS, and Google Cloud; with those, you also run proprietary data and code on other companies' infrastructure.

5

u/IBJON Software Engineer May 22 '24

That's completely different because those are controlled environments and aren't being farmed for data. Microsoft can't just go into your Azure tenant and scoop up whatever data it wants because it's in their data center.

Ditto for enterprise git hosts. 


1

u/bubblehead_maker May 22 '24

The better stack overflow?

1

u/Strong-Piccolo-5546 May 22 '24

Google has gotten bad. I find ChatGPT and Copilot (which searches the web) are better than Google when I need to look something up.

73

u/PyroRampage May 22 '24

Likely because the laws around copyrighted training material are still a WIP and they don't want to risk production code being mixed with AI code. What if their whole codebase becomes GPL!

28

u/terjon Professional Meeting Haver May 22 '24

This is going to be hilarious when it happens as I bet that there's thousands of products already polluted with AI code.


65

u/itsmeart May 22 '24

Yeah same, all AI tools are blocked in our company (banking)

3

u/I_Have_Some_Qs Software Engineer May 22 '24

At my bank I think the only AI code writing tool we are allowed to use is co-pilot, and only for completions and not for things like the chat.

94

u/[deleted] May 22 '24

Nope, my company actually encourages it and checks adoption rates across the business.

Not sure if it's the right move; forcing it on people who don't want to use it seems just as bad to me.

22

u/WishboneDaddy May 22 '24

Do they have any security guardrails against dumping IP, business docs, and customer data into chat prompts?

11

u/[deleted] May 22 '24

Apart from whatever settings come with the tools I don’t know. The IT dept (separate from devs) + Higher ups “vetted” everything is safe to use across the org. Whatever that means.


14

u/serg06 May 22 '24

Why would it matter? ChatGPT Enterprise keeps your data private: https://help.openai.com/en/articles/8265053-what-is-chatgpt-enterprise#

5

u/WishboneDaddy May 22 '24

I wonder how many employers here are using chatgpt enterprise. Don’t they charge $25 per monthly license? Big yikes if your expenses are tight.

7

u/darthwalsh May 23 '24

Yikes? Billing $1 per day per employee? How many minutes per day of work does the AI need to save, in order to pay for itself?

If an employer switches from in-person offices to remote work, how many dollars per employee did they free up in the budget?
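The break-even arithmetic really is tiny (the loaded hourly cost below is an assumed, illustrative number, not something from the thread):

```python
# Break-even math for a $25/month AI license (illustrative numbers).
monthly_license = 25.0          # $ per seat per month
workdays_per_month = 21
loaded_cost_per_hour = 60.0     # assumed fully loaded employee cost, $/hour

cost_per_workday = monthly_license / workdays_per_month            # ~$1.19/day
minutes_to_break_even = cost_per_workday / (loaded_cost_per_hour / 60)

print(f"${cost_per_workday:.2f}/workday; pays for itself after "
      f"{minutes_to_break_even:.1f} saved minutes per day")
```

At those assumptions the tool pays for itself if it saves a bit over one minute per day.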


13

u/Farren246 Senior where the tech is not the product May 22 '24

I'm genuinely curious, what kind of company is this that forces AI onto its employees? My company can't even force its buyers to learn Excel...

24

u/TedW May 22 '24

Employees are easier to force than customers.

14

u/Alternatezuercher May 22 '24

There is a company role called buyer.

8

u/TedW May 22 '24

That.. makes a lot more sense.

2

u/Farren246 Senior where the tech is not the product May 22 '24

:D

Thankfully, our accountants all seem to be fairly proficient in Excel.

6

u/LeetcodeFastEatAss May 22 '24

I've heard Accenture is all in, thinking AI is going to make them rich by keeping revenue steady and cutting costs significantly through higher productivity per employee. They are collecting data on how many story points employees are "saving by using AI." Basically, any time a story completes under the estimate, it was because of AI.

5

u/[deleted] May 22 '24

Believe it or not, healthcare. Scary, I know.

5

u/Farren246 Senior where the tech is not the product May 22 '24

"Alexa, tell me the medical history of Jimmy McDermot, and leave nothing out!"

(Proceeds to make up nonsense that disqualifies poor Jimmy from having his heart surgery covered.)

4

u/ProfessionalBrief329 May 22 '24

OP didn’t say force but encourage… big difference

4

u/Farren246 Senior where the tech is not the product May 22 '24

Encouraging, then checking up on you to make sure you adopted it, ain't encouraging.

5

u/Swing-Prize May 22 '24

I work at a big financial firm, and AI is viewed as a potential productivity booster, so everyone is encouraged to try it out in their workflows and provide feedback. It's part of upskilling. An internal dataset is also being fed in, so we won't need to navigate through many wikis guessing the right categories/keywords.


20

u/bnasdfjlkwe May 22 '24

Blacked out until the company onboards a solution that can handle proprietary data.

26

u/[deleted] May 22 '24

I have heard from a lot of the friends I met in the field that their IT directors are blocking AI. I use it daily at my job; it's a lifesaver for coding, as I'm a systems engineer who doesn't exactly specialize in that.

13

u/betterworldbiker Senior Technical Product Owner May 22 '24

I use it constantly; if they block it, I might honestly leave, as it has increased my productivity so much.

3

u/[deleted] May 22 '24

I'm our lead systems engineer so I don't have to worry about it lol

3

u/betterworldbiker Senior Technical Product Owner May 22 '24

I don't really code, just do architecture and design work and documentation, but it's helpful for getting a rough draft going on stuff. Definitely cannot use it for a final product, but it is what it is.

10

u/Dreadsin Web Developer May 22 '24

I worked at one of the biggest companies, and those tools were mostly banned. It kind of made sense though: they didn't want us putting any sensitive company information into ChatGPT, so I get it

Eventually they said we could use an AI tool they were developing. It wasn’t very good

4

u/dadvader May 22 '24

Yeah, I find that most companies only ban AI because they don't want their internal data mixed into a public model, and would prefer to make one themselves so they can be sure internal data stays internal.

The promise of AI basically guarantees that no company would block its usage out of the goodness of its heart or some bold humans-first strategy. They block it because they don't want to risk data leakage. That's all there is to it.

8

u/squeeemeister May 22 '24

Same; I work in identity security, and dumping our code into Copilot is a big no-no. However, these AI companies have a proven track record of asking for forgiveness instead of permission, so it's fun pretending all our code hasn't already been ingested, given that it's all on GitHub.

We aren't touting a lot of useless AI vaporware features, yet. We have one product with a custom query language, and dumping that into a small LLM has actually proven useful: end users can just tell it "give me all users who logged in on X" and it will spit out the query and run it.
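One reason that natural-language-to-query pattern can be kept safe is that the generated query is validated before it ever runs. A minimal sketch (the query language and grammar here are entirely made up, not the commenter's actual product):

```python
import re

# Only execute model output that matches a tiny allowed grammar:
# "users logged_in on YYYY-MM-DD". Anything else is rejected outright,
# so a hallucinated or malicious query never reaches the backend.
ALLOWED_QUERY = re.compile(r"users logged_in on \d{4}-\d{2}-\d{2}")

def safe_to_run(generated_query: str) -> bool:
    """Return True only if the LLM's output parses against the grammar."""
    return ALLOWED_QUERY.fullmatch(generated_query.strip()) is not None
```

Validate-then-execute keeps the LLM in the role of a translator rather than an agent with direct access to the data.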

24

u/MyPhantomAccount May 22 '24

We're the same. Massive company, blanket ban on it. It's like steroids in the Tour de France: everyone is using the tools, no one is admitting it

2

u/JustSomeGuy131 May 22 '24

Underrated comment lol, same at my company!

32

u/r8juliet May 22 '24

First, AI will not replace us all; AI will replace those who are unable to adapt. I'm at a company that, at first, blacked out most AI tooling. There are obvious security concerns with how much data is captured by these tools, so I can't hate on a company for wanting to evaluate a new tool. There are 2 approaches you can take (probably more, but I'm not feeling very creative atm). 1. Whine and mumble under your breath with passive-aggressive comments. 2. Develop an adoption plan and schedule a presentation with a decision maker.

My approach: We surveyed current approved tooling, clearly laid out the risk/rewards, then proposed a security training plan for engineers to complete prior to being authorized to use it. I work for a large defense contractor and they were very receptive to this approach. If you leave it in the hands of departments that have zero stakes, it will just sit on a desk. People are generally enthusiastic to approve things when the boring work is done for them. If your company is completely shutting you down you need to either work on your salesmanship or switch to a company that can get out of their own way.


6

u/doktorhladnjak May 22 '24

They’re clued in enough to not be giving their data away to big AI companies for free. Or actually probably paying them to do so

4

u/m4bwav May 22 '24

Legal departments everywhere are scared of the legal and trade secret implications of AI.

If you enter a trade secret into an AI and that AI is not totally locally hosted, it seems possible that someone might be able to extract that secret.

4

u/double-happiness Software Engineer May 22 '24

I was poking fun at our tech lead recently because I said I could tell he'd got some code he was sharing and discussing from Copilot, because of the smiley face and rocket emojis at the end of the accompanying explanation 🤣

But actually, he showed me how he was getting code out of it, and I started to do the same, so it's a win really.

5

u/shanz13 Student May 22 '24

at this point , ai is like google for me. super helpful and saves a ton of time

3

u/patrickisgreat Senior Software Engineer May 22 '24

AI won’t replace us all any time soon, if ever.

5

u/fknbtch May 22 '24

my company asks that we use it as much as we can for as many different tasks as we can. we're a software company though, and for the codebase we're using copilot.

9

u/SinnPacked May 22 '24

No outright ban at my office, but we're definitely instructed not to show it anything proprietary.

I wouldn't mind an outright ban seeing as I've made limited use of it and find it mostly just wastes my time.

3

u/nitekillerz Software Engineer May 22 '24

I feel like they're going about it the wrong way. Either they can't afford the licensing for the copyright-safe stuff, which is fine, or they don't see the value in it. But you can bet your ass you're not stopping employees at a company of that size from using it directly or indirectly. My company encourages it because we have Copilot licenses etc., but I feel like companies that ban it will have a hard time doing it correctly.

7

u/kondorb May 22 '24

Lol, ChatGPT is a daily-use tool now, like Google and SO were until recently. Both quickly got shittified, so ChatGPT now replaces them, for the same exact use case.

Your company is shooting itself in the foot.


2

u/kog May 22 '24

People at your company were almost certainly putting company IP into their prompts. At that point, stopping it becomes pretty important to most companies.
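A common mitigation short of a full ban is an outbound pre-filter that blocks prompts containing obvious secrets. A minimal sketch, assuming a hypothetical proxy sitting between employees and the AI tool (the patterns and hostname are illustrative):

```python
import re

# Patterns a hypothetical egress proxy might scan prompts for before
# they leave the company network. The AWS key shape is real; the
# internal hostname is a made-up example.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def findings(prompt: str) -> list[str]:
    """Return the names of all patterns matched in the prompt."""
    return [name for name, pat in PATTERNS.items() if pat.search(prompt)]

def is_blocked(prompt: str) -> bool:
    """True if the prompt should be stopped at the proxy."""
    return bool(findings(prompt))
```

Regex scanning only catches the obvious cases, which is part of why some companies opt for a blanket ban instead.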

2

u/MonkAndCanatella May 22 '24

That's hilarious. What a fucking shitshow. That said, we use WebStorm, and it has some kind of AI model for autocomplete thrown in, and when it works, it's fucking awesome, not gonna lie. Used as a really intelligent autocomplete instead of something that will write your code for you, it's really useful. I'd say it gets it right maybe a third of the time; I'm talking about filling in imports and other kinds of annoying stuff like logging.

2

u/Jamese03 May 22 '24

They don't wanna pay for licensing something they can keep secure, and don't want company code sent directly to OpenAI

2

u/[deleted] May 22 '24

AI is a thug that steals intellectual property. That's why your company banned it.

2

u/amitkania May 22 '24

my company blocks all that and also stack overflow and github


2

u/VanquishShade May 22 '24

The number of comments on here about why companies aren’t allowing AI tooling is ridiculous 

I think it’s pretty obvious to the OP (and the rest of the world) what the reasons are for companies blocking these tools 

 The OP’s just frustrated at the irony

2

u/jofalves May 22 '24

Yep, been there! There are many companies in Europe also banning AI tools for any work-related tasks (mostly due to privacy reasons). Meanwhile, execs from such companies are out there hyping up as "AI-ready" in all their marketing and client talks. It's super frustrating and feels pretty hypocritical. You're definitely not alone in this!

2

u/Immediate_Studio1950 May 22 '24

The intensive use of AI kills our genius... it completely annihilates the effort we should be making to achieve our tasks. Spending 7 hours on the clock traditionally solving a complex algorithmic or basic mathematical problem, without resorting to AI, takes huge cognitive effort and deserves respect, while AI involves hardly any.

2

u/Abject_Scholar_8685 May 22 '24

I am aware of several large companies where employees are not allowed to use AI tools of any kind. I'm not aware of many that explicitly allow or support it.

2

u/drumDev29 May 22 '24

As someone who has used both Copilot and Codeium: you aren't missing out on much. Most of the time, fixing the output is just as slow as, or slower than, doing things yourself the first time.

2

u/Sudhanva_Kote May 23 '24

My company created their own interface with an OpenAI subscription so company data doesn't get used for training (or something like that), so instead of the openly available ChatGPT we have to use the custom one. It was what I would describe as "good enough". We also had access to GitHub Copilot, which was helpful (sometimes).

2

u/emulatorguy076 May 23 '24

So I was on the AI team at my previous company, working on a big project involving the GPT API that was saving us multiple millions of dollars in fees annually, and it suddenly stopped working. Apparently the people at the top just forgot that we had an AI team and banned even the GPT API from working on our machines. It delayed the whole thing by 3+ weeks, and the only compromise they would offer was that we had to use a different laptop, outside company policy, on a different network we were given a data card for 💀💀💀 Needless to say, I didn't stay long after that debacle.

2

u/Ozymandias0023 May 23 '24

Your company has done you a favor

2

u/Senior-Pro May 23 '24

Prompts on consumer ChatGPT can contribute to its training data unless you opt out. Be careful with the information you share, as it may be retained and used for future responses. Many companies have faced issues after employees pasted sensitive code or information into the platform.

5

u/Alex-S-S May 22 '24

If you have an internally developed tool it's fine. The problem is that ChatGPT and the rest of the crap steal data and idiots would put internal company code into those things. People got fired for this and rightfully so.


4

u/WrastleGuy May 22 '24

It’s actually awesome when a company blocks it, because you just use it on your phone and are super productive compared to everyone else.

7

u/RealityWard742 May 22 '24

So essentially break the rules, feeding information to an outside AI model.

2

u/conconxweewee1 May 22 '24

My company wants us all to be using it, I actually just set up copilot a couple weeks ago with the chat feature. It’s honestly really nice!

2

u/iNoles Looking - Experienced Software Engineer May 22 '24

if managers want to kill their workforce for AI, it is on them.

2

u/taleofzero May 22 '24

Yeah, it's called the 20 years I spent coding without AI.

We have the option of using Copilot and it's like an auto complete that sometimes gives good answers and sometimes gives silly ones.

3

u/pydry Software Architect | Python May 22 '24

AI will replace us, but also, thanks to the demographic crisis, you can't retire, ever. Somehow they've gotta square that circle.

3

u/TedW May 22 '24

They just don't want us retiring before they can replace us.

AI isn't there yet, but I expected art to be harder than code, so it's probably closer than I think.

2

u/jan04pl May 22 '24

No, art is actually simpler than code, because art doesn't need to be 100% perfect. If a few pixels are off, it doesn't matter. Look at any AI-generated image: you will always find something off on closer inspection, and the result will never be the same for the same prompt.

With code however, the code needs to be 100% correct every time, else it won't work or worse, screw something up. That's hard to achieve with the transformer architecture as it's all about probability and not actually understanding the meaning of the code.

2

u/TedW May 22 '24

I expected it to be harder to identify a carrot, let alone be able to draw one that's wrapped around a pencil. But we figured out how to get AI to draw that anyway, and much sooner than I expected.

Code constraints are very well defined though. An AI can (eventually) understand the rules and best practices. If AI code does what we asked but has a few unexpected side effects, well, we can clarify those with another request. I agree the consequences are high, but it's easier for humans to proofread than to write. And we can ask for additional test cases to avoid specific scenarios.

2

u/jan04pl May 22 '24

The current "AI" (LLMs) does not understand anything. It's trained on input data and applies probability weights in order to process and output text. It's very good at it, as it got almost the entire internet and books etc as training data, but that data is created by humans and finite. It will struggle with anything after it's training data cutoff, and pasting in the whole documentation of a new framework for example only gets you as far as the context window. Start a new conversation and it forgets everything you pasted in before.

I suggest you read up on how LLMs work under the hood. There is nothing magic, fancy or intelligent there.
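The "probability weights" description can be made concrete with a toy next-token step (the vocabulary and scores below are invented for illustration; real models do this over tens of thousands of tokens):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)                         # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary and scores for the token after, say, "def close_file():"
vocab = ["return", "pass", "raise"]
logits = [2.0, 1.0, 0.5]

probs = softmax(logits)
next_token = vocab[probs.index(max(probs))]  # greedy decoding picks "return"
```

The model never "knows" what `return` means; it just assigns it the highest probability given the context, which is exactly why it degrades outside its training distribution.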



1

u/Efficient-Lab1062 May 22 '24

My company has embraced it. Obviously don’t put proprietary stuff in there but I use it pretty often as a junior dev to help with weird errors or explain a bit of code I don’t understand when I come across it.

1

u/Afraid-Department-35 May 22 '24

We've been using Copilot. There was a certain set of rules we had to agree to before we were allowed to use it, which prevents us from using it in certain cases, but it's not a full blanket rule. Mostly we can't feed it PII, so naturally some parts of our development cannot use AI tools.

1

u/p3wong May 22 '24

It is quite typical until your company figures out the legal issues around IP, costs, and internal billing. They don't want people pasting code into ChatGPT without knowing it can be reused. We got Copilot a few months ago and it's really nice and useful.

1

u/[deleted] May 22 '24

My company gave out Copilot licenses to everyone, and I get my GPT sub comped. Obviously, people shouldn't be putting sensitive info into it.

1

u/[deleted] May 22 '24

We’ve been banned everywhere I’ve worked except my current company

1

u/wh1t3ros3 May 22 '24

Seems like the right step to make while we figure out what data is protected under copyright law as it relates to training models.

1

u/scoby_cat May 22 '24

My company doesn’t use generative AI at all to make our code for liability reasons. But I do use it in my personal projects

1

u/[deleted] May 22 '24

My company develops AI tools and has requested we test them in our workflows. Polar opposite.

1

u/thephotoman Veteran Code Monkey May 22 '24

After working a little with Copilot, I’m underwhelmed. It’s less of a developer replacement and more of a “I don’t need to know how to write shell one-liners or editor macros anymore” thing. But it consumes more power and memory than either of those things.

Since I do know how to write shell one-liners and editor macros, it’s basically useless to me.


1

u/AaronKClark Unemployed Senior Dev May 22 '24

Do you work for CrowdStrike?

1

u/Antique-Special8024 May 22 '24

Has anybody else had to deal with near total blackout of AI tooling?

Yes. Since we don't know what AI systems do with the data they are given access to, accidentally giving them access to customer data would automatically constitute a data breach, which triggers a legal shitshow.

We have internal projects exploring whether it's possible to run AI systems in-house and control what they do with our data, but until that's deemed possible and set up, there's a blanket ban on using any form of AI tool. (Or really any tool that isn't pre-approved.)

1

u/jakl8811 May 22 '24

Companies don't want to enter their data into a 3rd party, especially a 3rd party that trains their models on those inputs. It's just a massive risk.

Most companies still pursuing AI are using it in FedRAMP environments or instances where the 3rd party is unable to access prompt info.

1

u/[deleted] May 22 '24

There are safe ways to use it, but we both know people are going to be pasting full proprietary shit into chatgpt regardless of the policies they put into place.

1

u/chervilious May 22 '24

It's like storing data in the cloud: generally "safe", but not best practice, and in some cases there are even laws that prevent it.

Running an LLM locally can be an option for some.

1

u/JonathanL73 May 22 '24

Lol AI only to benefit the employers and not the workers.

1

u/vimommy May 22 '24

Same, which is understandable because it's insurance. They're working on their own internal tools and environments though so it's just a matter of time

1

u/precocious_pakoda May 22 '24

Tbf, the AI does increase productivity. I was recently building a VS Code extension and GitHub Copilot was able to generate pretty much 70% of the shipped code. It's not perfect, but I massively reduced my time spent writing code.


1

u/naillstaybad May 22 '24

Yea, they just don't want you posting company stuff in external AI tools.

They're working with AI tool companies to provide a standalone tool that can only be used within the company.

1

u/SuedeAsian Software Engineer May 22 '24

Haven't dealt with AI tooling blackouts, but I'm not really worried about AI replacing us yet.

If Yann Fucking LeCun isn't worried then neither am i

1

u/Strong-Piccolo-5546 May 22 '24

They don't want AI tracking what you are doing; it's a legit fear. Look at Scarlett Johansson threatening to sue OpenAI for using her voice.

I use it as a Google assist. Google has gotten bad, but Copilot and ChatGPT help out a lot.

1

u/cballowe May 22 '24

The challenge for companies lands in two areas. One is understanding what the model was trained on - is there any chance of the model returning code / text / etc that you're not licensed to use (even open source stuff has restrictions).

The second is "what might your queries/prompts leak to the provider of the tool". Is there any feedback into the model based on what you do and could that leak more to other users?

Various lawyers and risk management people are trying to understand those things more.

A company could be ready for things without using those tools - building their own models tied to the business goals and business data that they have rights to, for instance.

1

u/sarctechie69 May 22 '24

AI is completely blocked to the point that even bing is disabled at my work lmao

1

u/Dr_Semenov May 22 '24

Only an AI could ask to use AI at work. LOL

1

u/RealityWard742 May 22 '24

I don't use it, as I can work faster without it. I don't like having to digest what it spits out and then fix it; that slows things down.

1

u/krsCarrots May 22 '24

The only AI I know is Allen Iverson. Can we talk about practice now?

1

u/Remarkable_Status772 May 22 '24

OMG!

You mean you'll have to learn to do your own research? And use your own, superior, biological intelligence?

Waaaaaah!

1

u/C3PO_1977 May 22 '24

Have you read any of the code a chatbot returns? I don't think we have anything to worry about.

But using it does help with naming variables and files. Instead of consoleApp500 I actually have real names for projects, and instead of my favorites, var1 or num2, I get actual variable names. That's really all I can use it for: helping me name things in my code

1

u/-Quiche- Software Engineer May 22 '24

My company has its own licensed version of GPT-4 and it sucks. It couldn't even output the correct rule for a simple GitLab runner stage.

1

u/No_Loquat_183 Software Engineer May 22 '24

Why are they blocking AI tools? GPT4 can't solve anything super complex, but it helps me develop new features much faster and can help with a few esoteric issues. They're just doing you guys a disservice tbh. Now, this would make sense if someone in your company literally copy and pasted code, but I would hope some people have at least half a brain to not do that.


1

u/alisonstone May 22 '24

The issue is mostly a legal/compliance one. Some older people may remember when "the cloud" first became a thing, it was banned at most large corporations too. At some point in the near future, we will get corporate/enterprise versions of all these AI products where it is either licensed to run locally on the company servers or there is some corporate contract where the product is firewalled within OpenAI, Microsoft, or Google and sensitive data or information cannot be leaked.

It just takes a long time for the corporate versions of these products to develop and become accepted. It takes a lot of lawyers on both sides and standardized auditing/security processes need to be developed and accepted by the corporate community before most companies are willing to sign on.

1

u/C3PO_1977 May 22 '24

Chatbots use web crawlers, right? If they use data sets, it's not hard coded, right?

Who the hell hard codes data? That's just crazy…

What data could you possibly put into a Chatbot…I guess if you are using it for other purposes besides helping with programming.

If you are doing research or writing a review paper, using a chatbot is not a good idea, because you must have good sources. You must know how to cite primary sources, which takes intuition. No matter how advanced AI becomes, computers and the software that controls them will never have intuition and will therefore be limited.

And when it comes to chatbots, they cannot access files saved on a physical drive unless you program them to, and even then I don't think it would work; it will just tell you it can't access data on a physical drive. So I'm not sure what the context is with companies and data. To do anything with data you need a massive amount of it, and usually it is stored in a DB or data set on a physical server somewhere… maybe in Iceland. Who knows. Even if it's in the cloud, there is still a physical drive somewhere that holds the data.

Companies with on-premise data centers should worry about customer identity and sensitive information, but organizational policies are not legally binding, and some of this sensitive information is not necessarily confidential in a legal context. It's not criminal to accidentally put someone's name and hair color somewhere with public access. I can understand SSNs, credit cards, and documents like Social Security cards and birth certificates, but this brings up a question that needs to be addressed: why does an organization that is not a health department or vital records department really need all that data? It's almost like corporations want to be a government and have control and authority.

It is strange, really… in order to buy anything, read an article on the internet, or just apply for a job, you have to give a sample of your DNA

And some of these businesses are unorganized and inconsistent, break EPA policies, and OSHA would have a field day with some places, but they are more worried that some chatbot is stealing data that nobody fucking cares about.

Am I the only one who thinks this is unproductive, illogical, and just stupid…?

Just saying

1

u/Wrong-Idea1684 May 22 '24 edited May 22 '24

I worked for a German company as a contractor. We were told it's fine to use ChatGPT for general stuff, but pasting source code or business flows in there would get my contract terminated. Not sure how they could check if I did, but I followed their rules anyway.

Germans are pretty anal about their data (which is a good thing).

1

u/lostAndN0tFound May 22 '24

On the contrary, my company (Fortune top 10) is forcing us to use GPT-4 and Copilot. In my org, if someone is found coding without it they are encouraged to use it (more like politely forced), and we had to take trainings on how to use it.

I think enterprise Copilot and an enterprise, on-prem version of GPT-4 have their own benefits: your code and prompts are not used as another data point for the outside world. Idk if it's true or not, but supposedly your codebase doesn't become training data for the model for anyone outside your organisation.

1

u/dadvader May 22 '24

I don't use AI, particularly GPT, for code. I will use it to explain some terminology and let me research further, though.

The AI tool that I really use a whole lot more is GitHub Copilot. The auto-complete is a fucking game changer. 95% of the time it knows exactly what mundane boilerplate comes next and generates just that. Amazing, amazing tool.

1

u/Itsalongwaydown Full Stack Developer May 22 '24

ChatGPT is blocked at my work but I just use bing search with the built in AI search function. Works basically the same way

1

u/biznovation May 22 '24

Your company is doing so for good reasons, such as data security and the ethical and appropriate use of AI. It would be foolish to use external AI tools in business processes, for a litany of legal and reputational concerns. AI usage in business requires a substantial level of governance and security considerations to be in place.

1

u/Drayenn May 22 '24

My company banned it until it was officially approved. I figure they were scared of data theft? I can't live without Copilot now!

1

u/jeffweet May 22 '24

Typical useless knee jerk reaction that will drive usage underground and make it harder to control

I just don’t understand

1

u/Legal_Being_5517 May 22 '24

It won’t replace developers, people really underestimate how complex some applications can be

1

u/CromulentBovine May 22 '24

No real AI or LLM allowed at my work either due to its confidential nature (accounting). Doesn't stop people from relabeling basic excel functions as AI though. Mostly just a trendy buzzword that makes executives happy.

1

u/MrJesterton May 22 '24

The concern is the careless or intentional dumping of sensitive information (PII, HIPAA-protected data, trade secrets, proprietary code, etc.) into public large language models.

Just use your personal phone, take a visual summary, and wait for them to realize they were way too strict from the start (rightfully so in some environments).
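For what it's worth, the usual middle ground is a sanctioned internal proxy that redacts obvious identifiers before a prompt ever leaves the network. A toy sketch of that idea is below; the patterns and names are illustrative only, and a real DLP filter is far more thorough:

```python
import re

# Toy patterns for two obvious identifier types; a real data-loss-prevention
# filter would cover many more (names, keys, account numbers, etc.).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scrub(prompt: str) -> str:
    """Replace matches with a redaction tag before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

print(scrub("Customer jane@corp.example, SSN 123-45-6789, reported a bug."))
```

Regex scrubbing obviously can't catch trade secrets or proprietary code, which is why many companies block the tools outright instead.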

1

u/Rolex_throwaway May 22 '24

You should certainly not be using any AI tooling outside of company subscriptions and instances. It’s common sense for them to block.

1

u/mohd_sm81 May 23 '24

To me (I could be wrong)...

This contradiction between what executives say and companies' actual internal policies is due to a few things (again, I could be wrong; this is just me trying to logically analyze the situation):

1) Increasing investor interest in anything AI, while driving down potential employees' salaries by telling them "you are not THAT needed anymore"

2) For use of AI in a commercial piece of software, there is a) the fear of shipping garbage code that a more mature understanding of business needs would produce better, and b) the fear of lawsuits, e.g. the latest Scarlett Johansson situation.

3) AI isn't going to replace humans; quite the opposite, it is going to increase productivity... and I am saying this while getting my second master's in data science, in which we mostly learn machine learning and deep learning.

4) Conflict of interest: CEOs (e.g. NVIDIA's) are just saying what they say because they want to sell their main product, GPUs and TPUs... how else can we explain it if their toolchain, compiler, and framework people are still working for them and paid dearly?!!! It just doesn't add up.

There could be more reasons, but for me this AI and ML trend is just too much hype imho.


1

u/WalterWriter May 23 '24

My wife (engineer, employed but looking, hates AI): WHAT COMPANY?

1

u/Other-Progress651 May 23 '24

Your company believes its employees are morons. That's unfortunate

1

u/prodev321 May 23 '24

Any company-confidential data shared with AI systems hosted outside your company's network will be stored by those systems, and others might be able to access it. There are ways to set up AI within the company's network and make it available only to employees; you should pitch that approach so you can make use of AI tools.
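For a sense of how little changes for developers, self-hosted deployments typically expose an OpenAI-compatible HTTP API on an internal hostname, so prompts never leave the network. A minimal sketch, assuming such an endpoint; the hostname and model name here are made up:

```python
import json
from urllib import request

def build_chat_request(base_url: str, prompt: str, model: str = "internal-llm"):
    """Build (but don't send) a chat completion request for a hypothetical
    self-hosted, OpenAI-compatible endpoint inside the corporate network."""
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The prompt stays on llm.intranet.example instead of going to a public API.
req = build_chat_request("https://llm.intranet.example", "Summarize this design doc")
```

Because the wire format matches the public APIs, existing tooling can usually be pointed at the internal host with a one-line base-URL change.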

1

u/DirectorBusiness5512 May 23 '24

My company just gives us AIs: Anonymous Indians (from revolving-door WITCH companies. Were it not for the FTEs, none of the people who wrote the bulk of our team's current codebases would be at the company right now lol)

1

u/wowredditisgreat May 23 '24

We've embraced it at my company. We deploy both our own chat interface and our own AI model, and we use Copilot for eng. It's made me marginally more productive overall, but for debugging specifically I've found I get to the solution much faster than I used to.

1

u/Almagest910 May 23 '24

If AI replaces us all, it will probably replace the work we do with other work that we can't use AI for. If you look at history, every disruptive technology has only replaced what used to require human labor with non-human labor, and people just moved on to other types of work. We went from farming to factory work; then that got automated and a lot more people could spend time on less physically demanding work. We might get another revolution like that one later.

1

u/makonde May 23 '24

You can't really stop people from using it, though. You can maybe stop it on company machines, but people will still use it somewhere else and move the code over.

I think for legal reasons it's better if they say they don't allow it, though.

Most code has little proprietary value; as long as you are not putting private user data in, it seems fine.

1

u/UpsytoO May 23 '24

In my opinion AI is more of a marketing tool than anything else at the moment. There will be some integration of it sooner or later, but nowhere near what's being marketed or spread on mainstream news.

1

u/[deleted] May 23 '24

We were restricted until our Info Sec team reviewed options and determined which platforms were IP-safe. We've since been authorized to use GitHub Copilot and the enterprise version of Microsoft Copilot. 

1

u/[deleted] May 25 '24

Well good news for you: you can’t be replaced by AI if it’s not permitted in your work place. Enjoy the lack of frenzy right now.

1

u/Amadis001 May 26 '24

This is common at high-tech companies. The lawyers are busy figuring out what it means for them if their data is uploaded to the cloud, content is created by third parties, etc. Eventually, they will figure it all out.

1

u/Starship1617 Apr 29 '25

My Company: AI is the future, automate, automate, automate! AI! AI!

Me, using Copilot on my laptop to make my life easier: *Copilot gets removed from my company, and anybody who dares to use it needs VP approval.*

I want to reiterate the point of using it to make *my* life easier, not my company's. The dumbest thing anybody can ever do is use AI to automate the things they do, tell the company it has been automated, and "celebrate" this achievement. This is what companies ultimately want: they want you to automate yourself out of a job. You keep doing this more and more over time, and eventually you're no longer needed. My own boss once showed us statistics on how a company-made chatbot solved over 90% of Help Desk questions or whatever it was; his own words: "I wouldn't even need those Help Desk jobs anymore."

Unless people wake up, this is where the world is headed--to get you unemployed.

When I'm using AI at work, I'm using it for ME and to make MY work easier. I'm also telling nobody that I'm using it. I can't be the only one who doesn't see all of this right?