r/cscareerquestions Dec 08 '23

Does your company pay for your ChatGPT Subscription?

It's another tool to help us work more efficiently, is it not?

0 Upvotes

16 comments

21

u/captain_ahabb Dec 08 '23

Pretty sure I would get fired if I put our code into an unvetted third-party cloud tool.

12

u/TonyTheEvil SWE @ G Dec 08 '23

I, a Google employee, am not even allowed to put code into Bard.

1

u/CallMePyro Software Engineer - Google Dec 08 '23

go/duckie

4

u/badnewsbubbies Dec 08 '23

No, but they did prohibit its use entirely for anything aside from generic brainstorming.
Putting confidential internal business information or code into a third-party tool that has had public security issues just doesn't seem like something they want to do, ya know?

4

u/[deleted] Dec 08 '23

r/localllama

Run your own if need be. I figured it out after a couple days.

2

u/[deleted] Dec 08 '23

It’s blocked from our company. I have my own subscription tho

2

u/ptjunkie Embedded Engineer Dec 09 '23

We have our own internal version we are allowed to use.

1

u/alinroc Database Admin Dec 08 '23

I would only use it for work if the company paid for it and it was covered under the governance policies of the company.

If you're using one of Microsoft's Copilot-branded products that use ChatGPT, and you're logged into your company's M365 tenant (subscription), then anything you put into it is (from what Microsoft has said) isolated to your company's tenant, not leaked to randos.

0

u/jfcarr Dec 08 '23

The AI Luddites in cybersecurity have blocked it.

7

u/codefyre Software Engineer - 20+ YOE Dec 08 '23

It's not Luddism. By default, ChatGPT (and presumably most commercial AI tools) incorporates your conversations into its training data to improve performance. Every word you type into ChatGPT can become part of its dataset. Security researchers keep finding exploits that cause these chatbots to reveal their training data. Google published a ChatGPT infinite-repetition exploit just last week, which allowed its researchers to extract raw training data from that system. In theory, anything you post into ChatGPT might be extractable through one of these exploits.

If your job includes managing access to a secure codebase, that's a bit of a nightmare scenario. You don't want your devs posting and refining code in a system that might leak that code to anyone.

1

u/xiongchiamiov Staff SRE / ex-Manager Dec 08 '23

If we are using it for anything work-related, yes. We're actively encouraged to find new ways to do so.

We have guidelines about what to put into ChatGPT and how we can use the output, which I crafted based on reading their terms of service, conversations with leadership, and the infosec part of my brain. We decided we were OK with putting code and certain other proprietary information in there, but not credentials or PII. The risk analysis will vary by company.

1

u/Source_Shoddy Software Engineer Dec 08 '23

My company provides access to GPT-4 via Azure since they have the proper confidentiality agreements in place there.

1

u/biriyani_critic Dec 09 '23

What is a use case that is made more efficient by using ChatGPT?

1
