Here are the latest industry updates you need to know this week👇 ✅ Gen Z prefer fan-made content ✅ TikTok copyright notices ✅ Downloading TikTok clips ✅ Meta AI labels ✅ Meta ad-free subscriptions Read the latest release of This Week In Social to stay up to date, and subscribe to be the first to hear the latest social media news! 🚀 #SocialMedia #socialmediamarketing #SocialMediaAgency #tiktok #instagram #youtube
The Social Shepherd’s Post
-
CEO & Chief Data Scientist - HBR Advisory Council - Visiting Professor - Author of 'Psychometrics in Recruitment' - Host of 'The Minhaaj Podcast' - Neurodiverse - LinkedIn SSI Top 1% Influencers in AI
Exciting news! France's competition watchdog has fined Google €250 million (about $272 million) over concerns regarding the use of copyrighted material from media publishers in the European Union. This decision comes as a result of Google's AI chatbot Gemini, previously known as Bard, being trained on content from news publishers and agencies in violation of the company's prior commitments. All companies should follow suit.

This ruling is part of a larger story that began in 2020, when a French court mandated payments for corporations' use of media producers' intellectual property, in accordance with 2019 EU copyright regulations. This meant that Google had to compensate publishers whose content was displayed on its search engines and other platforms.

In response to the fine, Google has stated that it agrees to settle in order to "move on" and focus on working constructively with French publishers. However, the tech giant believes that the fine is disproportionate to the issues at hand and does not adequately consider the efforts made to address the concerns raised. Google highlights that since the regulations came into effect, navigating negotiations with publishers has been challenging due to a lack of clear guidance, especially as the landscape of publishers continues to evolve. Despite this, Google emphasizes that it is the first and only platform to have signed licensing agreements with 280 French press publishers, which cost the company "several tens of millions of euros per year."

This development is significant as it occurs amidst a broader conversation about AI services scraping the internet, including copyrighted material, for content to train their models. Publishers and newsrooms have expressed concerns that AI systems are using their content without permission or compensation. This issue has led to actions such as news organizations blocking OpenAI from scanning their websites for content using its web crawler GPTBot.
Additionally, in December, the New York Times sued Google's rivals Microsoft Corp. and OpenAI, alleging wide-scale copyright infringement by copying "millions" of its articles to train their AI models without permission. This ongoing struggle over copyrighted content also involves book authors, such as Sarah Silverman, who have filed lawsuits against OpenAI and Meta Platforms Inc. These legal battles highlight the complex intersection of AI, copyright law, and content creation in the digital age. #copyrights #google #chatgpt #openai #EU #artificialintelligence
-
📰💡 Google's €250m Fine for AI and Media Content Breaches 💡📰 The complex world of intellectual property has once again made headlines, this time involving tech giant Google and French regulators. Let's break it down.

📚 Background: Google has been fined €250m (£213m) by France's competition watchdog for not adhering to an agreement to pay media companies for reproducing their content online. The issue stems from Google's AI-powered chatbot, Bard (now Gemini), which was trained on content from publishers without notifying them.

🔍 The Agreement: In 2022, Google committed to negotiating fairly with news organizations and providing transparent payment offers within three months of receiving a copyright complaint. However, Google violated four out of seven commitments in the 2022 settlement, including negotiating in good faith and providing transparent information.

🧠 The AI Angle: The French watchdog specifically pointed out that Bard, launched in 2023, used data from media outlets without informing them. This hindered publishers' ability to negotiate fair prices, tying AI's use of their content to the wider dispute over the display of protected content.

👊 The Dispute: France has long fought to protect publishing rights and revenue against the dominance of tech companies. The EU's "neighbouring rights" copyright allows media to demand compensation for their content. After initial resistance, Google and Facebook agreed to pay French media for articles shown in web searches. The recent fine follows a complaint from major French news organizations and Agence France-Presse (AFP).

💼 Google's Response: Despite signing licensing agreements with 280 French news publishers, Google was fined due to issues in its negotiation practices. Google's statement emphasized its efforts to work constructively with publishers, even though it found the fine disproportionate.

Conclusion: Google's case is a significant example of how AI and media content intersect in the realm of intellectual property.
The evolving landscape requires tech giants to navigate the delicate balance of innovation and fair compensation for content creators. Stay tuned as this unfolds! Read more: https://lnkd.in/dG7XbRQ6
Google fined €250m in France for breaching intellectual property deal
theguardian.com
-
📊 I help businesses put their privacy compliance on autopilot, saving them time and money in the process.
📢 Google Hit with 250 Million Euro Fine by French Competition Authority 💶 In a significant move on March 20th, France's competition watchdog imposed a 250 million euro ($271.73 million) fine on Google for violations related to EU intellectual property rules in its interactions with media publishers. The fine centers on concerns about Google's AI chatbot, originally named Bard and now known as Gemini, which was found to have been trained on content from publishers and news agencies without their consent or knowledge.

🇫🇷 This fine stems from a broader copyright dispute in France, ignited by complaints from some of the nation's largest news organizations, including Agence France-Presse (AFP). Although a previous disagreement seemed resolved in 2022, with Google dropping its appeal against an initial 500 million euro fine, the authority now accuses Google of failing to meet several of its settlement commitments, especially regarding negotiations with publishers and transparency.

👉 Google has opted not to dispute the facts in a bid to reach a settlement, offering to implement measures to address the identified shortcomings. The tech giant expressed its desire to move beyond this issue, focusing on the larger goal of developing sustainable methods for connecting people with quality content and fostering constructive relationships with French publishers.

👉 However, Google has labeled the fine as disproportionate, critiquing the regulatory environment for its unpredictability and claiming that the watchdog failed to adequately consider its efforts to comply.
🛎️ This case adds to the growing call for clearer regulations on the use of copyrighted materials by AI platforms, as highlighted by similar legal challenges faced by other major tech companies. As publishers, writers, and newsrooms worldwide seek better control over their content in the digital age, Google’s situation serves as a pivotal example of the ongoing negotiations between tech giants and content creators over intellectual property rights and fair compensation.
-
As my readers know… I am a fierce advocate for protecting news, copyrights, trademarks, and original content in general. Yet publishers need to learn from music and movie/video: strike deals! Get in on the evolution. Don't be left in the cold.

From Robinhood:

"Major news sites try to close off OpenAI from their articles, while some embrace the tech

Creepy crawlies… A new report from the Reuters Institute said that by the end of last year, nearly half of top news sites had blocked OpenAI's crawlers to stop ChatGPT from ingesting their content. Crawlers are bots that scrape data from across the web. AI companies like OpenAI and Google use crawlers to collect content for training their models. Having access to reputable news info is crucial for AI companies, which have pitched their bots as the future of search. 600+ news publishers have opted out of crawlers from OpenAI, Google, or Common Crawl (a nonprofit).

Asking nicely: Articles from sites like The New York Times are easily discoverable, and many question whether blocking AI crawlers can effectively protect content.

Paper trail: Even if blockers fail, the act of opting out may give publishers a stronger case in copyright-infringement suits.

Two roads diverged in an AI wood… and they're both less traveled by. A division is forming in how publishers and creators respond to the threat from AI.

Some are taking the courtroom route: The Times is suing OpenAI and Microsoft, alleging they trained their bots on millions of Times articles. Several copyright lawsuits have been brought by authors including John Grisham and George R.R. Martin.

The other approach: Some publishers are embracing AI by striking licensing deals. OpenAI said it would pay Axel Springer (Business Insider, Politico) to use its content in ChatGPT, and Microsoft teamed up with Semafor for AI-assisted news.
THE TAKEAWAY It’s unclear who’ll come out on top… If courts find that AI companies infringed on copyrights, publishers could be owed big $$ and companies like OpenAI could see more suits (and lose access to sources they rely on). But publishers that negotiate early licensing deals with AI companies may have a leg up if courts don’t rule in their favor. With all the traffic they’ve already lost to Facebook and Google, publishers might be eager to set a precedent in which they have a financial gain.”
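The opt-out mechanism the newsletter describes is the robots.txt protocol: a publisher lists a crawler's published user-agent token and disallows it. OpenAI documents its crawler token as GPTBot and Common Crawl's is CCBot; the sketch below (the example rules and URLs are illustrative, not from any real site) uses Python's standard-library robotparser to show how a compliant crawler would interpret such a file:

```python
from urllib import robotparser

# robots.txt rules a publisher might serve to opt out of AI-training
# crawlers (GPTBot = OpenAI, CCBot = Common Crawl) while still
# allowing everything else, e.g. search-engine indexing.
RULES = """
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(RULES)

# A compliant crawler checks its own token before fetching a page.
print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```

Note that robots.txt is purely voluntary: it only restrains crawlers that choose to honor it, which is why (as the "Paper trail" point above suggests) its main value to publishers may be evidentiary rather than technical.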
-
All media and content providers take note. Smart move by both; it could help set the precedent that was missed in the early days of sharing music and streaming, not to mention social media. However, content makers need to keep the pressure on, through every means, and regulators need to weigh in ASAP. AI is here. It will only get better and more precise with access to real and powerful content. We need deals like this to ensure that the output is accurate and not skewed. Weigh in!

From Robinhood:

"As AI copyright battles mount, OpenAI's deal with a media giant could provide a new framework

AI Insider… OpenAI will pay media powerhouse Axel Springer (Business Insider, Politico) to use its news content in ChatGPT answers and training. The multiyear licensing deal will let the chatbot summarize news stories from Axel Springer's myriad media brands. ChatGPT will include links to the OG sources to give the sites credit and clicks. The partnership's expected to bring in big bucks for Axel Springer. FYI: OpenAI also made a deal with AP in July, allowing it to use the news org's archive for training.

Interesting timing: Since August, over 500 news publishers (including The New York Times, Reuters, and The Washington Post) have installed software to block their articles from being collected and used in ChatGPT training.

Playin' defense: This year there've been several reports that major news publishers were prepping for a case to force AI companies like Alphabet and Microsoft to compensate them for content. Licensing deals could help avoid copyright-infringement suits.

From Jodi Picoult to Getty Images… Big names are getting involved in AI legal battles, which have piled up since ChatGPT's rollout. In September, famous scribes including John Grisham, George R.R. Martin, and Jodi Picoult sued OpenAI over copyright infringement. In July, Sarah Silverman and others sued OpenAI and Meta.
It's not just books: Getty Images sued Stability AI, alleging it "unlawfully copied and processed millions of images protected by copyright" without a license. Coders are suing AI companies, too, accusing them of "software piracy."

Big law: The EU just reached an agreement on its "AI Act," a historic law that would make AI companies create safeguards against illegally generating content.

THE TAKEAWAY

Catch a wave before it crashes… By negotiating with news publishers, OpenAI may be trying to set a commercial precedent before lawsuits set a legal one. If courts find that AI companies infringed on copyrights, the financial fallout could be huge. News publishers, who learned a lesson from all the traffic and $$ lost to sites like Facebook and Google, may also be eager to set a precedent in which they have a financial gain."
-
US' oldest nonprofit newsroom sues OpenAI, Microsoft for copyright #tech

❇️ Summary: The Center for Investigative Reporting has filed a lawsuit against Microsoft and OpenAI for alleged copyright violations, accusing them of using the news organization's content without permission. This legal action is part of a trend in which publishers challenge AI companies over copyright infringement. The lawsuit highlights concerns about AI's impact on journalism and intellectual property rights. Other news organizations have filed similar lawsuits against Microsoft and OpenAI. The legal proceedings are expected to spark debate on the ethical implications of AI-driven content aggregation.

Hashtags: #chatGPT #copyrightviolation #lawsuit
US' oldest nonprofit newsroom sues OpenAI, Microsoft for copyright #tech
https://webappia.com
-
Enrich your marketing knowledge with Brain Food's weekly buffet of the most recent industry insights! 🧠 🥘 New York Times Sues OpenAI And Microsoft: ‘Billions’ Owed For AI Copyright Infringement - https://lnkd.in/esweS6GH We're getting closer to OpenAI's first device - https://lnkd.in/eqjYDX_N PPC 2023 in review: UA sunsets, Google antitrust trial, X’s downfall and more - https://lnkd.in/e8UXA6fB TikTok is making users give their iPhone passwords for unclear reasons - https://lnkd.in/eP-Ati2x Email marketing 101: The five basics - https://lnkd.in/eRVGCd4B Enjoyed this post? Stay updated with our latest insights by signing up to our newsletter! ✨ Link in the comments below. ⬇
New York Times Sues OpenAI And Microsoft: ‘Billions’ Owed For AI Copyright Infringement, Case Claims
forbes.com
-
Will the New York Times lawsuit against Microsoft and OpenAI stymie AI progress in 2024? Here's a custom Q&A GPT on the suit: https://bit.ly/4aCKDpS. The lawsuit comes on the heels of The Associated Press and Axel Springer coming to terms with OpenAI on content distribution.

The importance of this lawsuit can't be overstated. Content publishers lost control very quickly 20 years ago when Google started indexing and accessing the web. It shifted the revenue streams of content access precipitously toward online search engines and caused a lot of content publishers to restructure and downsize their businesses. So far it looks like that was a big lesson learned! The lawsuit succinctly encapsulates the legal arguments that will be a centerpiece in the battle for AI revenue streams for the next 20 years.

The New York Times Company has initiated a legal action against Microsoft and various OpenAI entities, alleging the unlawful use of its copyrighted material to train and develop generative artificial intelligence (GenAI) products. The lawsuit aims to protect the journalistic integrity and proprietary content of The New York Times, emphasizing the importance of copyright laws in safeguarding the fruits of intellectual and creative labor.

Here's a short summary of the lawsuit:

Copyright Infringement: The defendants allegedly used millions of copyrighted articles from The Times to train their large language models (LLMs), creating competitive products without permission or compensation.

Value and Emphasis on Times Content: The Times argues that the defendants specifically emphasized its content in their LLMs, recognizing its high value and thereby infringing upon its exclusive rights.

Fair Use Doctrine Misapplication: The defendants claim their actions constitute "fair use" due to the transformative nature of GenAI. The Times counters that using its content to create substitutive products is not transformative and thus not protected by fair use.
Economic Harm: The unauthorized use of The Times's work undermines its subscription, licensing, and advertising revenues, jeopardizing its financial sustainability and the quality of journalism it provides.

Refusal of Legal Protections: Despite the clear legal protections and rights provided to creators under copyright law, the defendants continue to use The Times's content within their AI models, generating outputs that mimic, summarize, or directly use the content.

Negotiation and Resolution Attempts: The Times has sought to negotiate a fair resolution that would allow the use of its content while ensuring fair compensation and continued viability for high-quality journalism. However, these efforts have not led to an agreement.

#NYTvsOpenAI #CopyrightLaw #AIethics #GenerativeAI #TechLegalBattle #JournalismMatters #ContentOwnership #FairUseFight #MachineLearningRights #DigitalCopyrightEnforcement
-
🔥 Campfire's recommended read, Thursday edition 🔥

How copyright lawsuits could kill OpenAI.

"Late last year, the New York Times sued OpenAI and Microsoft, alleging that the companies are stealing its copyrighted content to train their large language models and then profiting off of it. In a point-by-point rebuttal to the lawsuit's accusations, OpenAI claimed no wrongdoing. Meanwhile, the Senate Judiciary Subcommittee on Privacy, Technology, and Law held a hearing in which news executives implored lawmakers to force AI companies to pay publishers for using their content."

📖 Read the full article here: https://buff.ly/3U500S8

#news #recommendedread #read #weeklynews #updates #trending #auckland #hiringnow #hiring #applynow #apply #digital #marketing #digitalmarketing #digitalmarketingjobs #marketingjobs #ecommerce #SEO #specialist #roles #recruitment #campfire #campfiredigitalrecruitment #agency #digitalmarketers #marketers #nz #newzealand #aucklandnewzealand #nzwide #newzealandwide #nzjobs #aucklandjobs #jobseekers #seek #jobhunters #jobs #jobhunt
How copyright lawsuits could kill OpenAI
vox.com
-
This is what a layoffs/quality/revenue/layoffs cycle looks like. Running a service at Internet scale isn't the hard part any more. (Want to run a site with a billion+ users? I can make an intro to a ScyllaDB sales engineer if you want; you're all set.)

The hard parts of any big Internet service are moderation and ad review. Honestly, as a former editor, I would rather be an editor than a moderator.

* An editor knows the subjects to be covered and the language, in advance. A moderator might get a post about anything.
* An editor works on the publication's own schedule. A moderator has to handle UGC as it comes in.

But Big Tech execs are so scared of union organizers that they would rather flood their users with this crap than hire/train/pay the moderators they need.
Let's discuss moral license to operate, and why consent is the primary moral right.

This is a Facebook ad for a Google Play Store app. Apologies to those who find it upsetting. The underlying AI model enables endless CSAM and deepfake harassment. Its maker is the defendant in several lawsuits across the world, yet is somehow still allowed on the UK government responsible AI working group.

It took a team to carry out the sourcing and inadequate filtering of online material that made their developer ecosystem complicit in CSAM distribution. It took a team to make it a model, and it took a platform to distribute it. But it only took one individual to fund the operation and decide to release the results into the world.

This post is not for naming names. Justice will have its course. It is about the cultural enablers and the failure of systemic checks and balances behind what most of us surely recognize as a moral failure.

Under binding treaties which harmonize copyright law across 193 countries, including yours and mine, creators hold not only a moral right of attribution and exclusive commercial exploitation rights to their works. The primary moral right is consent: the right to say no to uses of their works they find objectionable. (Yes, there are exceptions for scientific research and mining of facts and ideas. No, that is not what diffusion models are. Images are not ideas.)

This app is called NeoMoe: "neo" as in new, "moe" as in Japanese for fetish. It recently hit 100K downloads, incidentally the number of minors sexually harassed on Facebook every day of 2021, according to a recent lawsuit. The Play Store marketing is the usual "be an artist by typing" schtick, paired with more toned-down imagery. But the title and ad are clear about intent.

CSAM has exploded this past year. This is what sludge models like Stable Diffusion enable: hijacking art styles, such as The Walt Disney Company style below, to produce vile content with little to no effort.
Putting the skills of the best of us in the hands of the worst of us, severed from the social norms that otherwise enter the creative and publishing process, to the detriment of all of us.

Meta: why serve ads like these on Instagram and Facebook?

Google: why publish apps like these on the Play Store?

Green Great Tools (app devs): why would any of this be okay?

Stability AI: why spend a hundred thousand dollars to bake and release SD into the wild with a "no breaksies any lawsies ;-)" note? A billion works were opted out post facto, but this damage is irreversible.

AI boosters: what makes you think present laws do not apply?

Open Source enthusiasts: how do you propose to balance abuse like this with the legitimate upsides of open innovation? Honest question.

Yes, incomes are threatened and commercial rights were infringed. But consent is the primary right upon which the others rest.

#degenerativeai #createdontscrape

404 Media https://lnkd.in/dFcz6xwe