What’s ahead for 2024? Elections in some of the most influential countries in the world, including the US, India, Venezuela, Russia, South Africa, Taiwan, China and the UK. And yet we have zero handle on the use of generative AI, by legitimate or illegitimate players, to manipulate electorates in how they vote. That’s terrifying. In a small effort to equip myself with some strategies and frameworks for thinking through the use of AI in digital product design, I took this course. It’s only a start, but it’s an important one. The Center for Humane Technology (CHT) reminds us that it is possible, and economically viable, to develop products that help humanity rather than just driving engagement. To anyone who is applying AI to products that will be out in our world, I recommend looking into the work of Tristan Harris, Randima Fernando and colleagues at CHT. So, that’s my hot take for the new calendar year. What’s shaping your thinking around AI?
miriam healy’s Post
More Relevant Posts
-
🎓🌐 Just completed the "Foundations of Humane Technology" course, and if you're passionate about tech and AI product development, this course might just pique your interest! It provides invaluable insights into creating products that honor human nature, avoid cognitive traps, prioritize values, and advocate for fairness—all aimed at empowering people to thrive. Highly recommended! 🚀💻 #TechProductDevelopment #HumanNatureInTech #EthicalAI #centerforhumanetechnology
Foundations of Humane Technology • Lisa Knoll • Center for Humane Technology
credential.net
-
Why do we need more humane technologists? If you're involved in any kind of product design role, I encourage you to dive into the course on design ethics by the Center for Humane Technology. You'll add value to your work by applying those principles through the entire process, from design thinking to product development, helping users satisfy their needs the right way. Learn more: https://lnkd.in/d3DwuSKa #designethics #uxdesign #ai
Center for Humane Technology
app.participate.com
-
Marketing for B2B Tech and Deep Tech | AI for Marketing | Trainer & Speaker | Podcast Author and Host @ Мечка страх, мен не
Technology is not neutral. Technology is political. "Our democracies are vulnerable to technologies that manipulate consensus. Our shared understanding of the world is being polluted just like the Earth." Our most precious human resource, our attention, is turned into a monetization target and hijacked by attention-seeking platforms. "Technology exists in a complex system of human vulnerabilities, economic and social mechanisms, and deeply held paradigms of thought." I’m still processing what I learned from the “Foundations of Humane Technology” course by the Center for Humane Technology. This is just a glimpse of the serious, complex and large-scale problems of humanity that the course candidly addresses. Yet it left me feeling optimistic and inspired to find out how I can push on in my environment to contribute towards the change they are driving. This course is a must-have for everyone involved in creating technology products. It offers strong conceptual frameworks, hands-on tools, and tons of resources to help us become mindful and aware of the unintentional harms we could cause, the ones we could prevent, and the ones we can’t prevent but should not be blind to. Here's a link to the course (it's completely free too): https://lnkd.in/dXAdETMG Because “change happens at different levels with multiple degrees of influence.” #HumaneTechnology #YourUndividedAttention #TechEthics #ResponsibleAI #AIforGood #EthicalAI #ProductDevelopment #technology #future
Foundations of Humane Technology • Ina Toncheva • Center for Humane Technology
credential.net
-
The Center for Humane Technology notes that AI will primarily replace cognitively intensive jobs that require higher education, with skilled manual labour jobs being impacted much later. They also cite OpenAI's research, which shows this could affect 57 million jobs that require a Bachelor's degree and 21 million jobs that require a Master's degree or higher. That means a total of $4.2 trillion of income could be negatively impacted. So, when you hear about AI improving efficiency in companies, it's because it could be eliminating jobs permanently. We are not ready for the economic impact of this kind of technology, so please educate yourself before you start advocating for it. #ai #artificialintelligence #technology #ethics
AI Town Hall -- Center for Humane Technology
https://www.youtube.com/
-
Unequal Park Quality Exposed Using Social Media, Machine Learning The study uses social media and machine learning to show environmental injustices in Philadelphia's urban parks.... https://lnkd.in/eNyETYdk #AI #ML #Automation
Unequal Park Quality Exposed Using Social Media, Machine Learning
openexo.com
-
When we say Artificial Intelligence, what comes to mind? Is it slightly odd-looking computer-generated imagery? Is it robots? Or maybe even the sweet sound of birdsong? There's lots of debate about AI, how it should be used, potential copyright infringement, and the morality of using it to create art - but there are so many amazing real-world applications for it too, ones that will enable extensive data sets and exciting research, just like this conservation project in Somerset. It's such a wide-reaching technology with so many different implications, but we want to share that it doesn't have to all be doom and gloom. https://lnkd.in/eH2qB65B
AI analyses bird sounds for Somerset conservation project
bbc.co.uk
-
In case you missed it! The Center for Humane Technology released a video discussing the capabilities of AI, a background on LLMs, the increasing disparity in AI's relationship to our economy, where the problems lie, and how as a society we can move forward. Click the link to watch their video!
AI Town Hall -- Center for Humane Technology
https://www.youtube.com/
-
A dear colleague of mine, Cory Cox, introduced me to the attached thought-provoking (and somewhat scary) video on the potential dangers of AI. I really loved the Three Rules of new tech introductions: 1) When you invent a new technology, you uncover a new class of responsibilities; 2) If the tech confers power, it starts a race; 3) If you do not coordinate, the race ends in tragedy (think social media and its impact on depression/suicide rates). I am not advocating for any administrative burden or bureaucratic committees that stifle innovation. That said, with power comes responsibility. We must anticipate and mitigate unintended consequences. As the presenters of the video (Tristan Harris and Aza Raskin) said (and I paraphrase), we didn't know we needed privacy laws until photography and the internet materialized. Maybe what we really need is a framework by which new technologies are held. This would at least give all of us a reference point for thinking about unintended consequences, responsibilities and guard rails for our new innovations. Just like new tech has standards committees that define how new tech works and interacts, we might have a framework body of researchers that defines these innovation guard rails. Please share your thoughts. Do we need government laws/agencies to guide new tech? Do we need a "standards" committee? Other? I welcome the discussion. #ai #leadership #innovation https://lnkd.in/gtjm6_Mk
Center for Humane Technology Co-Founders Tristan Harris and Aza Raskin discuss The AI Dilemma
https://www.youtube.com/
-
Reinforcement Learning Team Leader & BO Tech Expert @ Huawei Research London - Advisor @ Sanome - Honorary Assistant Professor at UCL - All opinions posted here are my own.
It turns out REINFORCE is a big thing again! It has been used in RLHF, and people are going crazy about what it is. Well, it is the oldest and easiest algorithm in #RL. I even had a video on it three years ago: https://lnkd.in/e9Emky2E #AI #MachineLearning
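For anyone curious what the fuss is about, REINFORCE really is about the simplest policy-gradient method there is. Here's a minimal sketch (my own illustration, not taken from the linked video) on a two-armed bandit with a softmax policy, using only NumPy — the problem setup and all names are assumptions for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-armed bandit: arm 1 always pays 1, arm 0 pays 0.
true_rewards = np.array([0.0, 1.0])
theta = np.zeros(2)  # policy logits, one per arm
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())  # subtract max for numerical stability
    return z / z.sum()

for step in range(2000):
    probs = softmax(theta)
    action = rng.choice(2, p=probs)       # sample an action from the policy
    reward = true_rewards[action]
    # Gradient of log pi(action) w.r.t. the logits: one_hot(action) - probs
    grad_logp = -probs
    grad_logp[action] += 1.0
    theta += lr * reward * grad_logp      # the REINFORCE update

print(softmax(theta))  # probability of pulling arm 1 should approach 1
```

The whole algorithm is the one update line: actions that earned reward get their log-probability pushed up, in proportion to the reward. In RLHF the same idea is applied at a much larger scale, with a learned reward model standing in for the bandit payouts (and usually a baseline subtracted from the reward to reduce variance, which this sketch omits).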
Policy Gradients Reinforcement
https://www.youtube.com/
-
Helpful and thoughtful guidance from Jim Lang. The argument for speed and efficiency always makes me think of its role in Frankenstein: Victor begins developing and composing deliberately, which includes thinking about complications, then, in the name of speed, decides to disregard any concern for audience and counter-perspectives. I guide my students in a "slow reading" of this passage in a composition course and point to Victor as a rhetorical model of what not to do when composing or creating.
Higher education has long been accused of being a slow-walking animal. When it comes to our embrace of generative artificial intelligence, we should embrace that quality as a virtue. Walking slowly creates time for reflection and discussion, oft-neglected activities in a culture that prioritizes economy and efficiency. #artificialintelligence #teaching
Advice | The Case for Slow-Walking Our Use of Generative AI
chronicle.com