Sign in to view Hossam’s full profile
San Francisco Bay Area
3K followers
500+ connections
Experience & Education
- Data Axle, ***** ******* *******
- *********, ******* ********** **********
- ******, ******* *******, ******** ******** *** ***********
- ******** **********, *** ********** ** **********
- ********* **********, ******** ** ******** ******* ******** *******
Other similar profiles
- Anoop Sreenivasan, Sunnyvale, CA
- Danielle Weinblatt, Miami-Fort Lauderdale Area
- Kevin Mease, Austin, TX
- Francisco Uribe, Miami, FL
- Sharma S, Versatile Product Strategist | Marketing Expert | Deep Tech Enthusiast, New York City Metropolitan Area
- Michele Richards, San Diego County, CA
- Bart Peluso III, Knoxville, TN
- Brian Elliott, San Francisco, CA
- Dave Chiang, Saratoga, CA
- Alex Toussaint, San Francisco, CA
- Wayne N. Driggers Jr., Saint Johns, FL
- Ann Marie Isleib, United States
- Megan Berry, Portland, OR
- Harry Patz, Jr, New York City Metropolitan Area
- Nathan Guinn, Director of Product Management at OCLC, Lehi, UT
- Max Branzburg, United States
- Anthony Lye, San Mateo, CA
- Lynne Zaledonis, San Francisco, CA
- Mansoor Basha, New York, NY
- Olivier Wellmann, San Francisco Bay Area
Explore more posts
Manjeet Singh
As academic LLM benchmarks lose their relevance, there are two types of benchmarks that will matter more in the B2B space: 👉 Crowdsourced user votes to rank LLM output (like Chatbot Arena), or benchmarks where the test dataset is kept secret by the evaluator (e.g. GSM1k from scale_AI) 👉 Enterprise use-case-specific LLM benchmarks (e.g. produced by CRM, ITSM, and HR SaaS companies), evaluated on real data. Because you cannot game real customer use cases. #llm #benchmarks #llmeval #AI
James Evans
Kyle Poyar and Lenny Rachitsky just dropped one of the most tactical "how to build products with embedded AI" guides I've seen. We now have a sample size of successful AI features to learn what separates useful from gimmicky. Really cool to see the breadth of advice here, including my and Elad Gil's extremely-easy-to-apply tactic 😇 I also extremely vibe with: - Claire Vo's point about incrementalism. Sprinkling AI fairy dust around your product in small-but-cumulatively-meaningful ways is the new tactic to make your product feel luxurious and modern. - Rahul Vohra's point about speed. We see this in our data too: lots of user drop-off when the model is too slow to respond (which has made us loath to use slow models except for really tricky questions). - Cameron Adams's point about onboarding. ChatGPT has onboarded hundreds of millions of users to prompt engineering, but that doesn't mean they will just figure out the nuances of your AI feature. Full post here: https://lnkd.in/gkfkQubh
Spence Green
State-of-the-art enterprise language/localization programs are now being built on the concept of **AI model interoperability.** For decades, large programs have been built on the concept of **services vendor interoperability.** The premise is that quality follows from contracting multiple vendors and compelling them to compete on price and quality. As the proportion of work done by AI increases, it follows that the focus should be on selecting and tuning the best models, and the human vendor component becomes a commodity. #enterpriseai #ai #localization #globalization #startups
Albert Fong
That awkward silence is coming from your robotaxi. Waymo has had its share of bumpy roads, but its parent Alphabet believes the self-driving technology is on the right path to the yellow brick road, investing another $5 billion. The self-driving company is making the right turns. Drive around San Francisco on any given day, and chances are good the vehicle that comes up right next to you will have you doing a double take. Robotaxis are commonplace in the city by the bay. Whether it's the traffic-filled streets of downtown or the narrow streets of Golden Gate Park, Waymo vehicles are dodging traffic and avoiding pedestrians. It's full speed ahead. Waymo delivers 50,000 paid rides per week, offering a fully driverless ride-hail service in San Francisco and Phoenix, and recently expanded to Los Angeles and Austin. And soon, the service will be available for airport rides to and from San Francisco International Airport. The road well traveled doesn't mean everyone is so accepting. While Waymo has received compliments from some, it hasn't been welcomed by others. In San Francisco, vehicles have been vandalized and destroyed, and many question the safety of self-driving vehicles. Whichever side you fall on, self-driving can be a highly polarizing issue, and only time will tell if the perception will ever change. Alphabet is betting that the sky's the limit. While the $5 billion commitment is nothing to shake your windshield wipers at, Waymo's partnership with Chinese electric automaker Zeekr is both symbolic and substantial. Waymo's deployment of a custom-built, roomier robotaxi made by Zeekr marks a significant step forward in the company's autonomous vehicle development. Zoom, zoom, just not too much zoom. Robotaxis aren't going away. Waymo is expanding, albeit slowly, here in the States, while other countries such as China have companies such as Baidu taking the lead in major cities. And Elon Musk is gearing up with his own line of robotaxis in the near future. For Alphabet, the challenge is not only making them more acceptable to the masses, but also addressing safety concerns. The technology isn't perfect, but $5 billion will certainly help get it there. https://lnkd.in/gn3TZcPx #alphabet #waymo #zeekr #robotaxi #culture #autonomousvehicles #automotiveindustry #transportation
Scott Persinger
🍜Anybody up for some ramen? I am excited to share that I am back in the garage, building again (it's actually a house in the Berkeley Hills). We are taking the wraps off of Supercog AI, a new startup focused on solving application integration using LLM-powered agents. My co-founder in this new venture is a great friend of mine (and a former co-founder partner), Alex Osborne. Alex and I have worked around the application integration space for a long time, and we are unrealistically excited about applying the GenAI stack to this problem. Not many people relish the task of getting bits to move from system A to system B. But it's a critical job to be done, and that's led to a lot of brittle scripts, a myriad of inflexible "no code" tools, and a lot of hand-crafted SQL. But the power of the Large Language Model, trained on a huge corpus of information, offers a remarkable tool to solve this problem. The LLM you use today already knows the APIs of hundreds of popular systems. It knows the SQL dialects of every major database. It understands the semantics and data schemas of many popular SaaS systems. It understands much of the specific domain in which your business operates. I like to call the AI revolution "the last platform shift". That may turn out to be hyperbole, but I definitely believe we are in the very early stages of seeing what this new stack can enable. I know we are hardly the first folks to claim that "amazing stuff is coming!". But this is why Alex and I have decided to focus on a real and hard problem. If we can prove that LLMs can power a new way to solve this problem, it will be the first successful *new* approach in 15 years. We will have much more to share about this new platform soon. In the meantime, if you're interested in getting a peek - or even better, if you have application or data integration tasks that you would like some help with - please reach out, because we want to help. #genai #startups #backinthegarage
Muaz Siddiqui
Hallucination is Inevitable
In today's evolving landscape of Large Language Model (LLM) implementations, a critical issue persists: the inevitability of hallucinations. As we continue to integrate these advanced models into various applications, it is essential to recognize and address the inherent limitations that come with them.
What Are Hallucinations in LLMs?
Hallucinations in LLMs refer to outputs that deviate from a computable ground truth function. Essentially, these are inaccuracies or false information generated by the model. This concept is formalized in a recent paper, which defines hallucinations as any deviation between the output of an LLM and the actual computable ground truth. https://lnkd.in/gbufQXaf
The Theoretical Foundations
Through a diagonalization argument, akin to those used in Gödel's incompleteness theorems and the Entscheidungsproblem, it can be demonstrated that LLMs cannot learn all computable functions. This inherent limitation ensures that hallucinations will always occur to some extent. These arguments underscore a fundamental truth: LLMs, by their nature, require external sources to validate and correct their outputs.
The Silver Lining
Despite these challenges, there is immense potential in the capabilities of LLMs. They generate a richness of thought and creativity that, when properly harnessed, can lead to groundbreaking advancements. The key is grounding these models with reliable, external knowledge bases to enhance their accuracy and dependability.
Introducing Cerevox: Your Solution to LLM Hallucinations
At Cerevox, we understand the critical need for accuracy and reliability in LLM-powered agents. Our platform provides your engineering team with full visibility and control over all data utilized by your LLM agents. We meticulously track and manage data exceptions, ensuring your knowledge base remains pristine. Our commitment to excellence means that 100% of the data in your knowledge base is structured to effectively ground your LLM agents, minimizing hallucinations and maximizing accuracy. By maintaining a clean and reliable knowledge base, Cerevox guarantees that your LLM agents operate with the highest level of integrity.
Schedule a Demo Today
Discover how Cerevox can transform your LLM implementations. Schedule a demo with us today and see firsthand how our platform can enhance the accuracy and reliability of your LLM agents. Schedule a Demo - https://lnkd.in/gwbrqfhS
Join us on the journey to harness the full potential of LLMs while mitigating the risks of hallucinations. Together, we can build a future where advanced language models are not just powerful, but also trustworthy and accurate.
Alan Colantino
As we grow in our careers, we are faced with more and more $5 distractions every day at work. Learning which ones limit our forward thinking and which ones must be answered is a major step in growing your career. This reminds me of the "Think Big" leadership competency from Amazon.
Cristian Constantin Olarasu
If you're running a data or AI-focused startup, connect with Stephan Goupille and his team! When three of the industry's brightest data minds join forces to back your venture, you get more than just funding – you gain invaluable operator expertise. They understand the challenges and opportunities you face because they've been there themselves.
Fahad Najam
** This is not financial advice, so do your own due diligence ** A $50B TAM for 1M-GPU clusters? Legendary Broadcom CEO Hock Tan believes so. I had the opportunity to listen to Hock Tan at Arista Networks' 10-year IPO anniversary. Hock believes that for generative AI to truly develop AGI, or the ability to reason, it will need 1M-GPU clusters, and only 2 or 3 hyperscalers, or one or two sovereign customers, have the capability to build such massive clusters. Hock Tan estimates the $50B TAM per 1M-GPU cluster includes $30B for GPUs (assuming a GPU ASP of $30K), $5B for networking (an interesting 6:1 ratio between GPU/compute and networking), and $15B for infrastructure like power, including power generation, cooling, data center space, etc. What plausible business case supports such a massive investment remains to be seen. Hock admits it's more of a moonshot-type project but believes we are getting there. The biggest inhibitor to scaling GPU or compute capacity remains power. I think this has profound implications for investors. While power generation capacity takes a long time to come online (7-8 years) and new data center buildouts take 2-3 years, the only way for hyperscalers (like Amazon Web Services (AWS), Google, and Microsoft) to bring more compute online will be to upgrade their existing brownfield infrastructure. This should be positive for Intel Corporation, AMD, and of course NVIDIA, as not only do we have the possibility of more accelerated compute deployments, but also the upgrade of non-AI compute infrastructure. Prior to the AI momentum, the upgrade cycle of traditional compute was stretching to 5-6 years, with most hyperscale CFOs pushing to extend the amortization schedule of compute assets. Power limitations and growing demand for AI capabilities will force these hyperscalers to rethink their amortization schedules, and thus I believe the pendulum swings in favor of infrastructure providers. Shorter upgrade cycles and more emphasis on higher-speed, more power-efficient technologies are great for networking companies such as Arista Networks and optics suppliers such as Coherent Corp., Lumentum, etc. Interestingly, Hock Tan believes that as AI LLM models achieve AGI capability they will begin to generate their own data (thus the ability to reason) and will not require massive external data to be trained on, which has significant implications for wide-area bandwidth requirements. Would love to hear Bill Gartner's thoughts on this. Hock Tan also believes that power limitations will shift the balance in favor of custom ASICs vs. GPUs. While ASICs in general are more power efficient than GPUs, I think this outcome depends entirely on the maturity of the LLM models. In the past the hyperscalers optimized for $1/gigabit of bandwidth, but now they need to optimize for power/gigabit. This has profound implications for the networking and optics supply chain. Would love to get your thoughts.
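The TAM arithmetic in the post above can be sketched in a few lines; the figures are the post's stated estimates (GPU count, $30K ASP, networking and infrastructure spend), not independently verified data:

```python
# Sketch of the per-cluster TAM breakdown described above.
# All dollar figures are the post's estimates, used here only to show
# that the stated components sum to the $50B headline number.
GPU_COUNT = 1_000_000
GPU_ASP = 30_000                          # assumed $30K average selling price

gpu_spend = GPU_COUNT * GPU_ASP           # $30B for GPUs
networking_spend = 5_000_000_000          # $5B for networking
infra_spend = 15_000_000_000              # $15B for power, cooling, space

total_tam = gpu_spend + networking_spend + infra_spend
print(f"TAM per 1M-GPU cluster: ${total_tam / 1e9:.0f}B")            # → $50B
print(f"GPU-to-networking spend ratio: {gpu_spend // networking_spend}:1")  # → 6:1
```

The 6:1 compute-to-networking ratio the post calls "interesting" falls directly out of the $30B/$5B split.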
Rahul Sood
Great article from The #WashingtonPost highlighting (unfortunately) the "sleight of hand" among deepfake detection vendors. Sad to see how the vendor mentioned in the report was so loose with facts. Claims like these can hurt customers' confidence in genuine safeguards and inadvertently lead them to incorrect decisions, leaving them exposed to bad actors. I have personally spoken to 50+ customers, and our deepfake team has spoken with 100+ customers, on deepfake detection (not counting the hundreds who have participated in our webinars). Here are 3 practical tips for any company looking at a deepfake detection solution. 1. Match the technology to your use case. Deepfake detection is NOT a single use case. Use cases like live impersonation require real-time detection and operationalization, unlike other use cases that require post facto forensics. The vast majority of vendors can only do forensics (and with varying degrees of accuracy at that) and are not designed for real-time, at-scale detection. 2. Ask the vendor to show credentials: -- Published results against open-set scenarios -- Number of patents -- Size of research team -- Coverage of GenAI systems -- Size and representativeness of their testing data set 3. Test the system using your own data, and don't rely just on the vendor's claims. These are not designed to give #Pindrop an advantage, but don't be surprised if #Pindrop is the only vendor that meets ALL these conditions. It's only because we have built our technology the hard, old-fashioned way: -- 8+ years of research in deepfakes -- An amazing team of researchers, built one great hire at a time -- 20+ patents granted/pending just in deepfakes -- Repeatedly participating in the industry's leading tech challenge (ASVspoof) -- Routinely sharing our accuracy against open-set scenarios -- Making our system available for blind testing by third parties -- Accuracy tested by customers using our tech in the real world (and not just labs) Reach out if you want to learn more about how to fortify your deepfake detection capabilities.
Michael Schuette
You raise a few good points; however, the writing has been on the wall for at least 7-8 years now. Intel has stubbornly refused to acknowledge that IO and memory bandwidth are the driving factors for performance and concentrated on the one thing they were good at, that is, single-core (thread) performance. That is something they could get away with 10 years ago, but times have changed. Yet the persona-cults at Intel and a strong sense of "not invented here" have prevented the necessary course corrections and instead driven things further down the rabbit hole. Optical chiplets and all the other accomplishments don't matter if they can't be integrated into a product and roadmap that makes sense. At this point what they have are scattered single accomplishments and even breakthrough technologies, but nothing that would bring them all to one table. And trying to play catch-up with NVIDIA is a noble effort but ignores the reality of user bases, specifically CUDA implementations. That's what the entire scientific world has been using; all models and databases from the last 2 decades cannot be ported over to a new architecture without losing huge amounts of data, which translates into loss of veracity and accuracy of those models.
Meredith Hobik
Data scientists are not trained to understand debits versus credits, CAPEX versus OPEX, accruals, revenue recognition rules, etc. FP&A professionals are not (initially) trained as data scientists. If they are, as Kurt Shintaffer states, they are torching money and time building it themselves. And what happens when the few individuals at your company with this skill set move on? ‼️🤷♀️ Check out what Sandeep Madduri and Naresh Nemali built for finance professionals. Precanto is what this generation of finance professionals needs. Monetize the data … Transformation starts now! 🚀 (Disclaimer: This bold post is coming to you from the recovery room while on pain meds. All is good and the colon is repaired. Please eat your fiber, folks, and listen to your body! But in all seriousness, Precanto is the next wave of innovation for CFOs and FP&A teams.)
Nicholette Daniel
Hot off the press, this latest Silicon Valley Product Group article is a good reminder that there are multiple paths to success, and we have to tailor our strategies to fit our unique circumstances. There’s rarely a single answer or approach when it comes to topics like organizational design— after all, people and companies are complicated and are never the same! To quote Marty: “I believe strongly that with the right people with the right skills, a strong product team can succeed no matter the org design. […] I will never again doubt what a motivated group of empowered and skilled cross-functional professionals can accomplish.” Definitely worth a read if you have ~4 minutes and lead or are part of a product team.
Jessie Copeland
RAM too small? Not sure? If your computer doesn't have enough RAM (random access memory), it can cause performance issues. If your programs freeze, your computer crashes, or it runs slowly, you probably don't have enough RAM. Think of RAM as your CPU's short-term memory: it stores the data your computer needs to open files and run software. If it runs out of RAM, it must move data between the RAM, the CPU, and the disk drive, slowing everything down. As your trusted IT partner, we make sure you have what you need to run your business smoothly, effectively, and efficiently. Contact us today! #TechnicalSupport #RandomAccessMemory #trusteditpartner #wedeliverhappiness
Jared Yarn
Since launching last fall, it's been incredible to see the many workflows and integrations that companies are building with Lucid's Developer Platform. We've seen companies build automated processes and deeply linked templates that help Sales, Marketing, Operations, and Engineering teams get the most out of Lucid. Come learn how to get started building from two of our engineers on the API teams, who help engineers build these integrations every day.
Jason Livingood
For folks working on conversational AI: what is your network latency and jitter budget for great QoE? I assume good QoE would mean less than 400 ms of round-trip delay and very consistent delay (jitter). Appreciate any insights! #conversationalAI #artificialintelligence This came to mind after seeing a demo where the mobile device was connected to the network via USB-C-based Ethernet, to factor out network delay.
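One way to frame the question above is as a simple budget check over measured RTT samples. A minimal sketch, assuming the ~400 ms round-trip figure from the post and a hypothetical 30 ms jitter budget (the post does not give one); jitter here is approximated as the mean absolute difference between consecutive samples:

```python
# Hedged sketch: check RTT samples against a conversational-AI QoE budget.
# The 400 ms RTT budget comes from the post; the 30 ms jitter budget is an
# illustrative assumption, not a standard figure.
def qoe_check(rtt_ms, rtt_budget_ms=400.0, jitter_budget_ms=30.0):
    mean_rtt = sum(rtt_ms) / len(rtt_ms)
    # Mean absolute difference between consecutive samples as a jitter proxy.
    diffs = [abs(b - a) for a, b in zip(rtt_ms, rtt_ms[1:])]
    jitter = sum(diffs) / len(diffs) if diffs else 0.0
    return mean_rtt <= rtt_budget_ms and jitter <= jitter_budget_ms

samples = [180, 195, 210, 190, 205]   # hypothetical RTT measurements in ms
print(qoe_check(samples))             # → True (mean 196 ms, jitter ~16 ms)
```

A production measurement would more likely use the RFC 3550 smoothed interarrival-jitter estimator, but the budget-check structure is the same.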
Jared Miller
Join us in Irvine on July 17th at 6:30 PM for our latest OC ACM talk, "Systems and Software for Averting an Energy Crisis," presented by Bill Gervasi, Principal Systems Architect at Wolley Inc. For more info or to RSVP, click here: https://lnkd.in/gquYQ2z9 According to a recent Forbes article, "The rise of generative AI and surging GPU shipments is causing data centers to scale from tens of thousands to 100,000-plus accelerators, shifting the emphasis to power as a mission-critical problem to solve," making this talk particularly relevant. #acm #techtalk #gpu #ai #generativeai #nvidia #energy #energycrisis