Sign in to view George’s full profile
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
New York, New York, United States
Contact Info
Sign in to view George’s full profile
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
16K followers
500+ connections
Sign in to view George’s full profile
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
View mutual connections with George
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
View mutual connections with George
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
Sign in to view George’s full profile
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
About
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
Activity
Sign in to view George’s full profile
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
or
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
-
Grateful to those who let me crash on their couches or in their closets.
Grateful to those who let me crash on their couches or in their closets.
Shared by George Sivulka
-
AI is disrupting finance, law, and professional services. Live on Bloomberg TV: https://lnkd.in/edTRjV6r
AI is disrupting finance, law, and professional services. Live on Bloomberg TV: https://lnkd.in/edTRjV6r
Shared by George Sivulka
Experience & Education
-
Hebbia.AI
******* & ***
-
******** **********
****** ** ********** - *** ********** ***********
-
******** **********
****** ** ******* - ** ******* *******
View George’s full experience
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
Welcome back
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
View George’s full profile
Sign in
Stay updated on your professional world
By clicking Continue to join or sign in, you agree to LinkedIn’s User Agreement, Privacy Policy, and Cookie Policy.
New to LinkedIn? Join now
Other similar profiles
-
Seyed Sajjadi 🧠🚀
Los Angeles, CAConnect -
Denise Clements
Los Angeles, CAConnect -
Rob Nielsen
Newport Beach, CAConnect -
Kylie Rowe
Reno, NVConnect -
Snigdha Sur
New York, NYConnect -
David Grossman
Chicago, ILConnect -
Pete Winiarski
Greater HartfordConnect -
Shubh Karan Singh
New York, NYConnect -
Patrick McKenna
Los Angeles, CAConnect -
Monica Poulard Hawkins
Washington DC-Baltimore AreaConnect -
Damon Clark
Chicago, ILConnect -
Dr. Ichak Adizes
Santa Barbara County, CAConnect -
Natalie Gingrich
San Antonio, Texas Metropolitan AreaConnect -
Alex Rosman
Miami, FLConnect -
Bruce Tulgan
New Haven, CTConnect -
Donald McKenzie Jr.
Greenwich, CTConnect -
Neil Heird, MBA
Digital Marketing Strategist | Growth Marketing
Greater Tampa Bay AreaConnect -
Paul Roetzer
Cleveland, OHConnect -
Ralph Ayala
St Augustine, FLConnect -
Randy M. Long, J.D., CFP®
Greater Wilmington AreaConnect
Explore more posts
-
Adam Lazovski
The promise(s) of #DeepTech startups are appealing to many investors... But do they provide superior returns? To examine this thesis, we took a deep dive into Intel Ignite’s portfolio (Israel), known for its #DeepTech affiliation in the last 4 years. *Including the latest batch (#9). 56 companies (+2 in stealth)... Ready? Let’s go! Most recurring investors? Kmehin, NFX, toDay.Ventures, TLV Partners. 2 companies are inactive and 7 companies have been acquired (nice Eli!), some by other Israeli companies like Check Point Software & Snyk (Atmosec (Acquired by Check Point), Helios (part of Snyk). Pretty incredible stat considering the short time most these companies have been around. *Excluding them from my analysis going forward. Company sizes by headcount: 🥚 Small (0-15) - 23, 48.94%. 🐣 Medium (16 - 35) - 16, 34.04%. 🐥 Large (36+) - 8, 17.02%. Fastest growing companies YoY in employees for each category: 🥇Filo Systems (Small). 🥇Pelanor (Medium). 🥇Exodigo (Large). 💵 Zesty leads the list with the most funding to-date, totaling $117M raised, just a tad above Exodigo with $116M. 💚 18 companies, 38.3%, disclosed funding in the last 12 months. 💛 16 companies, 34%, haven’t disclosed new funding in 12-24 months. 🧡 13 companies, 27.7 haven’t disclosed new funding in 24+ months. Honestly, this time most of the portcos are stacked with fresh funding compared to other portfolios I review. Impressive given that the local market is just starting to pick up again. ARR (beta, working with the ‘lower end’ estimation): 📈 4 companies are at $5M+ / ARR 📊 8 companies are at $2M+ / ARR 💹 10 companies are at $1M+ / ARR 🏗 25 companies are either below $1M ARR or have no estimate at this point. Want to know who? Perform your own analysis? Check out the free full list through Dealigence in the link here -> https://lnkd.in/dh58eXjt If you liked this post, kindly click follow 🔔 and send it to friends who would find this interesting 😋? P.S - No one at Intel Ignite asked me to do this. I simply heard great things from David & Ohad, both program alums from LayerX Security and Hyperspace and decided to quench my curiosity 🙂. P.S #2 - Both David & Ohad lead incredible companies and you should definitely check them out. Don't take my word for it, see the data yourself in the platform. Gil, Yaniv 😎
67
21 Comments -
Chinar Movsisyan
💡 Why do we need so many evaluation tools for LLMs? As engineers, we build 'production-ready' LLM products using these metrics. But what happens next? How do we maintain control and ensure reliability? At Feedback Intelligence, we’ve crafted a cookbook to keep your LLMs reliable and aligned with user expectations. 🍲 📖 Give it a read and let’s chat!
16
-
Ruchir Patwa
Ever wondered how fine-tuning impacts the safety of your LLMs and applications? Find out below... I have been having a lot of conversations with folks who are building with LLMs, and one of the most common topics that comes up is the impact of fine-tuning on the safety and security of an LLM. Sometimes it comes up as a question and sometimes as a myth that fine-tuning makes my model "more secure" because it will only answer based on the fine-tuning data set. We at SydeLabs will soon be publishing a more detailed report around this, but here is a very quick experiment we did with one of the most popular LLMs. We fine-tuned it on a fairly small dataset (~1000 rows of data). The data involved simple questions and answers with no toxic or unsafe content. Here are the results of the responses we got for a question before and after fine-tuning the model.
64
-
Jeffrey Paine
Vinod's thoughts on AI: Key Takeaway 1: The impact of AI on various industries is expected to be highly deflationary. As AI automates many tasks and reduces labor costs, prices for goods and services are likely to decrease significantly across sectors like healthcare, content creation, and professional services. Key Takeaway 2: There are still opportunities for AI startups, particularly in niche applications and specialized domains. While large language models may become commoditized, there is potential for domain-specific AI solutions that integrate workflow knowledge and industry expertise. Key Takeaway 3: The cost of using AI models is expected to decrease dramatically in the near future due to competition between major players. This will likely make AI more accessible and ubiquitous, though training costs remain high. Key Takeaway 4: Apple's recent AI announcements demonstrate the company's ability to integrate technology, platforms, and products. This positions Apple well to bring small on-device AI models to consumers and developers at scale. Key Takeaway 5: The upcoming U.S. presidential election has significant implications for the tech industry and AI development. There are differing views in Silicon Valley on regulation and policy approaches, with some favoring less regulation while others are concerned about ethical issues. https://lnkd.in/gG9hSuUy
11
-
Do Yeoun Lee
💥Google's new Med-Gemini for healthcare outperforming GPT-4 Med-Gemini is a family of multimodal AI models built upon their Gemini models, specifically designed for the healthcare industry. From what I've read, these models are built on Google's existing Gemini AI architecture, but they've been specifically designed and trained for medical and clinical applications. It can process long-form medical records and research papers, and it can output text, images and videos to help explain things. Google is claiming that Med-Gemini has even outperformed OpenAI's powerful GPT-4 model on certain medical reasoning tasks, which is pretty impressive. On standard medical benchmarks like the MedQA exam, the Med-Gemini-L 1.0 model has scored really high, beating out even Google's own previous Med-PaLM 2 model. One cool feature is that Med-Gemini integrates web search, so it can pull in reliable information from the internet to provide more nuanced and factual responses. The models aren't publicly available yet - Google is still working on improving them and making sure they adhere to responsible AI principles around privacy and fairness before they release them. But from what I've seen, Med-Gemini is shaping up to be a really powerful set of medical AI tools.
1
-
Gates Torrey
AI-powered web agents fail. A lot. Why? Because they are inherently probabilistic tools operating in a big, complicated environment that changes all the time – the internet. At High Dimensional Research, our mission is to transform web agents from an unreliable, expensive pipedream for AI specialists into an efficient, production-ready tool for every full-stack developer. Today, we took a big step towards that goal by launching the Memory Index. This functions as a repository of user-anonymized web trajectories -- storing page structures as a graph, and navigation across those structures as searchable data. What does that mean? Well, basically, we observe the agents using our platform and transform their patterns of activity into a roadmap for the internet. The more people who use it, the more detailed and useful the map becomes. This way, an AI never has to figure out how to solve a problem that has already been solved. We just give them the answer. This radically reduces the failure rate, cost, and carbon footprint of running an AI agent. Check out our launch announcement and sign up at hdr.is/memory. https://lnkd.in/ePmCcScM
3
-
Rob Pickering
Another super interesting entrant to the field of real time multimedia access to LLMs. The bar to just building stuff gets lower every week at the moment. Real time media companies like Daily are building out to meet LLMs in their historically synchronous turn based world, and at the same time AI companies like OpenAI and Fixie.ai are putting real time media taps directly onto their models. All this can only benefit application builders, enabling better machine conversations at a lower dev cost.
7
-
Baris Aksoy
NVIDIA's Jensen said "ChatGPT democratized computing, Llama2 democratized generative AI" ...and now Llama3 is the next level 🔥 It's fascinating to watch Meta's strategic moves. With Llama3, they prioritized training on a massive 15T token dataset to pack all into a lean 70B param model, instead of building a massive model. This allows Llama 3 to match trillion+ parameter models like GPT-4, but at 1/10th the compute, storage and inference costs! 💰 This technique was published by Google DeepMind a few years ago https://lnkd.in/gx7VU8aA Meta is not a dark horse in AI anymore. They might be the top dog. https://lnkd.in/gRRJYuxt #llama2 #llama3 #llm #chatgpt #gpt4 #ai #ml
16
5 Comments -
Matt Rappaport
The Tesla Transport Protocol (TTP) appears to be an essential part of Tesla's AI stack. In an article on X titled, "TTP: The hidden powerhouse behind AI and self-driving" (linked below), patent researcher, Seti Park lays out several interesting points: - TTP is a streamlined, hardware only approach to network protocol operation as opposed to traditional network protocols like TCP/IP which rely mostly on software. - TTP allows for near-instantaneous data transfer between AI components. - TTP scales easily as it is adaptable to different AI hardware configurations. Ultimately, Park writes, TTP: 1. Reduces communication latency which could significantly speed up AI model training. 2. Improves data transfer time between sensors, processors, and actuators means improved reaction times for self-driving systems. 3. Enables more efficient distribution of AI workloads across different chips. Park argues that Tesla's integrated approach - combining custom hardware with proprietary communication protocols like TTP - creates a technological moat that's hard to cross. TTP optimizes how its AI components talk to each other, Tesla is laying the groundwork for more advanced, more responsive, and safer autonomous systems. See a link to the patent here: https://lnkd.in/gSzpig8Q? #FrontierTechnology #SelfDriving #FutureFrontier
5
-
Vivian Chan, PhD
I'm currently putting together a newsletter (vivianchan.substack.com) to share the latest startup and VC jobs in the Deeptech space. Quite a few people have asked me for intros or to tap into my network, so I thought I’d collate everything and share it with everyone. If you or your portfolio companies are hiring, let me know and I'll include it in the next edition! 🙌 Examples of available roles: AI Research Engineer, FutureHouse (AI, Non-profit research org backed by Eric Schmidt), San Francisco, US Head of Operations - AimHi Earth (Sustainability, Seed), Remote Clinical Laboratory Scientist - Juno Bio (Bioinformatics / Microbiomes, Seed), Oakland, California, US Deeptech/Climate Investor - Founders Factory (VC), London, UK or NY, US Bioinformatics Consultant - Deep Science Ventures (VC), London, UK VP Finance - Space Capital (VC), NY, US 📩 More in the first edition: vivianchan.substack.com #Deeptech #Startups #VentureCapital #JobOpportunities #Networking #TechJobs #Innovation #Entrepreneurship #Hiring #Jobs #CareerGrowth
25
5 Comments -
Meredith Hobik
Data scientists are not trained to understand debits versus credits, CAPEX versus OPEX, accruals, revenue recognition rules, etc. FP&A professionals are not (initially) trained as data scientists. If they are, as Kurt Shintaffer states, they are torching money and time building it themselves. And what happens when the few individuals at your company with this skill set move on? ‼️🤷♀️ Check out what Sandeep Madduri and Naresh Nemali built for finance professionals. Precanto is what this generation of finance professionals need. Monetize the data … Transformation starts now! 🚀 (Disclaimer: This bold post is coming to you from recovery room while on pain meds. All is good and colon repaired. Please eat your fiber, folks, and listen to your body! But in all seriousness, Precanto is the next wave of innovation for CFOs and FP&A teams.)
27
7 Comments -
Mohammad Nouman Khan
Free event: Top 10 AI Use Cases with Dan Shipper. Every founder and CEO Dan Shipper joins Section to reveal his top 10 AI use cases from hundreds of executive interviews. RSVP here.* Neuralink hosted a livestream with new updates on its BCI, including potential telepathic and limb-controlling integrations with Tesla’s Optimus humanoid robots, new patient testing, and more. OpenAI CTO Mira Murati revealed in an interview that the company’s Sora video generation model still has no public release date, with the tool still undergoing safety red teaming and testing for usefulness. The Washington Post introduced ‘Climate Answers’, a new AI climate chatbot that answers user questions from the outlet’s own reporting. AMD is acquiring Silo AI, a Finnish startup specializing in end-to-end-AI solutions, for $665M — aiming to bolster its chip development and deployment to compete with rival Nvidia. OpenAI announced a partnership with Los Alamos National Laboratory to evaluate the safe use of models like GPT-4o in bioscience lab settings and assess AI’s potential to enhance research while reducing risks. Scale is partnering with AWS to boost enterprise and government adoption of genAI, offering customization tools and secure platforms to leverage and deploy the tech effectively.
1
-
Chia Jeng Yang
Excited to announce another major upgrade to WhyHow.AI’s Knowledge Graph SDK - tying vector chunks to graph nodes automatically, for a more deterministic and richer context window. Check out how we do it, why we did it, and an example benchmark of the increased completeness of the answer. Tired of just returning single-word triples from your knowledge graph? WhyHow.AI’s latest upgrade with vector chunk linking now lets you use a graph structure to determine which raw vector chunks to return to the context window, combining the best of knowledge graphs and vector search. “While the triples in a Knowledge Graph are useful in providing specific information that semantic similarity was unable to retrieve, we wanted to also allow leeway in the information represented and retrieved from the graph, to include the surrounding words and retrieving the relevant raw vector chunk tied to that graph node as well. By tying vector chunks to a knowledge graph, we get the advantages that lie in both vector and graph search.” - WhyHow.AI Design Partner WhyHow.AI builds workflow tools for data orchestration, and graph creation, and we work on top of any data extraction model you want to bring. In this case, we work on top of OpenAI, Neo4j, and Pinecone, and will be supporting the most popular data extraction models, LLMs, graph and vector databases. https://lnkd.in/eEJdUNPi
68
3 Comments -
Bharat Khatri
Anthropic just released its new multimodal AI model: Claude 3.5 Sonnet. The benchmarks sure need a refresh as AI models keep improving at breakneck speed. But more importantly, each benchmark represents a real-world application, so it's crucial to steer clear of thin wrappers over any benchmarked capability. #claude #anthropic #chatgpt #gpt #openai
5
1 Comment -
Daniel Kornev
Super proud of Mike and his team. Many people asked me over the last year why long context and reasoning are so important. My answer is simple: long context is, in other words, a much better memory that allows LLM to utilize its ability to comprehend incoming text fully which as we all know is a million miles better than RAG. A very clear use case is with processing huge documentation like corporate wiki. Now, the deal with reasoning over long context is also simple: if models can get an ability to process long context but do that poorly then what's the value is in doing? And that's why a proper benchmark like BABILong is so desperately needed. It helps you to open your eyes on how inefficent the majority of the modern methods for processing long context by LLMs are. It helps you to understand what the real goal is. Let's make LLMs understand long context better together!
18
2 Comments -
Baris Aksoy
The best things come in small packages 👝 🚨 Pay attention to the rise of Small Language Models (SLMs)! They will become important going forward: 1️⃣ Efficiency: SLMs achieve comparable or superior performance to LLMs with fewer parameters. Microsoft's new Phi-3 series outperforms larger models on benchmarks like MMLU & MT-bench (Google: https://lnkd.in/gn5zYZHg Microsoft: https://lnkd.in/ghK8fm7V Meta: https://lnkd.in/gkbM7pGs) 2️⃣ Specialization: Trained on curated, high-quality datasets. With 3.8B-14B parameters & advanced datasets, these transformer decoder models match GPT-3.5's capabilities in a compact form. Accurate benchmarking/eval will be important. 3️⃣ Interpretability: Neurons in SLMs are easier to analyze, improving model transparency. 4️⃣ Cost-effective: Less resource-intensive (vs training large models) I have a chance to meet a few exciting startups here. Reach out if you are working on this space 🙋🏻♂️ #startups #llm #slm #smalllanguagemodels #phi3 #largelanguagemodels #gpt4 #google #gemma #llama3 #ai #ml #edgeai
24
4 Comments -
Kellen Croston
Did a cool talk today for a big public entity. If anyone wants to bring me in to demystify ML tech and explain how to actually prompt and train models - hit me up. It was fun. (Not trying to sell anything, actually trying to talk entities into running free, open-source, locally run models instead of dumb one size fits all enterprise SaaS solutions with data security concerns.) I will have a facsimile video presentation up in a couple days.
2
2 Comments -
Pierre-Louis Biojout
📊 phospho first technical paper is out! We are releasing Intent Embed, an intention embedding model for text messages! 🤖💬 Intent Embed is designed to capture and represent user intention within dense embedding vectors. Unlike traditional text embedding models that focus on semantic or syntactic aspects, Intent Embed zeroes in on the underlying user intent, making it a powerful tool for developers and machine learning engineers working on LLM-based applications. Key highlights from our report: ✅ Demonstrated effectiveness in capturing user intent from complex inputs 📈 Superior performance compared to industry-standard models like OpenAI's text-embedding-3-small 🌐 Generates 1536-dimensional vectors, ensuring compatibility with existing vector infrastructure 🛡 Robust to noise, typing errors and adversarial prompts 🔍 Potential use cases include user request classification, out-of-topic exclusion, and user message analysis 🌟 We believe Intent Embed has the potential to improve how developers and ML engineers approach user-centric applications, contributing to more effective and intuitive LLM interactions. 🤫 We already use it extensively for our user intent clustering in the phospho platform. Of course, you can access it via the phospho API (link in the comments). 📰 Read the full technical report below!
147
18 Comments -
Frank Lee
I’ve been keeping a running list on things I wish I knew or did while working in product before transitioning to founding Inari (YC S23) and here's the 2nd part! Note: this pic is me on our 1st day building in SF for YC. Smiling through how fun all the incorporation paperwork was 😂 Mastering growth and distribution 💌 I never truly internalized the adage “first time founders obsess over product and second time founders obsess over distribution” until we launched Inari for the first time last year and initially heard crickets. I hadn’t worked on growth in any of my prior roles and now I probably spend 60%+ of my capacity catching up, learning basic tactics, and experimenting with new channels for Inari. I think we could be much farther along if I had mastered in-product growth and activation tactics, had reps setting up outbound and SEO systems, and built a community and audience much sooner (I also never realized how fun engaging with people online could be 😄). No excuses not being “technical” in 2024 💻 If you’re considering working on a SaaS startup and you can’t code, just go and learn to. With the recent advances in LLMs and coding assistants, there’s literally never been a better time to learn to code, unblock yourself when you run into issues, and build an MVP. There’s nobody on the planet that will care about your product as much as you will. If you’re unable to be deep in the details, identify defects, and make fixes quickly yourself, then the overall clock speed and quality bar of your product will suffer. The quality expectations for ALL startups have increased over the last few years and it’s on you as a founder to keep that rigor and velocity extremely high, so being able to unblock yourself makes a world of difference. Managing your own psychology 🧠 Surprisingly, the toughest part when transitioning to founding was managing my own mental state. There’s a constant internal uncertainty on whether you’re building something that people actually want and spending time on the right priorities. Your view on finances also completely changes. Money used to be a resource I spent on fun discretionary spend but now money feels like “oxygen”. The stakes of each day feels much heavier when you feel like you’re running out of oxygen for your startup which you're pouring your heart into. Overall, be prepared for these internal influences before making the jump! That could be better carving out the right runway, validating your idea or building an MVP before leaving your job, or bumping up your skills around building and selling beforehand.
80
7 Comments
Explore collaborative articles
We’re unlocking community knowledge in a new way. Experts add insights directly into each article, started with the help of AI.
Explore MoreOthers named George Sivulka
1 other named George Sivulka is on LinkedIn
See others named George Sivulka