In the spirit of sharing more code, I have posted an example of computer-generated hologram rendering in C++. This computationally expensive process sums the RGB light/wavefront contribution of every point in a scene at every pixel to construct holographic interference patterns. The example loads a point cloud as the source model and generates rows of binary float images. I have not yet verified the results with a true holographic projection system. The GitHub repo is https://lnkd.in/gPkjZZy6.
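The repo itself is C++, but the core accumulation is simple enough to sketch. Below is a hedged Python/numpy illustration of the general technique (the function and parameter names are mine, not the repo's): every scene point contributes a spherical wave exp(ikr)/r at each hologram pixel, the complex field is summed, and interference with a reference beam gives the intensity pattern. An RGB hologram repeats this once per channel with that channel's wavelength.

import numpy as np

# Illustrative point-source CGH sketch (hypothetical names, not the repo's API).
# Each scene point emits a spherical wave; every pixel sums the complex contributions.
def hologram_plane(points, amplitudes, wavelength, width, height, pitch):
    k = 2 * np.pi / wavelength                     # wavenumber for this color channel
    xs = (np.arange(width) - width / 2) * pitch    # pixel x coordinates (meters)
    ys = (np.arange(height) - height / 2) * pitch  # pixel y coordinates (meters)
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros((height, width), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)  # point-to-pixel distance
        field += a * np.exp(1j * k * r) / r                   # spherical wavefront
    return np.abs(field + 1.0) ** 2  # interfere with a unit on-axis reference beam

# One point 0.2 m behind a 512x512 plane, 8 um pixel pitch, 532 nm (green) light:
pattern = hologram_plane([(0.0, 0.0, 0.2)], [1.0], 532e-9, 512, 512, 8e-6)

A single on-axis point produces the classic Fresnel zone-plate rings; the C++ version's job is doing this sum fast for many points and pixels.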
-
We here at Fermyon have been SUPER stoked about all the developments with the #WebAssembly Component Model! Watch Sohan Maheshwar's latest Cloud Native Computing Foundation (CNCF) webinar, where he explains how to get started with "The next evolution of WebAssembly - the component model" 🧑🏫 https://bit.ly/48bCM0x
-
Azure Chaos Studio is an Azure service that helps you measure, understand, and build application and service resilience to real-world incidents, such as a region going down or an application failure causing 100% CPU usage on a VM. With Chaos Studio, you can run chaos engineering experiments that inject faults against your service and then monitor how the service responds to the disruption. Chaos experiments help you validate architectural choices and improve service reliability. They can be run ad hoc for manual BCDR drills and Game Days, or as part of your CI/CD pipeline to programmatically gate code flow. https://lnkd.in/gV4E8SRg #microsoft #azure #learn #cloud
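As a minimal sketch of the CI/CD angle, starting an existing experiment programmatically goes through the ARM start endpoint. The resource names below are placeholders and the api-version is an assumption, so verify both against the linked docs:

from azure.identity import DefaultAzureCredential
import requests

# Hedged sketch: kick off a Chaos Studio experiment via the ARM REST API.
credential = DefaultAzureCredential()
token = credential.get_token("https://management.azure.com/.default").token
url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.Chaos"
    "/experiments/<experiment-name>/start?api-version=2024-01-01"  # api-version assumed
)
resp = requests.post(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()  # an accepted response means fault injection has begun

From there you poll the experiment's status and fail the pipeline stage if your health checks degrade.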
Azure Chaos Studio documentation - tutorials, API reference
-
Find the main colors in an image by implementing k-means clustering with the Accelerate framework in SwiftUI... Great sample code by Apple! And one more thing: the code also includes a 3D point-cloud visualization of the color distribution, built with SceneKit. https://lnkd.in/dBUdH_gN #scenekit #3D #pointcloud #swiftui #swift #accelerate #algorithm
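Apple's sample is Swift + Accelerate; as a language-neutral illustration of the same algorithm, here is a hedged numpy sketch that treats each pixel as a point in RGB space and alternates the two k-means steps (assign each pixel to its nearest centroid, then move each centroid to the mean of its pixels):

import numpy as np
from PIL import Image

def dominant_colors(path, k=5, iters=20, seed=0):
    # Flatten the image to an (N, 3) array of RGB points.
    pixels = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64).reshape(-1, 3)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), k, replace=False)]  # random initial centroids
    for _ in range(iters):
        # Assignment step: index of the nearest centroid for every pixel.
        labels = np.argmin(((pixels[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # Update step: move each centroid to the mean of its assigned pixels.
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers.astype(np.uint8)  # k rows of [R, G, B]

print(dominant_colors("photo.jpg"))  # e.g. the five dominant colors of photo.jpg

The Accelerate version vectorizes the same distance computations with SIMD; the structure of the loop is identical.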
Calculating the dominant colors in an image | Apple Developer Documentation
-
Hey everyone! Wanted to share a project that I've been working on for buildspace nights and weekends - SplitCompute! tl;dr - The aim of the project is to help reduce cloud GPU inference costs by *partially* offloading compute from the cloud to the end user's hardware. Context - For inference at scale, one can either run models fully on cloud GPUs (through providers like Azure) or use projects like llama.cpp and WebLLM to run them fully client side (edge). 1. The former is the most popular choice due to ease of use; also, the end user might not always have the hardware necessary to run the computations. 2. The latter is often not feasible because the models are hard to transport to the client side in full, as they can be tens of GBs in size. This project explores whether there is a middle ground, where only a subset of the model weights needs to be streamed over (on the order of ~10-100s of MBs) to capture some percentage of cloud savings with no difference in user experience. It works by splitting the model into contiguous subsets of layers and executing them partially on the cloud and partially on the edge, as sketched below. For tensor acceleration in the browser, it uses WebGPU shaders. More info in the project README - https://lnkd.in/gfJ48xp2 - an arXiv preprint will be up soon! Work heavily inspired by Georgi Gerganov's ggml.
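As a hedged illustration of the split (not the project's actual code), here is what partitioning a model into a cloud half and an edge half looks like in plain PyTorch; in SplitCompute the edge half would run in the browser on WebGPU instead:

import torch
import torch.nn as nn

# Stand-in model: 12 identical layers. Layers [0, split) stay on the cloud;
# only the tail layers' weights (small) are streamed to the client.
model = nn.Sequential(*[nn.Linear(512, 512) for _ in range(12)])
split = 10
cloud_half = model[:split]
edge_half = model[split:]  # these weights ship to the edge, ~MBs not GBs

x = torch.randn(1, 512)
with torch.no_grad():
    activations = cloud_half(x)  # runs on the cloud GPU
    # Only the activations cross the network; the client finishes locally.
    y = edge_half(activations)

The cost/latency trade-off is then just a choice of the split index: more edge layers means more weights streamed once but fewer cloud FLOPs per request.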
-
🚀 Starting the week strong on our Mistral-on-AWS repo: new release, new notebooks, and new recipes, all for you, lovely rascals! But what is in the box? 🥳 Introducing Mistral Large 2: everything you wanted to know about this new frontier model. 🥷 Running 8x7B using NVIDIA NIM on SageMaker: can't get more optimal than that! 🎯 Fine-tune 8x7B using QLoRA: make this MoE your own! Just because we released Mistral Large 2 on Amazon Bedrock last week, it does not mean that folks like Niithiyn Vijeaswaran, Armando Diaz, and Preston Tuggle ever stop! Thank you for making sure we get to play with the latest toys. Go check it out: https://lnkd.in/gAT4Cmy9 #GenerativeAI #FoundationModels #FrontierModels #GenAI #MachineLearning #Innovation #NVIDIA #AWS #Mistral
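For anyone who wants the one-cell version before opening the notebooks, a minimal sketch of calling Mistral Large 2 on Bedrock with boto3's Converse API looks like this (the model ID and region are my assumptions; confirm both in your Bedrock console):

import boto3

# Hedged sketch: single-turn chat against Mistral Large 2 on Amazon Bedrock.
client = boto3.client("bedrock-runtime", region_name="us-west-2")
response = client.converse(
    modelId="mistral.mistral-large-2407-v1:0",  # assumed Bedrock model ID
    messages=[{"role": "user", "content": [{"text": "But what is in the box?"}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.7},
)
print(response["output"]["message"]["content"][0]["text"])

The notebooks in the repo cover the fuller stories: NIM on SageMaker for throughput, and QLoRA for fine-tuning.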
GitHub - aws-samples/mistral-on-aws: Mistral on AWS examples for Bedrock & SageMaker
-
Feels like we're doing this every other day now, but such is the landscape of generative AI. Check out our Mistral-on-AWS (Amazon Web Services) repo's latest set of notebooks from Preston Tuggle, Armando Diaz, Aman Shanbhag, and me. They cover how to get started using NVIDIA NIM for inference with Mixtral 8x7B on SageMaker, release highlights from last week's Mistral Large 2 launch on Amazon Bedrock, and, last but not least, fine-tuning Mixtral 8x7B with QLoRA! #mistral #AWS #Sagemaker
-
All the resources you need to get started building powerful apps with the slew of advanced model capabilities in Mistral Large 2, plus cost-effective inference optimization with NVIDIA NIM containers for Mistral models.
-
📣 🏃♀️Runhouse🏠 0.0.12 📣 This one is a doozy. We're excited to release a gaggle of new features in Runhouse OSS, including rh.Module (inspired by PyTorch's nn.Module), streaming, async, and much more. 👇
1. Rearchitected how we use Ray 🏹:
🏇 Dropped round-trip latency by hundreds of ms
🏞️ Improved environment flexibility within programs
📍 Improved pinning to GPU memory
Remote functions are lightning fast, like your cloud GPU is attached to your laptop 👩💻🏭. If you use Ray and are interested in hearing more, please don't hesitate to reach out!
2. ⛲️ Streaming and async: we heard your demands. Remote functions can now be async or generators, with streaming built in. This is a game changer for LLM applications: you don't need to lift a finger to stream tokens back from inference 🪙
3. 🦸♀️ Introducing rh.Module, a new superpower: send classes to the cluster to call methods on them, or compose remote stateful services with zero boilerplate (see the sketch below). Magically, you don't need to modify your existing code. Note that this isn't just serving an API based on a function, like Next.js or FastAPI; it's creating your service from scratch in your own infra or cloud account.
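To make point 3 concrete, here is a hedged sketch of the rh.Module pattern described above. The constructor and .to() details are assumptions inferred from the post, so check the Runhouse docs for your version:

import runhouse as rh

# Hedged sketch: a stateful class sent to a cluster, methods called remotely.
class Counter(rh.Module):
    def __init__(self):
        super().__init__()
        self.count = 0

    def increment(self, by=1):
        self.count += by  # state lives on the cluster between calls
        return self.count

gpu = rh.cluster(name="rh-a10x", instance_type="A10G:1")  # assumed cluster API
remote_counter = Counter().to(gpu)  # the class, not the caller, moves to the cluster
print(remote_counter.increment(3))  # executes remotely and returns 3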
-
Finalized changes to the bash script that creates BigEarthNet RGB composite images. It's not perfect, but it works. Diff here: https://lnkd.in/eGum2H3j
Step 1: Read the test set CSV file and iterate over it
Step 2: Create a virtual dataset from Sentinel-2 bands 4/3/2 (RGB)
Step 3: Convert the virtual dataset to a cloud-optimized GeoTIFF
Step 4: Reproject the cloud-optimized GeoTIFF to WGS84, since the patches are all in different UTM zones
Step 5: Create a GeoJSON footprint (these geometries need to be loaded into PostGIS along with vector embeddings)
Step 6: Clean up
Running this multithreaded over 12 of 16 cores, the process took 8.5 hours for ~125k images of roughly 160x80 pixels. The images are going to the S3 bucket. Still need to figure out how to parse these GeoJSONs into WKT/WKB and load them into PostGIS. There are a few ways to skin this cat...
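For reference, a hedged Python translation of steps 2-4 for a single patch (the actual script is bash, and the band file names here are hypothetical):

import subprocess

def make_rgb_cog(patch, band_files):
    vrt, cog, wgs84 = f"{patch}.vrt", f"{patch}_rgb.tif", f"{patch}_rgb_wgs84.tif"
    # Step 2: stack bands 4/3/2 as separate channels in a virtual RGB dataset.
    subprocess.run(["gdalbuildvrt", "-separate", vrt, *band_files], check=True)
    # Step 3: materialize the VRT as a cloud-optimized GeoTIFF.
    subprocess.run(["gdal_translate", "-of", "COG", vrt, cog], check=True)
    # Step 4: reproject out of the patch's UTM zone into WGS84.
    subprocess.run(["gdalwarp", "-t_srs", "EPSG:4326", cog, wgs84], check=True)
    return wgs84

make_rgb_cog("patch0", ["patch0_B04.tif", "patch0_B03.tif", "patch0_B02.tif"])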
-
NVIDIA Triton Inference Server is a powerful tool for deploying machine learning models in production environments and is commonly run on Kubernetes. Learn how NGINX Plus Ingress Controller can provide secure external access, as well as load balancing, to a Kubernetes-hosted NVIDIA Triton Inference Server cluster!
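As a minimal, hedged sketch of the Kubernetes side (the Service name, host, and ingress class are placeholders, and the linked article covers the NGINX Plus specifics well beyond this), an Ingress exposing Triton's HTTP port via the Python Kubernetes client could look like:

from kubernetes import client, config

config.load_kube_config()
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="triton-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",  # NGINX (Plus) Ingress Controller
        rules=[client.V1IngressRule(
            host="triton.example.com",  # placeholder hostname
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/",
                path_type="Prefix",
                backend=client.V1IngressBackend(service=client.V1IngressServiceBackend(
                    name="triton-svc",  # hypothetical Triton Service name
                    port=client.V1ServiceBackendPort(number=8000),  # Triton's HTTP/REST port
                )),
            )]),
        )],
    ),
)
client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)

TLS termination and gRPC (port 8001) routing are where the NGINX Plus configuration in the article earns its keep.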
How I did it - "Securing Nvidia Triton Inference Server with NGINX Plus Ingress Controller" | DevCentral