In the final part of the series on Acorn, we explore how to move applications from a local development environment to a production environment running in the AWS cloud → Read More
Acorn simplifies the definition of multicontainer workloads while translating them into the corresponding Kubernetes objects. → Read More
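As an illustration of that simplicity, a minimal Acornfile for a hypothetical nginx web service (illustrative, not taken from the linked article) might look like:

```
containers: {
    // "web" is a hypothetical service name; Acorn turns this block
    // into a Deployment, Service, and (for published ports) an Ingress.
    web: {
        image: "nginx"
        ports: publish: "80/http"
    }
}
```

A single `acorn run .` against a file like this produces the underlying Kubernetes objects without the user writing any YAML manifests.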
What fascinates me about Acorn is its simplicity and portability. It treats Kubernetes as an ideal runtime environment for deploying, scaling, and running applications without any assumptions. → Read More
Implementing Zero Trust is not an easy task for enterprise IT. Some of the challenges include the sprawl of user identities, the proliferation of internal applications, reliance on third-party SaaS applications, and the rise of hybrid, multi-cloud, and edge architectures. → Read More
This tutorial will show you how to install Ondat Community Edition on an on-prem cluster running on bare metal. → Read More
This tutorial will walk you through the steps involved in performing real-time object detection with DeepStream SDK running on Jetson AGX Orin. → Read More
In this tutorial, we will configure and deploy Nvidia Triton Inference Server on Jetson Mate to perform inference of computer vision models. → Read More
A project to build a powerful AI inference engine based on 4X NVIDIA Jetson Nano modules, SeeedStudio’s Jetson Mate Mini, NVIDIA Jetpack 4.6, and the NVIDIA Triton Inference Server. → Read More
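Triton serves each model from a model repository directory containing a `config.pbtxt`. A hedged sketch for a hypothetical image-classification model (the name, platform, and dimensions are illustrative, not taken from the project above):

```
name: "resnet50"
platform: "tensorrt_plan"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Triton scans the repository at startup and exposes every valid model over its HTTP and gRPC inference endpoints.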
This tutorial will walk you through all the steps required to install and configure KServe on a Google Kubernetes Engine cluster powered by Nvidia T4 GPUs. → Read More
KServe is collaboratively developed by Google, IBM, Bloomberg, NVIDIA, and Seldon as an open source, cloud native model server for Kubernetes. → Read More
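KServe exposes models through an `InferenceService` custom resource; a minimal sketch, using the sklearn sample model from the KServe documentation (not specific to the tutorials above):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      # Sample model hosted by the KServe project
      storageUri: "gs://kfserving-examples/models/sklearn/1.0/model"
```

Applying this manifest is all it takes for KServe to pull the model, stand up a model server, and expose a prediction endpoint.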
A common misconception is that deploying models is as simple as wrapping them in a Flask or Django API layer and exposing them through a REST endpoint. Unfortunately, this is not the most scalable or efficient approach to operationalizing ML models. → Read More
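To make the naive pattern concrete, here is a minimal sketch using only the Python standard library (a Flask version would look similar); the `predict` function is a hypothetical stand-in for a real trained model:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Dummy linear scoring; stands in for model.predict() on a real model
    return sum(features) * 0.5

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON payload and run it through the "model"
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Serve on an ephemeral port in a background thread
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())
print(result)
server.shutdown()
```

This works for a demo, but it is exactly what the article cautions against: no batching, no model versioning, no autoscaling, and no GPU scheduling, which is where purpose-built model servers come in.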
The objective of this guide is to highlight how Modin and Scikit-learn extensions are a drop-in replacement for stock Pandas and Scikit-learn libraries. → Read More
Let's take a closer look at Intel Distribution of Modin and Intel Extension for Scikit-learn. → Read More
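The "drop-in replacement" claim means only the import line changes; a minimal sketch (with a fallback to stock Pandas when Modin is not installed, so the same code runs either way):

```python
# Drop-in replacement pattern: only the import changes.
try:
    import modin.pandas as pd  # parallelized, Pandas-compatible API
except ImportError:
    import pandas as pd        # stock Pandas; the code below is identical

df = pd.DataFrame({"x": [1, 2, 3], "y": [4, 5, 6]})
total = int(df["x"].sum())
print(total)
```

Intel Extension for Scikit-learn follows the same idea: calling `sklearnex.patch_sklearn()` before importing estimators re-routes supported scikit-learn algorithms to optimized implementations without changing the modeling code.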
An introduction to the VMware Tanzu Application Platform. → Read More
A step-by-step guide to using FluxCD with DigitalOcean Kubernetes. → Read More
Multi-modal AI and large language models are among the breakthroughs that will keep AI moving ahead in 2022. → Read More
WebAssembly, GitOps, and managed Kubernetes are among the trends we expect to see in cloud native computing in 2022. → Read More
In this post, we will do a hands-on evaluation of Amazon SageMaker Canvas. Follow along to train a logistic regression model. → Read More
This tutorial shows how to publish serverless inference endpoints for TensorFlow models. → Read More
Launched at the company’s re:Invent 2021 user conference earlier this month, Amazon SageMaker Serverless Inference is a new inference option to deploy machine learning models without configuring and managing the compute infrastructure. It brings some of the attributes of serverless computing, such as scale-to-zero and consumption-based pricing. With serverless inference, SageMaker decides to… → Read More
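In API terms, the serverless option is selected when the endpoint configuration is created, via a `ServerlessConfig` block in the production variant. A hedged sketch of the `CreateEndpointConfig` request body (the names are illustrative; memory and concurrency limits follow the SageMaker API):

```json
{
  "EndpointConfigName": "my-serverless-config",
  "ProductionVariants": [
    {
      "VariantName": "AllTraffic",
      "ModelName": "my-tf-model",
      "ServerlessConfig": {
        "MemorySizeInMB": 2048,
        "MaxConcurrency": 5
      }
    }
  ]
}
```

Unlike a conventional variant, no instance type or instance count is specified; SageMaker provisions and scales the compute, including down to zero when the endpoint is idle.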