Janakiram MSV, The New Stack

India


Past articles by Janakiram:

Tutorial: Deploy Acorn Apps on an Amazon EKS Cluster

In this final part of the series on Acorn, we explore how to move applications from a local development environment to a production environment running in the AWS cloud. → Read More

Acorn from the Eyes of a Docker Compose User

Acorn simplifies the definition of multicontainer workloads while translating them into the corresponding Kubernetes objects. → Read More

Acorn, a Lightweight, Portable PaaS for Kubernetes

What fascinates me about Acorn is its simplicity and portability. It treats Kubernetes as an ideal runtime environment for deploying, scaling, and running applications without any assumptions. → Read More

Zero Trust Network Security with Identity-Aware Proxies

Implementing Zero Trust is not an easy task for enterprise IT. Some of the challenges include the sprawl of user identities, the proliferation of internal applications, reliance on third-party SaaS applications, and the rise of hybrid, multi-cloud, and edge architectures. → Read More

Ondat’s Unlimited Nodes for Kubernetes Stateful Workloads

This tutorial will show you how to install Ondat Community Edition on an on-premises cluster running in a bare-metal environment. → Read More

Tutorial: Real-Time Object Detection with DeepStream on Nvidia Jetson AGX Orin

This tutorial will walk you through the steps involved in performing real-time object detection with DeepStream SDK running on Jetson AGX Orin. → Read More

Tutorial: Edge AI with Triton Inference Server, Kubernetes, Jetson Mate

In this tutorial, we will configure and deploy Nvidia Triton Inference Server on Jetson Mate to perform inference of computer vision models. → Read More

Jetson Mate: A Compact Carrier Board for Jetson Nano/NX System-on-Modules

A project to build a powerful AI inference engine based on 4X NVIDIA Jetson Nano modules, SeeedStudio’s Jetson Mate Mini, NVIDIA Jetpack 4.6, and the NVIDIA Triton Inference Server. → Read More

Serve TensorFlow Models with KServe on Google Kubernetes Engine

This tutorial will walk you through all the steps required to install and configure KServe on a Google Kubernetes Engine cluster powered by Nvidia T4 GPUs. → Read More

KServe: A Robust and Extensible Cloud Native Model Server

KServe is collaboratively developed by Google, IBM, Bloomberg, NVIDIA, and Seldon as an open source, cloud native model server for Kubernetes. → Read More

Model Server: The Critical Building Block of MLOps

A common misconception is that deploying models is as simple as wrapping them in a Flask or Django API layer and exposing them through a REST endpoint. Unfortunately, this is not the most scalable or efficient approach in operationalizing ML models. → Read More
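The "simple wrapping" approach the article cautions against can be sketched in a few lines. Below is a hypothetical, minimal example using Python's standard-library `http.server` in place of Flask, with a dummy `predict` function standing in for a real trained model; it illustrates why this pattern feels easy but offers none of the batching, versioning, or scaling a real model server provides.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Dummy "model": a real deployment would load a trained artifact here.
    return {"score": sum(features) / max(len(features), 1)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run it through the model.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps(predict(payload["features"])).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocking):
# HTTPServer(("127.0.0.1", 8080), PredictHandler).serve_forever()
```

Everything a production model server handles — GPU scheduling, model versioning, dynamic batching, observability — is absent here, which is the article's point.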

Tutorial: Speed ML Training with the Intel oneAPI AI Analytics Toolkit

The objective of this guide is to highlight how Modin and the Scikit-learn extension are drop-in replacements for the stock Pandas and Scikit-learn libraries. → Read More

Intel oneAPI’s Unified Programming Model for Python Machine Learning

Let's take a closer look at Intel Distribution of Modin and Intel Extension for Scikit-learn. → Read More

VMware Tanzu Application Platform: A Portable PaaS for Kubernetes

An introduction to the VMware Tanzu Application Platform. → Read More

Tutorial: A GitOps Deployment with Flux on DigitalOcean Kubernetes

A step-by-step guide to using FluxCD with DigitalOcean Kubernetes. → Read More

5 AI Trends to Watch out for in 2022

Multi-modal AI and large language models are among the breakthroughs that will keep AI moving ahead in 2022. → Read More

5 Cloud Native Trends to Watch out for in 2022

WebAssembly, GitOps, and managed Kubernetes are among the trends we expect to see in cloud native computing in 2022. → Read More

Review: Build an ML Model with Amazon SageMaker Canvas

In this post, we will do a hands-on evaluation of Amazon SageMaker Canvas. Follow along to train a logistic regression model. → Read More
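For a sense of what Canvas automates behind its no-code UI, here is an illustrative (not SageMaker-specific) logistic regression trained with plain gradient descent in pure Python; the toy dataset and learning rate are made up for the example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=1000):
    # Single-feature logistic regression trained with plain
    # stochastic gradient descent on the log loss.
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Toy data: label is 1 when x is large.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [0, 0, 1, 1]
w, b = fit_logistic(xs, ys)
```

Canvas hides all of this (plus feature preparation and model selection) behind a point-and-click workflow, which is what the hands-on review evaluates.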

Tutorial: Deploying TensorFlow Models with Amazon SageMaker Serverless Inference

This tutorial shows how to publish serverless inference endpoints for TensorFlow models. → Read More

Explore Amazon SageMaker Serverless Inference for Deploying ML Models

Launched at the company’s re:Invent 2021 user conference earlier this month, Amazon SageMaker Serverless Inference is a new inference option for deploying machine learning models without configuring and managing the compute infrastructure. It brings some of the attributes of serverless computing, such as scale-to-zero and consumption-based pricing. With serverless inference, SageMaker decides to… → Read More