Kubernetes programming resources
Katie is a cloud native leader, practitioner, and contributor, currently in a Senior Kubernetes Field Engineer role at Apple. For years, as a cloud platform engineer, Katie built the infrastructure for Conde Nast and American Express, gravitating towards cloud-native technologies and principles, with Kubernetes as the focal point. At the CNCF (Cloud Native Computing Foundation), she was a Technical Oversight Committee member and led the CNCF End User Community. At present, Katie advises the Keptn startup and holds the Chief of Future Founders Officer (CFFO) position at OpenUK. Recently, Katie released the Cloud Native Fundamentals course and led the creation of the CNCF KCNA (Kubernetes and Cloud Native Associate) certification. Additionally, Katie is an active keynote speaker, a #TechWomen100 winner, and a strong advocate for women in STEM.
Volker is the Cloud and DevOps practice lead at Epam. He was born in Germany but has lived and worked in Spain for most of his life. He started as a “classic” sysadmin and embraced the power and magic of containers and the cloud from their very inception. He has been involved in projects of all sizes and complexities, including a large migration to Kubernetes, and now leads a team of 50+ people across various industries toward cloud excellence.
As the adoption of containers and Kubernetes in production grows, security becomes a fundamental concern. But where do we start? In this talk we give an overview of the different security controls that can be applied throughout the application lifecycle, both pre-deployment and at runtime, allowing us to work proactively rather than reactively and to be more efficient. We will discuss how to reinforce good practices among developers, how to ensure our containers are free of vulnerabilities before reaching production, the benefits of integrating with the tools developers already use so that information flows, and, of course, why keeping our workloads up to date against the latest discovered CVEs is key to maintaining a secure production environment.
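As an illustrative sketch (not a tool from the talk), a pre-deployment control can be as simple as a policy check over a pod spec that flags containers configured to run privileged or as root before they ever ship:

```python
# Illustrative pre-deployment policy check: scan a pod spec dict
# (shaped like a Kubernetes Pod manifest) for risky security settings.
def find_policy_violations(pod_spec):
    """Return a list of human-readable violations for a pod spec dict."""
    violations = []
    for container in pod_spec.get("spec", {}).get("containers", []):
        sc = container.get("securityContext", {})
        if sc.get("privileged"):
            violations.append(f"{container['name']}: privileged mode is enabled")
        if sc.get("runAsUser") == 0 or sc.get("runAsNonRoot") is False:
            violations.append(f"{container['name']}: container may run as root")
    return violations

pod = {
    "spec": {
        "containers": [
            {"name": "app", "securityContext": {"runAsNonRoot": True}},
            {"name": "debug", "securityContext": {"privileged": True}},
        ]
    }
}
print(find_policy_violations(pod))  # flags only the "debug" container
```

In practice this kind of check runs in CI or as an admission controller, so violations block a deployment instead of merely printing.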
Within its 8 years of existence, Kubernetes has been the gravitational center of the cloud native ecosystem, elevating a pluggable system that diversified the entire landscape. Multiple areas emerged in the industry, galvanizing solutions for components such as networking, runtime, storage, and cluster provisioning. The maturity of the cloud native landscape is led by wider adoption among enterprises and large organizations. However, for these companies, the deployment and handling of bare-metal infrastructure has always been essential. A pivotal tool for managing cross-provider infrastructure has been Cluster API, taking a unique and radical stance on Kubernetes distribution. In association with a model such as GitOps, Cluster API assembles a mechanism that leverages the concept of a "cluster as a resource".
Many teams are still struggling to implement good APIs, forcing RPC use cases into a semi-RESTful world. Modern and efficient IPC is more than just doing REST. Take Kubernetes as an example: REST on the outside, gRPC on the inside. We should use this approach for enterprise applications as well. This session focuses on modern and efficient inter-process communication (IPC) for microservices. We start with a REST API built using JAX-RS and Quarkus to briefly discuss the pros and cons of this approach. Then, we extend the API with an efficient Protobuf payload representation in order to finally transform the API into a fully fledged high-performance gRPC interface definition. But that’s not all! To put some extra icing on the cake, this talk demonstrates how to consume the gRPC service from a JavaScript web client and also how to completely generate a matching REST API from an enhanced gRPC interface definition to ensure full interoperability in a microservice architecture.
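To see why a binary payload representation pays off, here is a minimal sketch comparing a JSON payload with a hand-rolled fixed-layout binary encoding of the same fields. Real Protobuf uses tagged varint fields rather than a fixed struct, and the `order` record is hypothetical; this only illustrates the size gap that motivates the talk's Protobuf step:

```python
# Compare a JSON payload with a compact binary encoding of the same data.
# This approximates, not reproduces, the Protobuf wire format.
import json
import struct

order = {"id": 42, "quantity": 7, "price_cents": 1999}

json_bytes = json.dumps(order).encode("utf-8")

# Pack the same three integers little-endian: u32, u16, u32 = 10 bytes.
binary_bytes = struct.pack(
    "<IHI", order["id"], order["quantity"], order["price_cents"]
)

print(len(json_bytes), len(binary_bytes))  # the binary form is several times smaller
```

The binary form also avoids repeating field names on every message, which is where much of the JSON overhead comes from.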
Imagine a world where you can access metrics, events, traces and logs in seconds without changing code. Even more, a world where you can run scripts to debug metrics as code. In this session, you will learn about eBPF, a powerful technology with origins in the Linux kernel that holds the potential to fundamentally change how Networking, Observability and Security are delivered. In this session, we’ll see eBPF monitoring in action applied to the Kafka world as an example of a complex Java application: identify Kafka consumers, producers and brokers, see how they interact with each other and how many resources they consume. We’ll even show how to measure consumer lag without external components. If you want to know what’s next in Java and Kafka observability in Kubernetes, this session is for you.
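As background for the consumer-lag claim, lag is simply the gap between the latest offset written to a partition and the offset the consumer group has committed. An eBPF probe can observe those numbers from kernel-level traffic without external components; the offsets below are hypothetical, and only the arithmetic is shown:

```python
# Illustrative sketch: per-partition consumer lag for one consumer group.
def consumer_lag(log_end_offsets, committed_offsets):
    """lag[p] = latest produced offset on p minus the group's committed offset."""
    return {
        partition: log_end_offsets[partition] - committed_offsets.get(partition, 0)
        for partition in log_end_offsets
    }

end = {0: 1500, 1: 980}        # latest offsets produced per partition
committed = {0: 1490, 1: 980}  # offsets the consumer group has committed
print(consumer_lag(end, committed))  # {0: 10, 1: 0}
```

A lag that grows over time signals a consumer that cannot keep up with its producers, which is exactly what the monitoring in the session surfaces.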
At Typeform, our customer support agents handle hundreds of support chats and several thousand support tickets each month to help with all the problems our customers have, so we decided to use this direct communication with our customers to deliver the best possible service and value. In January 2022, we launched Actions After Support, whose overall goal is to understand the relative value of different actions by our customers and identify the best next action for each of them based on the customer profile and the stage of their customer journey. We created a recommender model based on customer similarity that is served through a custom UI to our support agents, so they can use this information while having a live chat with our customers. The model is written in Python and served, via Airflow and an ML Gateway, to a custom UI built on Streamlit, deployed as a Docker container in a Kubernetes pod. The agents can also send direct feedback about the performance of the model to the data team. In this talk, I will share the whole pipeline, from building the recommender model and serving the results to showing them in a custom UI and the monitoring that is in place, as well as the collaboration between stakeholders and the data team. A complete and successful data science use case.
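The core idea of a customer-similarity recommender can be sketched in a few lines: score every past customer against the target profile with cosine similarity and suggest the action the closest one took. The profile features and action names below are hypothetical, not Typeform's actual model:

```python
# Illustrative customer-similarity recommender (not the production model).
import math

def cosine(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def next_best_action(target_profile, customers):
    """customers: list of (profile_vector, action_taken) pairs."""
    best = max(customers, key=lambda c: cosine(target_profile, c[0]))
    return best[1]

history = [
    ([5, 1, 0], "upgrade_plan"),
    ([0, 4, 3], "enable_integration"),
    ([1, 0, 5], "watch_tutorial"),
]
print(next_best_action([4, 2, 0], history))  # closest to the first profile
```

A production system would add journey-stage features, aggregate over many neighbours instead of one, and feed agent feedback back into the model, but the similarity-then-recommend shape stays the same.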
The biggest Google tech event in Spain, carefully crafted for you by the GDG Málaga community! Awesome speakers and lots of fun! The gap between development and operations can be easier to bridge with the right tools. In this talk we discuss what DevOps is and see how, using Docker and Kubernetes, we can simplify our workflow and the path to production while adopting the good practices that the DevOps philosophy proposes. #GDGMálaga'22 #DevFestMálaga #BiznagaFest Follow us on our social networks to stay up to date: - Twitter: https://goo.gl/MU5pUQ - Instagram: https://lk.autentia.com/instagram - LinkedIn: https://goo.gl/2On7Fj/ - Facebook: https://goo.gl/o8HrWX
What is the DevOps culture like at Mango? We interview Adrián Catalan (Backend Developer) and Esteve Oria (SysOps) to learn in detail how their teams provision and manage the infrastructure. We will see examples of how they use Terraform, Jenkins, AWS, Kubernetes, and other tools in their day-to-day work to achieve it. Related courses: ├ ⛵ Kubernetes: https://pro.codely.tv/library/kubernetes-para-desarrolladores ├ 🐳 Docker: https://pro.codely.tv/library/docker-de-0-a-deployment └ 🤖 Continuous integration with GitHub Actions: https://pro.codely.tv/library/integracion-continua-con-github-actions-51237 Interviewees ├ Adrián Catalan: https://twitter.com/adriancataland └ Esteve Oria: https://twitter.com/eessttiiff ﹤🍍﹥ CodelyTV ├ 🎥 Subscribe: https://youtube.com/c/CodelyTV?sub_confirmation=1 ├ 🐦 CodelyTV Twitter: https://twitter.com/CodelyTV ├ 👨🏻‍🌾 Dani's Twitter: https://twitter.com/dsantaka ├ 🙋🏻‍♂️ Nino's Twitter: https://twitter.com/ninodafonte ├ 📸 Instagram: https://instagram.com/CodelyTV ├ ℹ️ LinkedIn: https://linkedin.com/company/codelytv ├ 🟦 Facebook: https://facebook.com/CodelyTV └ 📕 Course catalog: https://bit.ly/cursos-codely #DevOps #AWS #Jenkins
Did you ever see a Distributed Deep-Learning Platform as a Service? Surely not, it’s challenging! Join this session to discover OpenDeepHealth, a PaaS built on top of Kubernetes and designed from first principles with a multi-tenancy-first approach! OpenDeepHealth (ODH) is a hybrid HPC/cloud infrastructure designed and developed by the University of Torino in the DeepHealth European project. The goal was to provide a self-service platform for Deep Learning, allowing domain experts to bring their own data and run training and inference workflows in a multi-tenant, container-native environment. Kubernetes, the de facto standard for container orchestration, is the perfect framework for building such a distributed system, optimising resource usage and allowing horizontal scaling of the infrastructure. StreamFlow, the ODH workflow engine, can schedule and coordinate different workflow steps on top of a diverse set of execution environments, ranging from single Pods to entire HPC centres. As a result, each step of a complex data analysis pipeline can be scheduled on the most efficient infrastructure. At the same time, the underlying runtime layer automatically takes care of workers’ lifecycle, data transfers, and fault-tolerance aspects. ODH implements a novel form of multi-tenancy called “HPC Secure Multi-Tenancy”, specifically designed to support AI applications on critical data. Thanks to Capsule, the multi-tenant Kubernetes operator, ODH can enforce multi-tenancy at the cluster level, avoiding privilege escalations and exploits, minimising operational costs, and enforcing custom policies to access external HPC facilities. Finally, ODH provides multi-tenant distributed Jupyter Notebooks as a service through the Dossier platform. This gives domain experts a high-level, well-known programming model for writing portable and reproducible Deep Learning pipelines, augmenting standard notebooks with resource segregation, data protection, and computation-offloading capabilities.