Kubernetes

Kubernetes programming resources
Google Kubernetes Engine (GKE) is the managed Kubernetes service offered by Google Cloud, on which we can deploy our containers. If you want to know the most common use cases where it applies, or what its main advantages are, we sum it all up in under a minute. Want to watch our tutorials? https://www.youtube.com/c/ParadigmaDigital/playlists Want to listen to our podcasts? https://open.spotify.com/show/4IQF9XRgHN7j5Mz52t9wJS?si=7ba64ce69fc04a92 Want to know which events we are organizing next? https://www.paradigmadigital.com/eventos/
Welcome to this live Q&A session with Niklas Gustavsson! Niklas, Chief Architect at Spotify, will be sharing insights into the intriguing world of a large-scale micro component architecture. We'll also be addressing questions left unanswered from GSAS. Feel free to drop your questions in the YouTube chat during the live broadcast, and Niklas will be addressing them in real time. Let's dive into the world of software architecture and innovation together!
--
The Global Software Architecture Summit (GSAS) is a 3-day event that aims to attract and connect software architecture experts from all over the world, as well as all those interested in building working software, to improve their skills, share knowledge, and connect. The event features two days of talks by industry experts and one day of workshops. It is focused on software architecture topics such as backend & frontend development, DDD, mobile development techniques, software architecture models & beyond.
--
GSAS website: https://gsas.io/
Organizer site: https://apiumhub.com/
--
0:00 Introduction
2:53 CI/CD Pipeline. (Does each component possess its own CI/CD pipeline?)
5:00 Substituting Kubernetes Components. (If you were to initiate Spotify now, would you contemplate substituting Kubernetes components with cloud-based microservices (Lambda, ECS, Step Functions, etc.)?)
8:12 Architectural Decisions and Technical Debt. (How do you handle architectural decisions made by teams that could introduce technical debt? What safeguards do you have in place?)
10:33 Metric-Triggered Changes and Organizational Response. (Could you share an instance where a metric (fitness function) triggered a change and how the organization addressed it at a broader level?)
13:28 Compatibility Across Dependent Components. (How do you ensure compatibility across dependent components? Is it through contract testing or versioning?)
15:35 Codebase Organization in a Substantial Codebase. (With a substantial codebase, are you using a super repo, or are component codebases segregated in some manner? If so, how?)
17:40 What tools do you use to build diagrams?
19:18 How often do you need to version APIs?
23:00 Detecting Dependencies for Coupling. (When detecting dependencies for coupling, what mechanisms do you employ? Is it restricted to static code, API calls, or perhaps even dependency injection?)
24:37 How to handle authorization in a microservices environment.
25:15 Pull Request Reviews and Handling Dependencies. (How do you manage pull request reviews to handle the significant number of dependencies between squads and components?)
28:50 Updating Contracts and Component Versions. (When updating a contract, what approach do you adopt? Does a new version imply a new component?)
29:35 Unhelpful Metrics or Approaches and Lessons Learned. (Have you encountered any metrics or approaches that proved unhelpful? Any lessons learned?)
31:44 Number of Services Owned by a Squad. (Approximately how many services are owned by a squad?)
33:30 From an architectural point of view, what's your biggest pain point at Spotify?
35:09 Automated Whole Component Generation. (How do you automatically generate a whole component? Could you describe the process in detail?)
38:26 Coordinating Releases and GitOps. (How do you coordinate releases of different components? Do you employ the GitOps methodology?)
40:50 Maintaining High Performance for Synchronous Communication. (Considering the number of components connected during a request, how do you maintain high-performance rates for synchronous communication?)
43:40 Monorepos and Atomic Commits. (What is your perspective on monorepos, particularly concerning atomic commits and ensuring all dependencies work together?)
47:51 Smooth Handover of Component Ownership. (When changing ownership of components, how do you ensure a smooth handover? You mentioned that a team should own the full lifecycle.)
49:50 Preventing Components from Becoming Outdated. (How do you ensure that components do not become outdated?)
50:59 Managing the Overhead of Creating New Components. (How do you manage the overhead of creating new components, including new pipelines, storage clusters, configs, etc.?)
53:12 Do you employ consumer-driven contract testing?
53:25 Chaos Testing with Numerous Components. (With numerous components, do you conduct chaos testing?)
54:53 Correspondence Between Backend and Mobile Components. (Do you maintain a 1:1 correspondence between backend and mobile components?)
56:05 How do you introduce innovation in your teams?
58:40 Addressing Latency Issues and Designing for Functionality. (With this infrastructure, how do you address latency issues for user-facing or time-sensitive operations? Do you actively design to limit the length of a service call chain when designing functionalities?)
59:00 Closure
In the world of online advertising, one second is a long time, and getting information to users quickly anywhere in the world is a real challenge. In this talk we will look at how we tackled this problem at Seedtag, using automation tools such as Terraform and GKE (Google Kubernetes Engine) components to roll out our infrastructure worldwide.
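The talk itself relies on Terraform and GKE; purely as an illustration of the same "one definition, many regions" fan-out idea, here is a minimal Python sketch (not Seedtag's code) that applies a single Deployment to several clusters through kubeconfig contexts. The context names, app name, and image are made-up placeholders.

```python
# Minimal sketch: apply one Deployment to several regional clusters.
# Context names, app name, and image are hypothetical.
from kubernetes import client, config

CONTEXTS = ["gke-europe-west1", "gke-us-east1", "gke-asia-east1"]  # assumed kubeconfig contexts

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="ad-server"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "ad-server"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "ad-server"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="ad-server", image="nginx:1.25")]
            ),
        ),
    ),
)

def main() -> None:
    for ctx in CONTEXTS:
        # One API client per regional cluster, all fed the same manifest.
        api_client = config.new_client_from_config(context=ctx)
        client.AppsV1Api(api_client).create_namespaced_deployment(
            namespace="default", body=deployment
        )
        print(f"applied to {ctx}")

if __name__ == "__main__":
    main()
```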
Argo Rollouts has recently gained popularity thanks to providing a standardized framework for Blue-Green and Canary Kubernetes deployments, and to integrating seamlessly with metric providers (such as Prometheus), ingress controllers, and service meshes to run automated analysis on newly deployed app versions. This is a big improvement over using the Kubernetes Deployment API, because it unlocks a multi-step rollout process that runs automated analysis and gradually switches production traffic to the new version, without the need to customize the application codebase. At the same time, many organizations need to deploy each service across multiple clusters to ensure high availability and disaster resilience in case of cloud provider region/zone outages. This is a challenge for DevOps and platform engineers, because the Argo Rollouts controller, like most Kubernetes controllers, is designed to operate on a single cluster. In this talk, I will present how to augment Argo Rollouts with a multi-cluster scheduler (such as Nova or Karmada) to achieve automated Canary (or Blue-Green) rollouts on more than one cluster. We will look at a few different approaches to how connecting your production, staging, and test clusters to a management control plane (such as Nova) can facilitate the process of safely deploying and testing your app in each of these environments, all the way to a safe production deployment.
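As a hedged illustration of the single-cluster baseline the talk builds on, the sketch below defines a Canary Rollout custom resource (shift 20% of traffic, pause for analysis, then continue) and applies it with the official Kubernetes Python client. The app name and image are assumptions, the Argo Rollouts controller is assumed to be installed, and the multi-cluster setup with Nova or Karmada is not shown.

```python
# Minimal sketch: create an Argo Rollouts Canary Rollout via the Kubernetes
# Python client. Assumes the Argo Rollouts controller is installed and a
# local kubeconfig points at the target cluster.
from kubernetes import client, config

rollout = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Rollout",
    "metadata": {"name": "demo-app", "namespace": "default"},
    "spec": {
        "replicas": 5,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {
                "containers": [
                    {"name": "demo-app", "image": "nginx:1.25", "ports": [{"containerPort": 80}]}
                ]
            },
        },
        "strategy": {
            "canary": {
                # Multi-step rollout: shift 20% of traffic, pause for analysis,
                # shift 50%, pause again, then complete the rollout.
                "steps": [
                    {"setWeight": 20},
                    {"pause": {"duration": "5m"}},
                    {"setWeight": 50},
                    {"pause": {"duration": "5m"}},
                ]
            }
        },
    },
}

def main() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="argoproj.io",
        version="v1alpha1",
        namespace="default",
        plural="rollouts",
        body=rollout,
    )

if __name__ == "__main__":
    main()
```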
When applications change, so too must their underlying data models or database schemas (changes commonly called “migrations”). But making changes to your database in production can be risky, causing breaking changes, data loss, and degraded performance if not handled carefully. This talk will dive into these challenges and discuss ways to make schema changes safer and more efficient. We will explore the declarative model, used by tools like Terraform and Kubernetes, which has taken our industry by storm. But can this model be trusted with our databases, the heart of our applications? We will consider ambiguous scenarios like a resource rename: can our tools accidentally plan a migration that will have a dire impact on our application? We'll outline three possible approaches to this problem: automatic migration planning, policy-driven diffing, and the operator model, and show how they can be employed using Atlas, a modern, open-source schema management tool. Join us on this journey!
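To make the declarative idea concrete, here is a toy Python sketch (not Atlas) that diffs a desired schema against the current one and plans candidate DDL; the table and column names are invented. Notice that a column rename shows up as a drop plus an add, which is exactly the ambiguity described above.

```python
# Toy illustration of declarative migration planning: diff desired vs. current
# schema and emit candidate DDL. Table and column names are invented.
from typing import Dict, List

Schema = Dict[str, Dict[str, str]]  # table -> {column: sql_type}

current: Schema = {"users": {"id": "bigint", "full_name": "text"}}
desired: Schema = {"users": {"id": "bigint", "name": "text"}}

def plan(current: Schema, desired: Schema) -> List[str]:
    statements: List[str] = []
    for table, want in desired.items():
        have = current.get(table, {})
        for col in sorted(set(want) - set(have)):
            statements.append(f"ALTER TABLE {table} ADD COLUMN {col} {want[col]};")
        for col in sorted(set(have) - set(want)):
            # Destructive: was full_name dropped, or renamed to name? A purely
            # declarative diff cannot tell, which is why policy checks or
            # explicit approval are needed before applying this step.
            statements.append(f"-- DESTRUCTIVE: ALTER TABLE {table} DROP COLUMN {col};")
    return statements

print("\n".join(plan(current, desired)))
```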
One might naively think that to deploy a production app on Kubernetes, all one needs is a Kubernetes cluster. Indeed, before going to production, we'll need a Kubernetes cluster, and therefore we'll need to make a few decisions: on-premises or in the cloud? Managed or self-hosted? But there is way more to it, because our new cluster will almost always require a few additions before being truly production-ready. Even if we choose a state-of-the-art managed cluster from a leading cloud provider, we still need to add something to handle logging and metrics. Supporting Ingress resources or Network Policies can also require extra work, as does managing persistent volumes or inbound traffic when running on-premises. Finally, while most of us used commands like "kubectl run" or "kubectl apply" to run our first Kubernetes containers and workloads, going to production requires a few extra tools to tailor our YAML manifests to various environments (e.g. kustomize, Helm, or Carvel) and automate their deployment (e.g. ArgoCD, Flux). The goal of this talk is to give us a production-readiness checklist. Without being exhaustive, this checklist will bring awareness to the gap that exists between a Kubernetes "cluster" and a "production cluster", and give solid leads about how to bridge that gap.
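As a rough sketch of what a scripted version of such a checklist could look like (an assumption, not part of the talk), the following Python snippet uses the Kubernetes client to probe a cluster for a default StorageClass, an IngressClass, and the metrics API.

```python
# Minimal production-readiness probe: the specific checks are illustrative,
# not an exhaustive or authoritative checklist.
from kubernetes import client, config

def main() -> None:
    config.load_kube_config()

    # Is there a default StorageClass for persistent volumes?
    storage_classes = client.StorageV1Api().list_storage_class().items
    has_default_sc = any(
        (sc.metadata.annotations or {}).get(
            "storageclass.kubernetes.io/is-default-class") == "true"
        for sc in storage_classes
    )
    print(f"default StorageClass present: {has_default_sc}")

    # Is there an IngressClass, i.e. some ingress controller installed?
    ingress_classes = client.NetworkingV1Api().list_ingress_class().items
    print(f"IngressClass objects found: {len(ingress_classes)}")

    # Is the metrics API served (metrics-server or an equivalent add-on)?
    api_groups = client.ApisApi().get_api_versions().groups
    has_metrics = any(group.name == "metrics.k8s.io" for group in api_groups)
    print(f"metrics.k8s.io available: {has_metrics}")

if __name__ == "__main__":
    main()
```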
Traditional security solutions are not ready for modern application architectures based on microservices and Kubernetes, due to their lack of granularity, limited scalability, lack of visibility, and management complexity. In this webinar we explain how Calico Open Source, Cloud, and Enterprise let you adopt advanced, purpose-built solutions to address the security challenges of these modern, dynamic, and increasingly widespread application environments.
📅 EVENT AGENDA:
00:00 Start
00:08 Intro
05:31 Modernize your applications without security becoming a headache
16:43 Advanced security with Calico in practice
46:06 Closing
🎙 OUR SPEAKERS:
Eva Rodríguez - Marketing & Communications, SNGULAR
Francisco Gómez - Cloud Engineer, SNGULAR
Rui De Abreu - Calico Solutions Architect, Tigera
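As a hedged, minimal example of the kind of granular, Kubernetes-native rule that Calico can enforce, the sketch below creates a NetworkPolicy that only lets pods labelled app=frontend reach pods labelled app=api on port 8080. The labels, namespace, and port are invented for illustration, and Calico's own CRDs (GlobalNetworkPolicy and friends) go well beyond plain NetworkPolicy.

```python
# Minimal sketch: create a namespaced NetworkPolicy with the Kubernetes
# Python client. Labels, namespace, and port are illustrative assumptions.
from kubernetes import client, config

def main() -> None:
    config.load_kube_config()
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
        spec=client.V1NetworkPolicySpec(
            # Apply to the api pods, allow ingress only from frontend pods.
            pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"})
                    )],
                    ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="default", body=policy
    )

if __name__ == "__main__":
    main()
```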
Guillermo is a telecom engineer with a backend development background who transitioned to DevOps when someone had to deploy those pesky new microservices. He is also a Kubernetes administrator and is well experienced in coaching developer teams to adopt good CI/CD practices.
In this talk, Micronaut committer Álvaro Sánchez-Mariscal will demonstrate how you can quickly build optimised microservices with Micronaut & GraalVM Native Image. Attendees will learn how the combination of GraalVM Native Image and Micronaut can lead to efficient, highly performant, and optimised applications that can be perfectly deployed to environments like Kubernetes or serverless platforms. There will be a live coding demo of an application using Micronaut Data and GraalVM.
Building a digital bank is not a simple task. It requires deep knowledge of different areas and a wide variety of profiles. If we set ourselves a time goal of 180 days, the challenge gets harder. To guarantee high availability, scalability, and portability, we relied on a Java microservices architecture with Spring Boot, deployed on Kubernetes, with REST services exposed as APIs for Angular front ends and native mobile applications (Android and iOS), plus a middleware that integrates the core banking system, Salesforce CRM, and third-party applications (Know Your Customer, AML, document verification...). We will explain the process that led to the Pibank success story, both from a technical and functional standpoint and in terms of project management and the client relationship.