Docker programming resources
Docker, Docker Swarm, containers, images, orchestrators, Kubernetes: these words appear in any "modern" architecture. But do you really know what they mean or what people are talking about? It can sound like a foreign language. Don't worry: the goal of this talk is to clear up all this jargon so that you can finally understand what everyone is talking about. This talk explains, in plain and simple terms, what Docker and Kubernetes are. ------------- All the Codemotion 2019 videos at: Get to know Autentia!
If you are a developer, you shouldn't be surprised that developers spend more than 24% of their time dealing with their development environments. Imagine having a magic button that instantiates a new development environment for you in seconds, integrated with the same hardware, network, and services that you need in production! This experience is now possible thanks to technologies like Docker and Kubernetes, and the automation provided by the cloud.
Content-curation software sounds like a small thing until you see the "guts" of this platform. First episode of the podcast's third season. Thanks to everyone for staying with us through the summer and listening so attentively. 100 episodes: in case you skipped it, I recommend going back to episode 100. Even though it fell in the middle of a vacation week, I gave it my all there. It is a summary, in the form of a commitment, of what I have shared with you over the first 99 episodes. You can listen...
Over the last twenty years, there has been a paradigm shift in software development: from meticulously planned release cycles to an experimental way of working in which lead times are becoming shorter and shorter. How can Java ever keep up with this trend when we have Docker containers that are several hundred megabytes in size, with warm-up times of ten minutes or longer? In this talk, I'll demonstrate how we can use Quarkus to create super small, super fast Java containers! This will give us better possibilities for scaling up and down - which can be a game-changer, especially in a serverless environment. It will also provide the shortest possible lead times, as well as much better use of cloud performance, with the added bonus of lower costs.
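The small, fast-starting containers the talk promises usually come from compiling the application to a native executable and shipping it in a minimal base image. A hedged sketch of what such a multi-stage build might look like (the image tags, paths, and the `native` Maven profile here are illustrative assumptions, not taken from the talk):

```dockerfile
# Stage 1: build a native executable with a GraalVM/Mandrel builder image
# (tag is illustrative; pick one matching your JDK version)
FROM quay.io/quarkus/ubi-quarkus-mandrel-builder-image:jdk-21 AS build
COPY --chown=quarkus:quarkus . /code
WORKDIR /code
RUN ./mvnw package -Pnative -DskipTests

# Stage 2: copy only the resulting binary into a tiny runtime image,
# yielding an image of tens of MB instead of hundreds
FROM quay.io/quarkus/quarkus-micro-image:2.0
COPY --from=build /code/target/*-runner /app/application
EXPOSE 8080
CMD ["/app/application", "-Dquarkus.http.host=0.0.0.0"]
```

The point of the two stages is that the multi-gigabyte build toolchain never reaches the runtime image, which is what enables the fast scale-up/scale-down the talk describes.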
As data leaks move into the terabytes, journalists need tools to search, analyse and collaborate on their investigations. We will cover the technical lessons learnt over two years of development at the Guardian as we built our platform both in the cloud and running entirely air-gapped offline.

We will introduce GIANT, the Guardian's new platform for searching, analysing and collaborating on data-leak-backed investigations. With the size of leaks increasing (Edward Snowden: 55,000 files; the Paradise Papers: 13.4 million), the Guardian has built its own platform for analysis, which has already seen success on several projects, most notably the Daphne Project, which continues the work of the journalist Daphne Caruana Galizia.

In the talk we will cover how we designed our data model to effectively handle "any" possible file type and scale up to terabytes of stored data. We will discuss how, using Neo4j, we are able to reconstruct the threads of conversation between individuals and companies identified in the data, and the surprising limits that come with using a graph database as our storage system of record.

We will also dive into our use of Elasticsearch, in particular how best to support leaks containing multiple languages and how we were able to add full Russian and Arabic language support to an existing dataset whilst the journalists continued their investigation using the tool.

We will also discuss our extractors, the system of plugins that process the files when we receive them. We will cover the lessons learned as we moved from calling in-process code in the JVM to Docker and containerisation, to not only take advantage of the wide ecosystem of open source processing tools but also effectively scale out our computation both in AWS and in our completely offline air-gapped deployment for more sensitive data.

Finally, we will also discuss the value of direct working relations between developers and journalists.
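Elasticsearch ships built-in language analyzers, which is one standard way to add the Russian and Arabic support mentioned above. As a hedged sketch (the index layout and field names are my own illustration, not GIANT's actual schema), per-language subfields in the index mapping could look like:

```json
{
  "mappings": {
    "properties": {
      "text": {
        "type": "text",
        "analyzer": "standard",
        "fields": {
          "ru": { "type": "text", "analyzer": "russian" },
          "ar": { "type": "text", "analyzer": "arabic" }
        }
      }
    }
  }
}
```

A `multi_match` query over `text`, `text.ru` and `text.ar` then finds documents regardless of which analyzer tokenised their language best, and adding a new subfield plus a reindex is one way to retrofit a language onto an existing dataset while it stays searchable.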
This led to a change in how we develop our tooling, moving more towards building a secure platform upon which other, more specialist tools can be written. We will show a great example of this with "Laundrette", a new tool that lets data journalists add structure to hundreds of thousands of documents quickly.
A few years ago I created a small system to automate some tasks in our barn, where we keep our pet potbelly pig and our chickens. Simple things, like scheduling when the lights turn on and off, temperature monitoring and a webcam. It was nice to automate those things, but the best part for me was being able to tinker with the sensors and devices and learn new libraries, frameworks and languages. I've recently re-imagined the system as a way to learn more new technologies. I've created a prototype version to demonstrate how I might build it again if I were to start over from scratch, and in this session we'll look at that model and learn about the hardware and software used in it. At its core, the system uses an Arduino, a Raspberry Pi and various pumps, solenoids, motors and sensors to simulate the automation of certain tasks, like filling a water bowl, opening and closing doors, monitoring the environment and turning lights on and off on demand or via a schedule. The hardware runs a Groovy-based program which interfaces with Kafka for messaging, both to store sensor data in a MongoDB instance and to receive commands for remotely performing certain tasks on demand. The persisted data is formatted and displayed on a web application running in a Docker container that is deployed to a cloud-based Kubernetes cluster. We'll look at both the hardware and software that power this system and how I've used the project as a playground for learning new technologies, languages and frameworks. ------------- All the Greach 2019 videos at: Get to know Autentia!
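The pipeline the talk describes (sensor → Kafka → MongoDB) hinges on a well-defined message format flowing through the topic. A minimal Python sketch of serializing and deserializing one sensor reading (the field names and schema are my own assumptions for illustration, not the talk's actual format):

```python
import json
import time

def encode_reading(sensor_id, value, unit, ts=None):
    """Serialize one sensor reading as UTF-8 JSON bytes,
    the payload shape a Kafka producer would publish."""
    record = {
        "sensor": sensor_id,
        "value": value,
        "unit": unit,
        # Default to "now" when the device doesn't supply a timestamp.
        "ts": ts if ts is not None else time.time(),
    }
    return json.dumps(record, sort_keys=True).encode("utf-8")

def decode_reading(payload):
    """Inverse operation, as a consumer persisting into MongoDB might do."""
    return json.loads(payload.decode("utf-8"))

# Example: a barn temperature reading round-trips through the wire format.
msg = encode_reading("barn-temp-1", 21.5, "C", ts=1700000000.0)
assert decode_reading(msg)["value"] == 21.5
```

Keeping producer and consumer agreed on one such schema is what lets the same topic serve both storage (sensor data into MongoDB) and command traffic in the architecture above.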
git, http, go, php, java, azure, scala, docker, python, node
We compare two PaaS (Platform as a Service) alternatives, with their similarities and differences, for deploying your applications in the cloud. First we congratulate all the participants in the Programar es una mierda hackathon, especially the winners, who through their effort have earned a full subscription to the paid content of Remember that, just like them, in the Premium Zone you can learn, practice, and find inspiration to create your own applications. It has been...
Most software applications need to connect to a wide variety of systems over different protocols: databases, REST services, the operating systems themselves... Running integration tests to ensure that all these systems communicate correctly is essential for building robust, high-quality software. Traditionally these tests are costly, arduous, and hard to automate. In this talk we will see how to set up, in a few steps, a build plan in Bamboo (Atlassian's continuous integration tool) that automates integration tests against different systems using Docker containers, building all the infrastructure needed to run the tests at runtime. Link to the slides. Demo repo on GitHub: here you can see the code used in the demo as well as the configuration used in Bamboo.
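One common way to describe the throwaway infrastructure such a build plan spins up is a Compose file that the CI job brings up before the tests and tears down afterwards. A hedged sketch (the service names, images, and credentials are illustrative assumptions, not the talk's actual setup):

```yaml
# docker-compose.test.yml - disposable infrastructure for integration tests
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_it
    ports:
      - "5432:5432"
  rest-stub:
    # WireMock stands in for the external REST services under test
    image: wiremock/wiremock
    ports:
      - "8080:8080"
```

A CI step would run `docker compose -f docker-compose.test.yml up -d`, execute the integration tests against `localhost:5432` and `localhost:8080`, and finish with `docker compose -f docker-compose.test.yml down -v` so every build starts from a clean slate.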