Docker programming resources
Over the last twenty years, there has been a paradigm shift in software development: from meticulously planned release cycles to an experimental way of working in which lead times keep getting shorter. How can Java ever keep up with this trend when we have Docker containers that are several hundred megabytes in size, with warm-up times of ten minutes or longer? In this talk, I'll demonstrate how we can use Quarkus to create super small, super fast Java containers! This gives us better options for scaling up and down, which can be a game-changer, especially in a serverless environment. It also provides the shortest possible lead times, as well as much better use of cloud resources, with the added bonus of lower costs.
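As a rough sketch of the kind of container such a talk builds (the image tag, paths and build flags are assumptions based on the Dockerfiles that Quarkus generates, not the speaker's exact setup): Quarkus can compile the application to a native executable inside a container and package it on a tiny base image.

```dockerfile
# Build the native executable inside a container, so no local GraalVM is needed:
#   ./mvnw package -Dnative -Dquarkus.native.container-build=true
FROM quay.io/quarkus/quarkus-micro-image:2.0
WORKDIR /work/
# Copy the native binary produced by the build.
COPY target/*-runner /work/application
RUN chmod 775 /work/application
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
```

Compared with a JVM image carrying a full JDK, an image built this way is typically tens of megabytes and starts in milliseconds rather than minutes.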
As data leaks move into the terabytes, journalists need tools to search, analyse and collaborate on their investigations. We will cover the technical lessons learned over two years of development at the Guardian as we built our platform both in the cloud and running entirely air-gapped offline.

We will introduce GIANT, the Guardian's new platform for searching, analysing and collaborating on data-leak-backed investigations. With the size of leaks increasing (Edward Snowden: 55,000 files; the Paradise Papers: 13.4 million), the Guardian has built its own platform for analysis, which has already seen success on several projects, most notably the Daphne Project, which continues the work of the journalist Daphne Caruana Galizia.

In the talk we will cover how we designed our data model to effectively handle "any" possible file type and scale up to terabytes of stored data. We will discuss how, using Neo4j, we are able to reconstruct the threads of conversation between individuals and companies identified in the data, and the surprising limits that come with using a graph database as our storage system of record. We will also dive into our use of Elasticsearch, in particular how best to support leaks containing multiple languages and how we were able to add full Russian and Arabic language support to an existing dataset whilst the journalists continued their investigation using the tool.

We will also discuss our extractors, the system of plugins that process files as we receive them. We will cover the lessons learned as we moved from calling in-process code in the JVM to Docker and containerisation, not only to take advantage of the wide ecosystem of open-source processing tools but also to effectively scale out our computation, both in AWS and in our completely offline air-gapped deployment for more sensitive data. Finally, we will also discuss the value of direct working relationships between developers and journalists.
This led us to change how we develop our tooling, moving more towards building a secure platform upon which other, more specialist tools can be written. We will show a great example of this with "Laundrette", a new tool that lets data journalists add structure to hundreds of thousands of documents quickly.
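The multilingual search problem described above maps naturally onto Elasticsearch's built-in language analyzers. A minimal sketch (the index and field names here are invented for illustration, not GIANT's actual schema) of adding Russian and Arabic sub-fields to an existing text field:

```json
PUT /documents/_mapping
{
  "properties": {
    "text": {
      "type": "text",
      "fields": {
        "ru": { "type": "text", "analyzer": "russian" },
        "ar": { "type": "text", "analyzer": "arabic" }
      }
    }
  }
}
```

Because new multi-fields can be added to a live index and back-filled with `_update_by_query`, extra analyzers can be introduced without taking the dataset offline, which is one plausible way to add language support while an investigation continues.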
A few years ago I created a small system to automate some tasks in our barn, where we keep our pet potbelly pig and our chickens. Simple things, like scheduling when the lights turn on and off, temperature monitoring and a webcam. It was nice to automate those things, but the best part for me was being able to tinker with the sensors and devices and learn new libraries, frameworks and languages. I've recently re-imagined the system as a way to learn more new technologies. I've created a prototype version to demonstrate how I might build it again if I were to start over from scratch, and in this session we'll look at that model and learn about the hardware and software used in it. At its core, the system uses an Arduino, a Raspberry Pi and various pumps, solenoids, motors and sensors to simulate the automation of certain tasks, like filling a water bowl, opening and closing doors, monitoring the environment and turning lights on and off on demand or on a schedule. The hardware runs a Groovy-based program which interfaces with Kafka for messaging, both to store sensor data in a MongoDB instance and to receive commands for remotely performing certain tasks on demand. The persisted data is formatted and displayed on a web application running in a Docker container that is deployed to a cloud-based Kubernetes cluster. We'll look at both the hardware and software that power this system and how I've used the project as a playground for learning new technologies, languages and frameworks. ------------- All Greach 2019 videos at: Get to know Autentia!
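The command side of such a system can be sketched as a tiny router that maps message payloads (as they might arrive on a Kafka topic) to device actions. Everything here is an illustrative assumption, not the talk's actual code: the `device:action` payload format, the handler names and the stand-in for real GPIO calls.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch: route "device:action" command strings to handlers
// that would drive the barn hardware in a real deployment.
public class BarnCommandRouter {
    private final Map<String, Consumer<String>> handlers = new HashMap<>();
    private String lastAction = "none";   // stand-in for real hardware calls

    public BarnCommandRouter() {
        handlers.put("lights", action -> lastAction = "lights " + action);
        handlers.put("door",   action -> lastAction = "door " + action);
        handlers.put("water",  action -> lastAction = "water " + action);
    }

    /** Parse a payload like "door:open" and invoke the matching handler. */
    public boolean dispatch(String payload) {
        String[] parts = payload.split(":", 2);
        if (parts.length != 2 || !handlers.containsKey(parts[0])) {
            return false;   // malformed or unknown command: ignore it
        }
        handlers.get(parts[0]).accept(parts[1]);
        return true;
    }

    public String lastAction() { return lastAction; }
}
```

In a setup like the one described, a Kafka consumer loop would simply feed each record's value into `dispatch`, keeping the messaging concern separate from the hardware concern.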
git, http, go, php, java, azure, scala, docker, python, node
We compare two PaaS (Platform as a Service) alternatives, with their similarities and differences, for deploying your applications in the cloud. First we congratulate all the participants in the Programar es una mierda hackathon, especially the winners, who through their effort have earned a full subscription to the paid content of Remember that, like them, in the Premium Zone you can learn, practise and find inspiration to create your own applications. It has been...
It is very common for applications to need to connect to a wide variety of systems over different protocols: databases, REST services, the operating systems themselves... Running integration tests to make sure all these systems communicate correctly is essential for building robust, high-quality software. Traditionally these tests are costly and arduous, and hard to automate. In this talk we will see how to set up, in a few steps, a build plan in Bamboo (Atlassian's continuous integration tool) that automates integration tests against different systems using Docker containers, building all the infrastructure needed to run the tests at execution time. Link to the slides. Demo repo on GitHub: here you can see the code used in the demo as well as the configuration used in Bamboo.
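As a sketch of the idea (the images, container names and Maven profile are illustrative assumptions, not the talk's actual configuration), a Bamboo Script task could build the throwaway test infrastructure on the fly like this:

```shell
#!/bin/sh
# Spin up disposable infrastructure for the integration tests,
# run them, and always tear the containers down afterwards.
set -e

cleanup() {
    docker rm -f it-postgres it-redis
}
trap cleanup EXIT

docker run -d --name it-postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:15
docker run -d --name it-redis -p 6379:6379 redis:7

# Give the services a moment to come up (a real plan would poll for readiness).
sleep 10

./mvnw verify -Pintegration-tests
```

The `trap` ensures the containers are removed even when the tests fail, so repeated builds on the same agent start from a clean slate.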
Greach is a yearly technical conference around Android, JVM frameworks and alternative JVM languages. It brings together JVM developers with framework creators and international speakers. Developers learn about JVM languages such as Groovy and Kotlin, platforms such as Android, JVM frameworks such as Grails, Micronaut, Ratpack, Spock... or cloud environments such as Google Cloud, Amazon Web Services or PWS. The most talented developers around Europe come to Greach to learn how to develop better, faster and smarter. ------------- Android developers face a common problem: how do we test our applications on many devices without sacrificing too much time or money? How do we build and test our applications automatically for each commit? How can we find those devices inside the company, whatever its size? Could there be a directory somewhere that lists the available devices? Could we use a device remotely and share it with other developers as if it were in the cloud? What if you could answer all these questions with the help of a low-cost device farm that fits into a pocket? A pocket full of clouds... Poddingue, our proposal, aims to tackle this problem thanks to Docker, HypriotOS, Armbian, GitLab CI and OpenSTF. It's an internal solution made of readily available OSS, but it has not yet been publicly announced as a whole. This is feedback about an idea on its way to production, a long journey full of different feelings: horror, happiness, suspense, boredom... This presentation won't be too technical; it is open to anybody with an interest in Android, exotic hardware or continuous integration, as long as you can stand a bad sense of humour. At the end of the talk, you should know how to build your own cloudy pocket farm of Android devices and how to use it to test your applications within your CI pipeline.
Little by little, Docker is making its way into our workflow, and we wonder what the best way is to take it to production. How can we change our current deployment system to use Docker without it turning into a drama? What steps should we follow? We will see how to use Docker in production without having to learn a completely new stack such as Kubernetes or Mesos, running our containers on the same stack we already had.
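One low-friction way to fit containers into an existing stack is to run them under the init system already managing your services. A minimal sketch as a systemd unit, where the service name, image and registry are hypothetical:

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
Restart=always
# Remove any stale container left over from a previous run ("-" ignores failure).
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 registry.example.com/myapp:1.0
ExecStop=/usr/bin/docker stop myapp

[Install]
WantedBy=multi-user.target
```

With this approach, deployment, restarts and log access (`systemctl`, `journalctl`) work exactly as they did before the application was containerised.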
Containers, and Docker in particular, have been one of the buzzwords of recent years, but do they really deliver what they promise? In this talk I will give a very quick introduction to Docker and we will see how we can take advantage of all its benefits, both in the development environment and for deploying our Java applications.
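As a taste of how little is needed to containerise a Java application, a minimal Dockerfile might look like this (the JAR name and base image are illustrative assumptions):

```dockerfile
# Start from a JRE-only base image to keep the container small.
FROM eclipse-temurin:17-jre
WORKDIR /app
# Copy the runnable JAR produced by the build (name is hypothetical).
COPY target/myapp.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Build and run it with `docker build -t myapp .` followed by `docker run -p 8080:8080 myapp`, and every developer and every environment gets the same runtime.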