Devoogle currently indexes 13029 resources related to software development.

What if you listen to it on your commute or while working out?: https://www.ivoox.com/45640112 ------------- Machine Learning models are often seen as a kind of black box that can predict or estimate almost anything. However, as soon as you start working with them, you realise that most of a model's quality depends directly on the quality (and sometimes quantity) of the data it is trained on. In this talk I would like to give the data processing and cleaning phase the importance it deserves. To do so, we will take a look at the two main Big Data architectures (Batch and Streaming) and how they influence our models. We will explore these architectures both from the point of view of data ingestion and model building, and from the point of view of data augmentation and the generation of training datasets. In addition, with each block we will get a glimpse of which open source tools let us build these processes, and how the public cloud (AWS, GCP) helps us optimise them. ------------- All Commitconf 2019 videos at: https://lk.autentia.com/Commit19-YouTube Get to know Autentia! Twitter: https://goo.gl/MU5pUQ Instagram: https://lk.autentia.com/instagram LinkedIn: https://goo.gl/2On7Fj/ Facebook: https://goo.gl/o8HrWX
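To make the cleaning phase mentioned above a bit more concrete, here is a minimal sketch of a batch cleaning step using PySpark, one of the open source tools this kind of pipeline typically relies on. The file and column names ("events.csv", "price") are hypothetical and are not taken from the talk.

```python
# Hypothetical batch cleaning step: deduplicate, drop incomplete rows and
# filter out implausible values before the data reaches model training.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-cleaning").getOrCreate()

# "events.csv" and the "price" column are illustrative placeholders.
raw = spark.read.csv("events.csv", header=True, inferSchema=True)

clean = (
    raw.dropDuplicates()            # remove exact duplicate rows
       .na.drop(subset=["price"])   # discard rows missing the target column
       .filter(F.col("price") > 0)  # keep only plausible values
)

clean.write.mode("overwrite").parquet("clean_events/")
```

In a streaming architecture the same rules would typically run continuously over an event stream rather than over a static file.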
These are the best podcasts/talks I've seen or listened to recently: "Continuous Delivery Co-Author Uncovers the Top Obstacles for Development Teams" by Dave Farley; "AWS re:Invent 2019: Innovation at speed (ARC203)" by Adrian Cockcroft; "AWS re:Invent 2019: Amazon DynamoDB deep dive: Advanced design patterns" by Rick Houlihan; "POST No AWS Bills: Cloud Cost Optimization Without APIs" by Corey Quinn; "SysAdmin to SRE: Creating Capacity to Make Tomorrow Better than Today" by Damon Edwards; "Moving Fast at Scale..."
During this talk, Daniel Díez will tell us about the characteristics of Self Sovereign Identity and its main open platforms, and how this will impact the core of the main industries and services that surround us, from a business and citizen standpoint. #BIGTH19 #BigData #Security Session presented at Big Things Conference 2019 by Daniel Díez, Head of Emerging Business at Paradigma Digital. 21st November 2019 Kinépolis, Madrid Do you want to know more? https://www.bigthingsconference.com/
How is the visual representation of the world structured by the brain? How could it be used to react adaptively to new situations and scenarios? What if an autonomous system could learn that behavior? Recent Computer Vision and Deep Learning techniques make it possible to solve complex visual problems. One desirable property for most applications is the system's ability to adapt dynamically to unknown contexts. This ability can be useful in a software production environment, where the data dynamics depend on business and human behavior, providing flexibility while keeping robustness. This concept upgrades a trainable solution into a self-adaptive solution, which we will go through during this talk. #BIGTH19 #AI #ComputerVision #DeepLearning Session presented at Big Things Conference 2019 by Javier Martínez Cebrián, Deep Learning & AI Specialist at BBVA Next Technologies, and Miguel Ángel Fernández, Assistant Professor and Researcher at Carlos III University. 20th November 2019 Kinépolis, Madrid Do you want to know more? https://www.bigthingsconference.com/
In this session, Beatriz Sanz Saiz will walk the audience through the concept of the Future Human Enterprise and how the application of technoscience on top of a business data layer can disrupt the concept of competition. She will cover how to hyper-scale and change the playing field by moving from building solutions to configuring them, while delivering trusted intelligence to the market: customers, regulators, stakeholders and employees. In the session, several use cases will be presented, sharing the challenge and describing how the infusion of technoscience into core processes led to new business models, intended or not. The session will wrap up with the impact of technoscience on Industry 4.0. #BIGTH19 #ArtificialIntelligence #DigitalTransformation Session presented at Big Things Conference 2019 by Beatriz Sanz Saiz, Global Advisory Data and Analytics Leader at EY - Ernst and Young. 21st November 2019 Kinépolis, Madrid Do you want to know more? https://www.bigthingsconference.com/
In this talk Álvaro will introduce the concept of language models, and review some of the state of the art approaches to building such models (BERT, GPT-2 and XLNet), delving into the network architecture and training strategies used in them. Then he will move on to show how these pre-trained language models can be fine-tuned on small datasets to produce high quality results in downstream NLP tasks, by making use of the open-source PyTorch-Transformers library (https://github.com/huggingface/pytorch-transformers). This library is built on top of the PyTorch deep learning framework, and allows loading pre-trained language models and fine-tuning them easily. This talk will focus on the theoretical grounds of these methods and on his practical experience in applying them. #BIGTH19 #DataScience #DeepLearning Session presented at Big Things Conference 2019 by Álvaro Barbero, Chief Data Scientist at IIC. 20th November 2019 Kinépolis, Madrid Do you want to know more? https://www.bigthingsconference.com/
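As a rough companion to this abstract, here is a minimal sketch of the fine-tuning workflow it describes, written against a recent version of the Hugging Face transformers package (the successor of the pytorch-transformers library linked above, whose API differs slightly). The toy dataset, label scheme and hyperparameters are illustrative assumptions, not material from the talk.

```python
# Illustrative sketch: fine-tune a pre-trained BERT classifier on a tiny
# downstream dataset. Assumes a recent transformers release, where model
# outputs are returned as objects with a .loss attribute.
import torch
from torch.optim import AdamW
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy "small dataset" of (text, label) pairs standing in for a downstream task.
train_data = [("great product, works well", 1), ("arrived broken, useless", 0)]

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):
    for text, label in train_data:
        enc = tokenizer(text, return_tensors="pt", truncation=True)
        out = model(**enc, labels=torch.tensor([label]))
        out.loss.backward()  # gradients flow through the whole pre-trained network
        optimizer.step()
        optimizer.zero_grad()
```

In practice one would batch the inputs, hold out a validation set and tune only as much of the network as needed, but the basic pattern (load pre-trained weights, add a task head, train on the small dataset) is the one the talk builds on.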
In this talk, Carlos Herrera will start by describing the phases of a Data & Research project within Cabify, and how the different roles play together across each of them. First we will see Problem Dimensioning, which starts with more or less anecdotal evidence and finishes when we have rigorously estimated the size of the problem or opportunity. If the dimensioning points to a possible high-impact solution, we move on to Model Prototyping, where we aim to find the simplest possible model that solves the problem to the extent we are aiming for. This phase typically includes some testing, either against cold data from the archive or by listening to the marketplace in real time without affecting any user, so we can take bigger risks. If a viable model is found, the next phase is Industrialisation, where we go into full engineering mode to make sure we build a fully monitored, cost-efficient, highly scalable, highly reliable solution able to cope with our ever-growing volumes. Finally, we have the Inference phase, where we aim to establish a causal relationship between the improvement we are deploying and some measurable experience of our drivers, riders and companies. #BIGTH19 #BigData #Cloud #MachineLearning Session presented at Big Things Conference 2019 by Carlos Herrera, VP of Data & Research at Cabify. 20th November 2019 Kinépolis, Madrid Do you want to know more? https://www.bigthingsconference.com/
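Purely as an illustration of what the Inference phase is after, and not as anything from Cabify, the following sketch compares a metric between a control group and a group exposed to a deployed improvement; all metric names and values are made up.

```python
# Hypothetical check: did the deployed improvement move a metric for the
# treated group versus a control group? (Illustrative numbers only.)
from scipy import stats

control = [4.1, 3.8, 4.0, 3.9, 4.2, 3.7]    # e.g. rider satisfaction without the change
treatment = [4.4, 4.5, 4.1, 4.6, 4.3, 4.4]  # e.g. rider satisfaction with the change

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests the difference is unlikely to be noise; reading it
# causally still depends on the groups having been properly randomised.
```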
The full talk on this topic: https://youtu.be/2lfPy2Xf_R8 ------------- 1. What do you think of serious games and fake news? 2. Two applications of how this digital brain is being used. How can we defend ourselves against the misuse of this brain? ------------- All WTMZ 2019 videos at: https://lk.autentia.com/WTMZ-YouTube Get to know Autentia! Twitter: https://goo.gl/MU5pUQ Instagram: https://lk.autentia.com/instagram LinkedIn: https://goo.gl/2On7Fj/ Facebook: https://goo.gl/o8HrWX
The full talk on this topic: https://youtu.be/u8MZJqhQsLY ------------- 1. What would you say to people who think, "nobody is interested in my data"? 2. What advice would you give for starting to take ownership of your data? ------------- All WTMZ 2019 videos at: https://lk.autentia.com/WTMZ-YouTube Get to know Autentia! Twitter: https://goo.gl/MU5pUQ Instagram: https://lk.autentia.com/instagram LinkedIn: https://goo.gl/2On7Fj/ Facebook: https://goo.gl/o8HrWX