Devoogle currently has 15,478 indexed resources related to software development.

How Transformers can “transform” how we fight against fake news by Ruben Miguez at Big Things Conference 2021. In this talk Newtral will show how we are leveraging the Transformer architecture to build deep learning systems that automate fact-checking. Today, human fact-checkers are overloaded by the massive amount of disinformation on the Internet. Fake news is easy to generate, but data verification is a slow, human-intensive task. Without tech support, humans cannot win this fight. Newtral is combining its expertise in fact-checking (we provide official fact-checking services for Facebook, TikTok and WhatsApp) with deep learning architectures to create the next generation of fully automated fact-checking systems. Our goal is to develop AI assistants that increase fact-checkers' productivity 30-fold and save up to 90% of the time and cost in fact-checking operations. We will start with a brief introduction to our previous work, based on traditional machine learning models (SVMs, decision trees…) and feature extraction through well-known NLP frameworks. Next, we will briefly introduce the Transformer architecture and its rapid evolution over the last three years, highlighting how this new tech has supported novel use cases in industry, including speech-to-text engines, text generation models and improved QA mechanisms. After providing the audience with enough context to understand our technology stack, we will detail the design-training-testing process followed to fine-tune BERT-like models for two specific use cases: 1) the automated detection of verifiable sentences and 2) measuring semantic similarity between sentences in different languages. We will explain the main challenges found in this process, including limitations of fixed-size vocabulary models, multi-language approaches and data quality issues. We will show the final outcomes of our current AI system tested on 21 EU languages. Having described our iterative design process, we will briefly introduce our production deployment using AWS Neuron. To minimize internal costs we reduced the number of models from three to one and compiled our architecture to run high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 instances. Finally, we will present our prototype solution, currently used by Newtral and five other fact-checkers, showing a real-life scenario of how this tech works in our day-to-day operations. Two different prototypes will be explained: A) a video fact-checking tool that automatically transcribes video/audio and spots relevant fact-checks; B) a Twitter monitoring tool (ClaimHunter) that automatically follows a set of politics-related accounts and notifies fact-checkers when factual tweets are detected. We will describe the main functional components of these technical solutions. Finally, we will introduce the main challenges ahead of us on the way to fully automated solutions. We will briefly explain how the Unified Text-to-Text Transformer (T5 architecture) works and discuss how multi-task training could help us build a new generation of expert fact-checking systems. We will also explain what other main challenges remain unsolved in the current state of the art in automated data verification. Model explainability in neural networks is one of the most important research challenges ahead, because the AI must not only say whether something is true, but also explain the reasoning behind its judgment.
We will also briefly discuss the AI ethics involved in developing such a system and what design principles to apply to limit potential bias.
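The abstract mentions fine-tuning BERT-like models to detect verifiable sentences. As a rough illustration of what such a fine-tuning step can look like (this is a minimal sketch, not Newtral's actual code; the xlm-roberta-base checkpoint, hyperparameters and toy examples are assumptions):

```python
# Minimal sketch: fine-tune a multilingual BERT-like encoder as a binary
# classifier that flags "verifiable" sentences. Model name, hyperparameters
# and the toy dataset are illustrative assumptions, not Newtral's setup.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

model_name = "xlm-roberta-base"  # assumption: any multilingual encoder could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy examples: label 1 = contains a checkable factual claim, 0 = not verifiable.
data = Dataset.from_dict({
    "text": ["Unemployment fell by 3% last quarter.", "I think we will do great."],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="claim-detector", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```

The same encoder family can be fine-tuned with a paired-sentence input for the second use case, cross-lingual semantic similarity.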
Over the last decade, ride hailing companies have changed the way we move in our cities. Founded in 2011, Cabify is one of the largest international players in this industry, operating in more than 40 cities in Latin America and Spain. Cabify services are based on a powerful technological stack that allows smooth, simultaneous operation for millions of journeys worldwide, 24 hours a day, 7 days a week. One of the key components of this stack is the tool that calculates the time it will take for drivers to travel between locations when going to pick up clients or driving them to their destination, a.k.a. the Estimated Time of Arrival (ETA). A good ETA calculator is crucial for driver assignment, pricing, etc. There are different approaches to obtaining this ETA, depending on the company’s internal capabilities and external vendor policies. At Cabify we have developed Cabimaps, our own ETA calculator. Cabimaps is a deep neural network that predicts the time it will take for a driver to travel between two locations. The main purpose of this talk is to tell Cabimaps’ story since its inception, explaining the most important features that have been incorporated into it over the years. Cabimaps has been an integral part of our stack since 2019 and we revise, retrain and fine-tune it frequently to keep up with market changes. Cabimaps gives us the freedom to control how ETAs are calculated, avoiding the costs and vendor lock-in of relying on an external provider (e.g., Google Maps or Here). Cabimaps is trained exclusively on our own data, removing any dependency on external sources. Cabify has been a data-driven company from the start, so we have a lot of useful information that we can use. Cabimaps is not a route calculator (it does not return the route the driver needs to follow), but a time estimator. This means Cabimaps does not need to rely on complex data representations of the city map. Cabimaps only uses the origin and destination coordinates and the date and time of the trip. Several transformations are applied before feeding this information to the neural network. Since Cabimaps does not rely on a city map, it needs a smart way to incorporate geographical information into the model. Many aspects of the geographical surroundings can impact the driver’s route, like street layout, nearby services, etc. We have to generalize these features to incorporate them into the model. We achieve this through a combination of spatial indexing and feature embedding. Cabimaps uses spatial indexes to convert geographic coordinates into cell IDs, leveraging different index levels to capture geographical information at different scales. Then, it uses specially trained neural subnets to calculate index cell embeddings. We have trained Cabimaps models for more than 40 cities in 8 countries. Our objective is to obtain ETA calculations similar to state-of-the-art commercial providers, so we can rely entirely on our service. Cabimaps has evolved over the years, and currently produces similar or better results than our best vendor in 65% of the cities, including very large markets like Buenos Aires, Madrid or Mexico City. Where Cabimaps still comes second, the difference from the best vendor is 16 seconds on average. From a purely technical point of view, we faced a difficult problem in terms of the necessary model accuracy, reliability and computational performance.
From a business perspective, we managed to develop a piece of technology that helped us to significantly reduce operational costs and our dependence on external providers for one of our core processes, without compromising the quality of the services provided to our users.
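To make the spatial indexing plus cell embedding idea concrete, here is a minimal sketch under stated assumptions (Cabimaps' internals are not public; the use of Uber's H3 index at resolutions 7 and 9, the embedding size, the time features and the toy Madrid coordinates are all illustrative, and Cabify could equally use S2, geohash or another index):

```python
# Minimal sketch: convert origin/destination coordinates to spatial-index
# cells at two resolutions, embed the cell IDs, and regress travel time.
# Not Cabify's Cabimaps code; all choices below are assumptions.
import torch
import torch.nn as nn
import h3  # assumption: h3 v3 API ("geo_to_h3"); any spatial index would do

def cells(lat, lng, resolutions=(7, 9)):
    """Cell IDs for one point at a coarse and a fine resolution."""
    return [h3.geo_to_h3(lat, lng, r) for r in resolutions]

class EtaModel(nn.Module):
    def __init__(self, vocab, emb_dim=16):
        super().__init__()
        self.vocab = vocab                      # cell string -> integer index
        self.emb = nn.Embedding(len(vocab), emb_dim)
        # 4 cells (origin/destination x 2 resolutions) + 2 time features
        self.mlp = nn.Sequential(nn.Linear(4 * emb_dim + 2, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, cell_ids, time_feats):
        # cell_ids: (batch, 4) long tensor; time_feats: (batch, 2), e.g. hour, weekday
        e = self.emb(cell_ids).flatten(start_dim=1)
        return self.mlp(torch.cat([e, time_feats], dim=1)).squeeze(-1)  # seconds

# Toy usage: one Madrid trip, vocabulary built on the fly for the example.
trip_cells = cells(40.4168, -3.7038) + cells(40.4530, -3.6883)
vocab = {c: i for i, c in enumerate(dict.fromkeys(trip_cells))}
model = EtaModel(vocab)
ids = torch.tensor([[vocab[c] for c in trip_cells]])
eta = model(ids, torch.tensor([[18.0, 4.0]]))   # hour=18, weekday=Friday
```

Using several index resolutions lets the embeddings capture both neighbourhood-scale context (street layout, nearby services) and city-scale context without ever loading a road graph.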
Building Data Literacy as the Foundation of your Enterprise Data Culture by Eva Murray at Big Things Conference. Organisations are under pressure to maximise the return on their investment in data and analytics. As analytics tools become available to an increasingly large audience, data literacy is lagging behind and businesses face the risk of missing their targets. At first this may seem counter-intuitive. After all, organisations have spent large sums of money on hiring data experts and acquiring the right tools and solutions to create, collect, store and manage massive amounts of data. While many organisations are analysing data to inform their decision making and strategic direction across product development, supply chain, marketing, sales and support, a large part of them still struggle to fully align their people, processes and technology for maximum effectiveness. In other words, they have the data, but they are struggling to get real insights out of it. This is why data literacy is quickly becoming an essential skill for professionals in modern enterprises. If organisations want their analysts and knowledge workers to go beyond reporting historical observations and use data to drive decision making at a strategic level, then they must devote themselves to increasing data literacy in their organisation. In this talk Eva Murray, Senior Evangelist at Snowflake, author, data visualisation expert and recognised thought leader on data culture and data communities, will share why it is so important to equip your people with data skills and how you can achieve a data literate workforce. Eva will also provide you with practical steps for developing a strong data culture in your organisation so you can ensure that your data literate analysts and knowledge workers can effectively share and spread their enhanced insights. Based on her experience working with thousands of analysts over the past five years, Eva has developed a framework that provides a balance of planning and preparation followed by practical application and execution. As she shares this framework during the talk, you can expect specific takeaways that you can implement in your team, department and organisation straight afterwards. People working with data – be they data analysts, business analysts, data scientists, researchers or any other professional relying on data to do their job – need to be equipped with the necessary skills and knowledge to treat data correctly and apply tools to their greatest effect. While many professionals have attained advanced degrees at colleges and universities, where statistical analysis, research projects and surveys were part of the curriculum, this part of their journey is often years if not decades in the past. Leaders and the organisations that employ them must ensure that those working with data and given responsibility to create insights for driving business decisions are taught basic data literacy skills and are trained in working with data. Many modern analytics tools make it incredibly easy for people to work with data but can mask a lack of knowledge or understanding. With this talk Eva wants to encourage leaders to place more emphasis on education and ongoing development for their data professionals as well as their business people, to ensure that the data-driven culture they are working to build is based not just on facts and data but also on the correct handling of data and reliable internal processes.
Graph Data Science by Paco Nathan at Big Things Conference 2021. Since the late 2010s, business use cases for graph technologies have become more widespread. Along with that, a practice of _graph data science_ has emerged, blending graph capabilities into existing data science teams and industry use cases. Research topics have also moved into production. For example, recent innovations such as _graph neural networks_ provide excellent solutions in business use cases for _inference_ – a topic which has otherwise perplexed the semantic web community for decades. In practice, we tend to encounter “disconnects” between the expectations of IT staff (who are more familiar with relational databases and big data tools) and business users (who are more familiar with their use cases, such as network analysis in logistics). This talk explores _graph thinking_ as a cognitive framework for approaching complex problem spaces. This is the missing link between what stakeholders, domain experts, and business use cases require – versus what comes from more “traditional” enterprise IT, which is probably focused on approaches such as the “data lakehouse” or similar topics, but not doing much yet with large graphs. We’ll explore some of the more common use cases for graph technologies among different business verticals and look at how to approach a graph problem from the point of having a blank whiteboard. Where are graph databases needed? Where should one focus more on graph computation with horizontal scale-out and hardware acceleration? How do graph algorithms complement what graph queries perform? There is a lot of excellent open source software that can be leveraged, and our team at Derwen has been busy with open source integration on behalf of a very large EU manufacturing firm. Current solutions integrate Ray, Dask, FastAPI, RAPIDS, RDFlib, openCypher, and several C++ libraries for HPC.
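The query-versus-algorithm distinction the talk raises can be illustrated with a small sketch (a toy supplier/part graph of my own invention; networkx stands in here for the heavier tooling the talk mentions, such as RAPIDS or openCypher engines):

```python
# Minimal sketch of graph queries vs. graph algorithms.
# The supplier/part graph is an invented example, not from the talk.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("supplier_A", "part_1"), ("supplier_B", "part_1"),
    ("part_1", "assembly_X"), ("part_2", "assembly_X"),
    ("assembly_X", "product_Z"),
])

# Query-style question (local pattern matching, what openCypher/SPARQL excel at):
# "which suppliers feed part_1?"
suppliers = list(g.predecessors("part_1"))

# Algorithm-style question (whole-graph computation, where scale-out matters):
# "which nodes are most structurally critical to the final product?"
criticality = nx.pagerank(g.reverse(copy=True))
print(suppliers, max(criticality, key=criticality.get))
```

Queries answer local, pattern-shaped questions; algorithms traverse the whole graph, which is where horizontal scale-out and hardware acceleration earn their keep.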
👇👇👇 References 👇👇👇 ✅ Legendary and essential references cited in the video: 🔴 2017 post: https://www.javiergarzas.com/2017/05/interiorizando-el-significado-del-dual-track-y-que-las-sprint-review-no-son-para-el-product-owner.html 🔴 2015 post: https://www.javiergarzas.com/2015/01/descubrimiento-constante-o-continuous-discovery.html 👉 Case Study of Customer Input For a Successful Product (Lynn Miller): https://research.cs.vt.edu/ns/cs5724papers/miller.agile.pdf 👉 Adapting Usability Investigations for Agile User-centered Design (Desirée Sy): https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.437.6793&rep=rep1&type=pdf ✅ To learn more: 👉 Visit 233academy.com: https://www.233academy.com 👉 Publications: https://233gradosdeti.com/publicaciones-233/ 👉 If you want to know more about Agilmantium: https://agilmantium.com/ 👉 Telegram group "New Agile en español": https://t.me/joinchat/oIYpzO9z7T82ZjU8 📧 For more information and to stay in touch, find us at... 👉 Blog: http://www.javiergarzas.com/ 👉 Twitter: https://twitter.com/jgarzas 👉 Instagram: https://www.instagram.com/javiergarzas/ 👉 Linkedin: http://es.linkedin.com/in/jgarzas 👉 Facebook: https://www.facebook.com/javiergarzas.blog May agility be with you!
Change for the Better: Improving Predictions by Automating Drift Detection by Peter Webb & Gokhan Atinc at Big Things Conference 2021. A machine learning solution is only as good as its data. But real-world data does not always stay within the bounds of the training set, posing a significant challenge for the data scientist: how to detect and respond to drifting data? Drifting data poses three problems: detecting and assessing drift-related model performance degradation; generating a more accurate model from the new data; and deploying a new model into an existing machine learning pipeline. Using a real-world predictive maintenance problem, we demonstrate a solution that addresses each of these challenges: data drift detection algorithms periodically evaluate observation variability and model prediction accuracy; high-fidelity physics-based simulation models precisely label new data; and integration with industry-standard machine learning pipelines supports continuous integration and deployment. We reduce the level of expertise required to operate the system by automating both drift detection and data labeling. Process automation reduces costs and increases reliability. The lockdowns and social distancing of the last two years reveal another advantage: minimizing human intervention and interaction to reduce risk while supporting essential social services. As we emerge from the worst of this pandemic, accelerating adoption of machine autonomy increases the demand for the automation of human expertise. Consider a fleet of electric vehicles used for autonomous package delivery. Their batteries degrade over time, increasing charging time and diminishing vehicle range. The batteries are large and expensive to replace, and relying on a statistical estimate of battery lifetime inevitably results in replacing some batteries too soon and some too late. A more cost-effective approach collects battery health and performance data from each vehicle and uses machine learning models to predict the remaining useful lifetime of each battery. But changes in the operating environment may introduce drift into health and performance data. External temperature, for example, affects battery maximum charge and discharge rate. And then the model predictions become less accurate. Our solution streams battery data through Kafka to production and training subsystems: a MATLAB Production Server-deployed model that predicts each battery’s remaining useful lifetime and a thermodynamically accurate physical Simulink model of the battery that automatically labels the data for use in training new models. Since simulation-based labeling is much slower than model-based prediction, the simulation cannot be used in production. The production subsystem monitors the deployed model and the streaming data to detect drift. Drift-induced model accuracy degradation triggers the training system to create new models from the most current training sets. Newly trained models are uploaded to a model registry where the production system can retrieve and integrate them into the deployed machine learning pipeline.
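The core drift-detection step can be illustrated with a short sketch. The talk's pipeline is MATLAB/Simulink- and Kafka-based; this Python fragment only shows the general idea of comparing the live distribution of a feature against the training distribution, and the feature, significance level and synthetic data are assumptions:

```python
# Minimal sketch of distribution-drift detection on one feature.
# Illustrative only; not the MATLAB-based system described in the talk.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test: has this feature's distribution shifted?"""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
training_temp = rng.normal(20, 3, 5000)   # external temperature seen during training
live_temp = rng.normal(27, 3, 500)        # warmer operating conditions in production

if feature_drifted(training_temp, live_temp):
    # In the described architecture this event would trigger simulation-based
    # relabelling and retraining, then publication to the model registry.
    print("drift detected: trigger relabelling and retraining")
```

Monitoring prediction accuracy against the simulation-labelled data complements this kind of input-distribution test, catching drift that only shows up in the model's error.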
Data Fabric: Transform your Frozen Data into Liquid, Adaptable Data by Oscar Mendez at Big Things Conference. From the Data Ice Age to the Adaptable Data Innovation Age. According to most analysts, "Data Fabric" is the most important technology trend for 2022. In this talk we will discuss how a Data Fabric can transform your current data from frozen data locked inside rigid schemas in data silos into liquid data that adapts to different use cases and consumers, taking on new forms and shapes with dynamic schemas. We will see how this transforms the main data use cases your company runs today, generating maximum value from data in a fraction of the time. We will also go over some of the technologies related to Data Fabric automation: from AI to automated data governance, graph technologies, ontologies, semantic queries, data marketplaces, smart data contracts, and containers, as the new way to let the data adapt to the users and stop forcing the users to adapt to the data.
After starting as an “experiment” in 2015, Swedish startup Anatomic Studios focuses on embracing individuality to build custom-made prosthetics. For the United Nations’ International Day of Persons with Disabilities, learn how the team works at the intersection of fashion, design and 3D-tech to embrace the individuality of their clients. #IDPD2021
Good AI for Good by Richard Benjamins at Big Things Conference 2021 AI, as a transformational technology, provides us with many opportunities in business, society, government and in our lives. However, there are also negative, although oftentimes unintended, consequences of the use of AI. This talk will focus on social opportunities for AI (AI for Good) and on how to ensure that the use of AI does not have negative consequences (Good AI).
Unexplainable AI: Why machines are acting in that way by Moisés Martínez at Big Things Conference