Devoogle currently has 15478 indexed resources related to software development.

Composer is the leading application-level package manager for PHP. It was designed for situations in which the development team has full control of the environment. On multi-user systems, where independent developers create packages, using Composer is not recommended because of potential conflicts. In this session, we review several techniques for using Composer in Joomla, and an innovative new development tool that simplifies shipping prefixed PHP code. Speaker: Aníbal Sánchez (@anibal_sanchez) is a senior PHP developer and leads solution management in Laravel, WordPress, Joomla and PrestaShop at his company Extly (http://www.extly.com). He has experience in rapid web development and DevOps, and more than 15 years in the Internet industry. Meetup: https://www.meetup.com/PHPMad/ Twitter: https://twitter.com/phpmad See you at PHPMad...
One more year, from Autentia we want to thank you for your support and trust and, of course, wish you... a very happy holiday season!
In 2014, Vodafone brought us one of the biggest challenges we have faced as a company: building a virtual mobile operator from scratch. That is how Lowi was born, one of our most ambitious projects and one that put us to the test seven years ago. Today, on this seventh anniversary, we look back proud of having achieved not only complete success in delivering the project, but also of having formed a single team with Vodafone. Want to know more? https://www.paradigmadigital.com/
"¡Ha llegado el final de la 4ª temporada de los #MeetupsGeeksHubs! Un total de 10 sesiones en las que hemos hablado sobre videojuegos, testing, GIT, tecnologías y frameworks como VUE, Next.JS, Chakra UI, Ansible, cómo escalar proyectos web complejos, Management 3.0 e incluso fracasos en IT o qué es un programador para Hacienda. 💥 Y lo mejor, de la mano de grandísimos profesionales que generosamente han querido compartir con la Comunidad todo su bagaje. ¡Millones de gracias! 👏 Para cerrar la temporada por todo lo alto, no podría ser mejor idea que Manuel S.Lemos, el Community Lead de GeeksHubs, que nos ha acompañado en cada streaming, se marcará una buena sesión para compartir y hablar de lo que más le gusta... ¡DATOS! 😜 "Con la llegada de la Inteligencia Artificial, los datos se ha convertido en el principal activo de cualquier aplicación. Para que el entrenamiento de los modelos sea eficiente, necesitamos mantener una calidad y coherencia de los datos muy alta. Es por ello que nacen nuevas practicas para los desarrolladores para hacer que esto esa posible sin crear cuellos de botellas ni funcionamientos en nuestra aplicación de negocio." Contacta con Manu a través de: -Linkedin: https://www.linkedin.com/in/manuelslemos/ -Twitter: https://twitter.com/ManuelS_Lemos ¿Nos cuentas qué te ha parecido esta temporada? 😊https://geekshubscrp.typeform.com/to/glSjtvh1 Comenta en twitter mencionando a @geeks_academy con el hashtag #MeetupsGeeksHubs. 🤝 Únete a nuestra Comunidad en Slack: https://geekshubs.slack.com/join/shared_invite/zt-gwpvxz74-qmJ3VHOEbpRpbY8AoPG8KQ #/ 🚀 Bootcamp Full Stack Developer Presencial en Valencia, Madrid y Barcelona: https://bootcamp.geekshubsacademy.com/full-stack-developer/ 🎥 Canal de Youtube: https://www.youtube.com/user/geekshubs 🐦 Twitter GeeksHubs: https://twitter.com/geekshubs 🐦 Twitter GeeksHubs Academy: https://twitter.com/geeks_academy 📸 Instagram: https://instagram.com/geekshubs ℹ️️️️️ LinkedIn GeeksHubs: https://www.linkedin.com/company/geeks-hubs ℹ️️️️️ LinkedIn GeeksHubs Academy: https://www.linkedin.com/school/geekshubsacademy/ ? Facebook GeeksHubs: https://facebook.com/geekshubs ? Facebook GeeksHubs Academy: https://www.facebook.com/geekshubsacademy 📕 Plataforma online +30 cursos gratuitos: https://geekshubsacademy.com/ 🎧 Podcast I am Geek: https://open.spotify.com/show/4G4PpNzPOeWh5DrrumDXCd
There are currently many alternatives for state management in React. Today we talk about Recoil, which presents itself as a lightweight alternative for managing state between components based on Atoms. If you want to learn more about state management in React, don't miss our course at Codely: https://pro.codely.tv/library/gestion-estado-en-react-171307/ {▶️} CodelyTV ├ 🎥 Subscribe: https://youtube.com/c/CodelyTV?sub_confirmation=1 ├ 🐦 Twitter CodelyTV: https://twitter.com/CodelyTV ├ 🍺 Twitter Isma: https://twitter.com/ismanapa ├ 📸 Instagram: https://instagram.com/CodelyTV ├ ℹ️ LinkedIn: https://linkedin.com/company/codelytv ├ 🟦 Facebook: https://facebook.com/CodelyTV └ 📕 Course catalogue: https://bit.ly/cursos-codely
Two specialists from the well-known Huuuge Games studio will guide you through A/B listing tests: the lessons they have learned from thousands of different tests, how to run listing experiments the right way, how to read the results and how to find the best possible variant. All presentations are based on examples and case studies. See below for details about the agenda and speakers. JAKUB MARKIEWICZ, ASO Specialist at Huuuge Games. He started as a graphic designer and animator ten years ago and, after a few years, switched to a more marketing-focused career path. Currently, he supports all products at Huuuge Games by optimising every aspect of the storefronts on all platforms. Privately, he likes playing board games, hiking and reading. KACPER CHWALIŃSKI, Head of Organic Growth at Huuuge Games, a global producer and publisher of free-to-play mobile games. He has worked in the games industry for over a decade. He started as a content graphic designer and worked on games for PC, console and mobile. He has been at Huuuge Games for 7 years, which he joined as a graphic designer, and helped create one of the largest social casino games – Huuuge Casino. He then joined the ASO and Organic Growth team, where for 5 years he has been optimising conversion and visibility in the stores for all games in the Huuuge Games portfolio. He obtained a Master of Arts degree at the Academy of Fine Arts in Łódź, where he developed his passion for computer graphics. Privately, he is Jeremiasz's father and is passionate about board games and long walks.
Microservices architectures are inherently distributed, and building such solutions always brings interesting challenges to the table: resilient service invocation, distributed transactions, on-demand scaling, idempotent message processing and more. Deploying microservices on Kubernetes doesn't solve these problems, and developers need to learn and use many SDKs on top of frameworks such as .NET, Java, Python, Golang, etc. This session will show you how to overcome those challenges using Dapr: a portable runtime for building distributed, scalable and event-driven microservices.
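A minimal sketch of what this looks like in practice, assuming a Dapr sidecar running locally on its default HTTP port (3500), a hypothetical target service with app-id `orders` exposing a `create` method, and a state store component named `statestore`:

```python
# Sketch: calling another microservice and saving state through the local
# Dapr sidecar's HTTP API. The app-id "orders", method "create" and the
# "statestore" component name are hypothetical examples.
import requests

DAPR_HTTP_PORT = 3500  # default Dapr sidecar HTTP port

def create_order(order: dict) -> dict:
    # Service invocation: POST /v1.0/invoke/<app-id>/method/<method-name>
    url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0/invoke/orders/method/create"
    response = requests.post(url, json=order, timeout=5)
    response.raise_for_status()
    return response.json()

def save_state(key: str, value: dict) -> None:
    # State management: POST /v1.0/state/<state-store-name>
    url = f"http://localhost:{DAPR_HTTP_PORT}/v1.0/state/statestore"
    requests.post(url, json=[{"key": key, "value": value}], timeout=5).raise_for_status()

if __name__ == "__main__":
    order = {"id": "42", "item": "book"}
    print(create_order(order))
    save_state(order["id"], order)
```

With the sidecar pattern the application only ever talks HTTP to localhost; Dapr resolves the target service and the state store component, which is what removes the per-language SDK burden mentioned above.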
Electronic identities (eIDs) have emerged as a novel way of identity proofing under the umbrella of the digital revolution that society has been experiencing over the last decades. Within these solutions, access is successfully provided by means of either a password or a set of biometric features. However, these access methods are not enough for the onboarding process in which the digital identity is created, which still requires physical presence. An alternative consists of the automatic verification of physical ID documents. In the framework of the European project IMPULSE "Identity Management in PUbLic Services" (Grant Agreement no. 101004459), on the generation and use of digital identities in public services, we have designed an automatic document verification system based on a combination of different cutting-edge technologies, robust against attempts at deception and forgery, and transparent for the end user. It will allow for easy onboarding from a smartphone.

Our pipeline receives both an image of the ID document (e.g., passport, national ID card, etc.) used to prove identity and a series of personal data provided by the user and retrieved via a form. We use a variety of digital image processing methods to treat the received image. In this regard, we implement algorithms to detect whether the image is excessively blurry or dark, and to crop the image by matching the Scale-Invariant Feature Transform (SIFT) features of the target image to a document model. On the processed image we apply state-of-the-art Optical Character Recognition (OCR) methods, based on Long Short-Term Memory (LSTM) neural networks, with a twofold objective: to recognise the text present within the document fields and to obtain the bounding boxes of the characters that form it.

The document validator must assess two aspects. First, the user sending the information must be the same person whose information appears in the photographed ID document. Second, the image cannot correspond to a forged document. The first assessment is performed by calculating a dissimilarity measure, the Levenshtein distance, between the information fields introduced in the form and the OCR-recognised text. Should this distance remain below a certain threshold, it is understood that the ID document truly belongs to the user. ID document forgery, on the other hand, is complicated to detect due to the lack of training data stemming from privacy concerns; examples of tampered documents are particularly difficult to obtain, so any possibility of using supervised binary classification algorithms is discarded. Our detector uses the SIFT features of the image to detect portions that have been copied and moved to other locations. Moreover, a set of character features is built from the OCR-recognised bounding boxes and fed into a one-class Support Vector Machine (SVM) classifier, trained only on genuine documents.

This development is part of a novel blockchain- and artificial-intelligence-based eID concept, meant to be useful in a wide range of public service areas and aiming to solve the inefficiencies derived from the current eID data management model implemented by governments. On this subject, six pilot case studies will be conducted in five European countries (Spain, Italy, Bulgaria, Iceland and Denmark), to aid processes such as issuing complaints, e-governance and legal identities for persons of business.
In summary, our detection system for fake or forged documents will lead to an easy onboarding solution that removes the need for in-person visits. At the same time, it is robust in the background and highly resilient against attempts by citizens with suspicious intentions to sign up using fake or forged IDs. In addition, it will find practical use in different public service areas, being user-friendly, simple and transparent.
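As a rough illustration (not the project's actual code), the sketch below shows the two checks described above: a Levenshtein-distance comparison between the form fields and the OCR output, and a one-class SVM trained only on genuine documents to flag anomalous character features. The field names, the distance threshold and the feature dimensionality are assumptions made for the example.

```python
# Minimal sketch of the two validation steps described in the abstract.
# Field names, threshold and features are illustrative, not the real pipeline.
import numpy as np
from Levenshtein import distance as levenshtein  # pip install python-Levenshtein
from sklearn.svm import OneClassSVM

MAX_DISTANCE = 2  # hypothetical per-field tolerance for OCR noise

def fields_match(form_fields: dict, ocr_fields: dict) -> bool:
    """Accept the document only if every form field is close to the OCR text."""
    return all(
        levenshtein(form_fields[name].lower(), ocr_fields.get(name, "").lower()) <= MAX_DISTANCE
        for name in form_fields
    )

def train_forgery_detector(genuine_features: np.ndarray) -> OneClassSVM:
    """One-class SVM trained only on character features from genuine documents."""
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
    detector.fit(genuine_features)
    return detector

# Example usage with made-up data:
form = {"surname": "GARCIA", "document_number": "X1234567"}
ocr = {"surname": "GARC1A", "document_number": "X1234567"}  # OCR confused I with 1
print(fields_match(form, ocr))  # True: within the tolerated edit distance

genuine = np.random.rand(200, 8)           # e.g. per-character width/height/spacing stats
detector = train_forgery_detector(genuine)
print(detector.predict(np.random.rand(3, 8)))  # +1 = looks genuine, -1 = flagged
```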
It is well known that data quality and quantity are crucial for building Machine Learning models, especially when dealing with Deep Learning and neural networks. But besides the data required to build the model itself, there is another, often overlooked type of data required to build a production-grade Machine Learning platform: metadata. Modern Machine Learning platforms contain a number of different components: distributed training, Jupyter Notebooks, CI/CD, hyperparameter optimization, feature stores, and many more. Most of these components have associated metadata, including versioned datasets, versioned Jupyter Notebooks, training parameters, test/training accuracy of a trained model, versioned features, and statistics from model serving. For the DataOps team managing such production platforms, it is critical to have a common view across all this metadata, as we have to ask questions such as: Which Jupyter Notebook has been used to build model XYZ currently running in production? If there is new data for a given dataset, which models (currently serving in production) have to be updated? In this talk, we look at existing implementations, in particular MLMD, part of the TensorFlow ecosystem.
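As a rough sketch of what recording such lineage looks like with MLMD (following its standard Python API, but with hypothetical artifact types, URIs and property names), one might register a dataset version and the training run that consumed it like this:

```python
# Sketch: recording a dataset artifact and the training run that consumed it
# in ML Metadata (MLMD). Type names, URIs and properties are illustrative.
from ml_metadata.metadata_store import metadata_store
from ml_metadata.proto import metadata_store_pb2

# Local SQLite-backed store; production setups typically point at MySQL.
config = metadata_store_pb2.ConnectionConfig()
config.sqlite.filename_uri = "mlmd.sqlite"
config.sqlite.connection_mode = 3  # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(config)

# Register an artifact type for versioned datasets.
dataset_type = metadata_store_pb2.ArtifactType()
dataset_type.name = "DataSet"
dataset_type.properties["version"] = metadata_store_pb2.STRING
dataset_type_id = store.put_artifact_type(dataset_type)

# Register an execution type for training runs.
trainer_type = metadata_store_pb2.ExecutionType()
trainer_type.name = "Trainer"
trainer_type_id = store.put_execution_type(trainer_type)

# Record a concrete dataset version and a training run that used it.
dataset = metadata_store_pb2.Artifact()
dataset.type_id = dataset_type_id
dataset.uri = "gs://my-bucket/datasets/churn/v3"   # hypothetical URI
dataset.properties["version"].string_value = "v3"
[dataset_id] = store.put_artifacts([dataset])

run = metadata_store_pb2.Execution()
run.type_id = trainer_type_id
[run_id] = store.put_executions([run])

# Link them, so we can later ask "which runs consumed dataset v3?"
event = metadata_store_pb2.Event()
event.artifact_id = dataset_id
event.execution_id = run_id
event.type = metadata_store_pb2.Event.INPUT
store.put_events([event])

# Lineage query: all events that involve this dataset.
print(store.get_events_by_artifact_ids([dataset_id]))
```

Questions like "which models must be retrained when this dataset changes?" then become graph queries over artifacts, executions and events rather than tribal knowledge.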
Our purpose is to provide an analysis of the basic objectives and value propositions of any Customer Data Platform (CDP) by encouraging discussion with participants and sharing our own experience. In this sense, we would like to have the opportunity to present a production use case of a multi-cloud Customer Data Platform. After that, to enrich our presentation, we will start a discussion on the reasons for separating a CDP into two domains: the domain of personally identifiable data and the domain of anonymised data. We will then delve into the specific production use case, examining the value propositions for the end customer, for businesses and from an operational point of view. Through these points, we are convinced that the audience will clearly see the business and technical drivers for designing and building the CDP, not 100% in Salesforce and not 100% in GCP. In order to illustrate our presentation through a real case, we propose to deepen the discussion with a technical twist, and we will share Making Science's experience in building a custom CDP with a cloud-first design and development. Among the points we will highlight are:

• A review of the GCP and Salesforce services that the solution used.
  o A review of the GCP and Python-based technology stack and development design to continuously ingest signals and events from over 38 data sources.
• The management of the bi-directional exchange of signals with the client's website.
• The selection of serverless GCP technologies for ingesting signals from the customer's website while protecting the system from external attackers.
• The design approach to protect the solution from duplicate signal transmissions from streaming sources.
• The no-harassment approach to continuous batch event processing.
• The design point of view to protect the solution from duplicate batch transmissions. We will walk through our design considerations with respect to signal/event publishing to CDP processes and external machine learning enrichment systems.
• The persistent keyless data store operating at the core of the CDP, giving the most up-to-date view of the client in both an anonymised and a de-anonymised view, depending on the domain.
• The bi-directional anonymisation/de-anonymisation gateway between Salesforce and GCP. The gateway had to support sending anonymised data to the marketing analytics domain within GCP and support receiving custom engagement requests from the marketing analytics domain to the customer analytics domain. We will examine in detail the GCP technologies used to determine which attributes of a given data flow/feed needed to be anonymised (a minimal sketch of this step is shown after this list).
• How signal enrichment was supported from both the personally identifiable data domain and the anonymised CDP analytics domain.
• The additional design and development steps we took to ensure GDPR compliance by leveraging features within GCP. We will also examine our implementation to ensure traceability of consent on a per-customer basis.
• The design and development approach and technologies used to deliver 'human readable' analytics, even as the CDP's customer-centric data warehouse continually changes.
• A review of the selection of our GCP serverless data warehouse and a look at the design approaches applied to ensure efficient, consistent and governed access to data.
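As a hypothetical illustration of that anonymisation step (not Making Science's actual implementation), the sketch below pseudonymises the PII attributes configured for a data flow with a keyed hash before the event crosses from the personally identifiable domain into the anonymised analytics domain:

```python
# Hypothetical sketch of an anonymisation gateway: events leaving the
# personally identifiable domain get their PII attributes replaced by a
# deterministic keyed hash before entering the anonymised analytics domain.
# Attribute names and secret handling are illustrative assumptions.
import hashlib
import hmac

PII_ATTRIBUTES = {"email", "phone", "full_name"}  # configured per data flow/feed
SECRET_KEY = b"rotate-me-and-keep-in-a-secret-manager"

def pseudonymise(value: str) -> str:
    """Keyed hash, so the same customer always maps to the same token."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymise_event(event: dict) -> dict:
    """Replace PII attributes; leave behavioural attributes untouched."""
    return {
        key: pseudonymise(value) if key in PII_ATTRIBUTES and value else value
        for key, value in event.items()
    }

event = {"email": "jane@example.com", "page": "/pricing", "session_id": "abc123"}
print(anonymise_event(event))  # email is tokenised, behavioural fields pass through
```

A deterministic token keeps the anonymised domain joinable per customer, while de-anonymisation stays possible only inside the personally identifiable domain, where the mapping is held.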
To conclude, we will put the spotlight on key learnings from the multi-cloud, multi-domain Customer Data Platform implementation, as well as share Making Science's design approach to preparing a CDP to be deployed in a cloud-agnostic manner.