News

GitLab Heroes Unmasked – How I became acquainted with the GitLab Agent for Kubernetes

A key to GitLab’s success is our vast community of advocates. Here at GitLab, we call these active contributors “GitLab Heroes.” Each hero contributes to GitLab in numerous ways, including elevating releases, sharing best practices, speaking at events, and more. Jean-Philippe Baconnais is an active GitLab Hero who hails from France. We applaud his contributions, including leading community engagement events. Below, Baconnais shares his interest in Kubernetes and explains how to deploy and monitor an application in Kubernetes without leaving GitLab.

I’ve been a developer since 2007, and along the way I’ve learned a lot about continuous integration, deployment, infrastructure, and monitoring. In both my professional and personal time, my favorite activity remains software development. After creating a new application with multiple components, I wanted to deploy it on Kubernetes, which has become hugely popular over the last few years, so I could experiment with the platform. That promised plenty of fun. I knew some of the terminology, having worked with Kubernetes in production for five years, but administering it myself is not my “cup of tea” 😅.

My first deployment in Kubernetes

When I decided to deploy an application on Kubernetes, I wasn’t sure where to start until, while navigating my project in GitLab, I saw a menu called “Kubernetes.” I wanted to know what GitLab was hiding behind it. Does this feature link my project’s sources to a Kubernetes cluster? I used the credit offered by Google Cloud to discover and test the platform. Deploying my application on Kubernetes was easy. I wrote a blog post in 2019 describing how I did it, or rather, how GitLab helped me create this link so easily. In this blog post I will go further and talk about what’s changed since then. Behind the “Kubernetes” menu, GitLab helps you integrate Kubernetes into your project. You can create a cluster on Google Cloud Platform (GCP) or Amazon Web Services (AWS) directly from GitLab.
If you already have a cluster on one of these platforms, or anywhere else, you can connect to it. You just need to specify the cluster name, the Kubernetes API URL, and a certificate. GitLab is a DevOps platform, and monitoring is part of the DevOps lifecycle. GitLab deploys an instance of Prometheus to gather information about your cluster and facilitate the monitoring of your application. For example, you can see how many pods are deployed in your environment and their states. You can also view charts and information about your cluster, like available memory and CPU. All these metrics are available by default, without any changes to your cluster. You can also read the logs directly in GitLab. For a developer, it’s great to have all this information in the same tool, and it saves time.

A new way to integrate Kubernetes

Everything I explained above no longer quite exists. The release of GitLab 14.5 was the beginning of a revolution. The certificate-based Kubernetes integration had security limitations, and many issues were filed about it. GitLab’s teams worked on a new way to connect to your cluster, and in version 14.5 the GitLab Agent for Kubernetes was released!

GitLab Agent for Kubernetes

The GitLab Agent for Kubernetes is a new way to connect to your cluster. This solution is easy to […]
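The excerpt is cut off here, but as a rough sketch of the setup it describes: the agent reads its configuration from a file committed to the repository at .gitlab/agents/<agent-name>/config.yaml. The agent name and project path below are placeholders, not values from the article.

```yaml
# .gitlab/agents/my-agent/config.yaml
# "my-agent" and "my-group/my-project" are placeholder names.
# Grant CI/CD jobs in the listed project access to the cluster:
ci_access:
  projects:
    - id: my-group/my-project
```

After registering the agent in the GitLab UI, you install it into the cluster (for example with Helm, using the token GitLab generates) and the connection no longer relies on sharing a cluster certificate with GitLab.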

Read More

How to automate software delivery using Quarkus and GitLab

In this day and age, organizations need to deliver innovative solutions to their customers faster than ever to stay competitive. This is why solutions that speed up software development and delivery, such as Quarkus and GitLab, are being adopted by teams across the world. Quarkus, also known as Supersonic Subatomic Java, is an open source Kubernetes-native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from respected Java libraries and standards. Quarkus has been steadily growing in popularity because of the benefits it delivers: cost savings, faster time to market/value, and reliability. Quarkus offers two modes: Java and native. Its Java mode builds your application using the JDK, and its native mode compiles your Java code into a native executable. GitLab, the One DevOps Platform, includes capabilities for all DevOps stages, from planning to production, all with a single model and user interface to help you ship secure code faster to any cloud and drive business results. Besides DevOps support, GitLab also offers GitOps support. The combination of Quarkus and GitLab can empower your developers and operations teams to collaborate better and spend more time innovating to deliver business value and differentiating capabilities to end users. In this article, we show how to automate the software delivery of a generated Quarkus application in Java mode using GitLab Auto DevOps. Below we list the steps to accomplish this.

Prerequisite

The prerequisite for the following instructions is to have a K8s cluster up and running and associated with a group in your GitLab account. For an example of how to do this, please watch this video.

Generate your Quarkus project using the generator and upload to GitLab

From a browser window, point to the Quarkus generator site, https://code.quarkus.io, and click on the button Generate your application.
Generate a sample Quarkus application using the generator

On the popup window, click on the button DOWNLOAD THE ZIP to download a sample Quarkus application in a ZIP file to your local machine. The downloaded file is named code-with-quarkus.zip. Unzip the file on your local machine in a directory of your choice. This will create a new directory called code-with-quarkus with all the files for the sample Quarkus application. From a browser window, open https://gitlab.com and log in using your GitLab credentials. Head over to the GitLab group to which you associated your K8s cluster and create a blank project named code-with-quarkus.

Create project code-with-quarkus

From a Terminal window on your local machine, change directory to the newly unzipped directory code-with-quarkus and execute the command rm .dockerignore to delete the .dockerignore file that came with the sample Quarkus application. After removing this file, execute the following commands to populate your newly created GitLab project code-with-quarkus with the contents of this directory. NOTE: Depending on the version of git installed on your local machine, the commands below may vary. Keep in mind that the goal is to upload the project on your local machine to your newly created GitLab project.

git init
git remote add origin https://gitlab.com/[REPLACE WITH PATH TO YOUR GROUP]/code-with-quarkus.git
git add .
git commit -m "Initial commit"
git push --set-upstream origin master

At this point, you should have your sample Quarkus application in your GitLab project code-with-quarkus. Modify the generated Dockerfile.jvm file and indicate its location […]
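The article mentions GitLab Auto DevOps; beyond enabling it in the project settings, one common way to turn it on is via the built-in CI template. This is a minimal sketch, not the article's own configuration:

```yaml
# .gitlab-ci.yml – enable Auto DevOps by including GitLab's built-in template
include:
  - template: Auto-DevOps.gitlab-ci.yml
```

With this in place, pushing to the repository triggers the Auto DevOps pipeline, which builds, tests, and (given a connected cluster) deploys the Quarkus application.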

Read More

Why You’re Failing At React Grid View

Grid View is an important element for modern websites because it allows you to present large volumes of information to the user. If you are using React for the front end, you must consider implementing a grid that provides all the functionality you want. However, suppose you find that your React grid view fails to achieve the speed and user experience you want. In that case, it is high time you consider switching to the Sencha GRUI, which provides a rich development and user experience. If you are a React developer looking forward to embedding a grid into your applications, it is important to know why your React grid layout is failing and how to use Sencha GRUI for better performance.

Is Your React Grid View Failing To Load Data Efficiently?

Creating grids is a fun job. However, there are some secrets about JS grids that you might want to know. Usually, we fill grids with a lot of data, so how efficiently the grid loads that data is important when embedding a grid view in your website. Suppose the grid takes 2-3 minutes to load the complete data set; users will not wait that long. But building a grid with only milliseconds of load time from scratch with a React grid view can be tedious, especially if you are under a tight schedule. This is something you could miss when building your grids with React, and it can lead to project failure. You need a better grid view that handles efficient data loading on your behalf, so you never need to worry about it.

Does Your React Grid View Fail To Provide All The Functionality Your Customers Expect?

Not only efficiency but also the functionality offered by your data grid matters for your project's success. Suppose your grid can only provide basic features like sorting and searching but cannot provide more intuitive features like pagination and infinite scrolling. In that case, there is a possibility that it cannot be extended to provide more advanced features. With time, customer requirements also change. Therefore, your React grid view can fail over time if it is not easily adjustable and needs several plugins to provide additional functionality. If you want to see examples of successful JS grids, this article might help you.

Does Your React Grid View Fail To Be Customized Easily?

Customization is important when you work with any web component. Suppose your React grid view needs to be used on another page with some customizations. Can you easily achieve that with minimal impact on your code? If not, you will have to do additional work to support a customized grid whenever you want something different. Therefore, it is better to avoid such implementations and look for a solution that enables customization without hassle. This is where third-party JavaScript frameworks like Sencha can help you.

Is Your React Grid View Failing To Handle Your Growing Data Set?

Data is bound to increase with time. If you have only thousands of records at hand today, you could have millions within the next couple of months or years. Therefore, your React grid must also accommodate this growing data without affecting loading speed. Your React grid can fail if you cannot easily improve its functionality with […]
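Grid implementation details vary by library, but the pagination logic the article alludes to is language-neutral. As a minimal sketch (in Python, with a hypothetical function name, not Sencha GRUI's API):

```python
def paginate(rows, page, page_size):
    """Return the slice of rows for a 1-based page number."""
    start = (page - 1) * page_size
    return rows[start:start + page_size]

rows = list(range(1, 11))      # ten rows of data
print(paginate(rows, 1, 4))    # first page: [1, 2, 3, 4]
print(paginate(rows, 3, 4))    # last, partial page: [9, 10]
```

A grid that supports pagination well only fetches or renders one such slice at a time, which is why it stays fast even as the data set grows.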

Read More

Digital Twin Twitter takeovers: May recap

Lauren and Sam have a comprehensive skill set in strategy, design, and technology, and their extended reality (XR) studio, RefractAR, specializes in spatial activations. These two innovators created a whole car maintenance app with Unity MARS.

1. If you’re crunched for time, use image trackers for AR
2. Polycam makes it easy to scan and create digital twins
3. How to create your AR experience with Unity MARS

Read More

World Oceans Day: RT3D projects make waves and encourage conservation

Healthy oceans are essential for the survival of all life on Earth, so we need to protect them. We’re committed to ocean conservation as part of our ESG (environmental, social, and governance) efforts to build a more sustainable future and invest in our planet. Here are some exciting projects using Unity to celebrate the planet’s oceans, educate audiences, and encourage action: An Otter Planet by Habithéque is an in-progress PC game designed to teach players about water and help them understand its importance to all life on Earth. In addition to raising awareness through play, An Otter Planet will raise money for charities to support water-related protection and revitalization efforts through in-game purchases and charitable donations. Raft, a PC game developed by Redbeet Interactive, highlights the incredible vastness of the open ocean. Players wake up adrift on a raft and then fight for survival by crafting, growing food, and avoiding shark attacks. Experiencing this game provides a new appreciation for the danger, stillness, and mystery of the oceans. The Hydrous is an innovative project that designs science-based augmented and virtual reality experiences to engage audiences with the wonders of ocean life. The creators’ goal is to provide “equitable access to ocean exploration,” which in turn builds understanding of beautiful and threatened marine ecosystems. — We believe that the world is a better place with more creators in it, and we’re excited to see the inspiring work being done to realize a sustainable, inclusive, and equitable world for all. Want to hear more inspiring creator stories? Sign up for Unity’s Social Impact newsletter for regular news and updates about our Social Impact work.

Read More

Why You Should Know About Machine Learning and Artificial Intelligence

It is undeniable that technology is rapidly evolving. Things that were once just concepts are now being materialized. We are embracing a new digital age where artificial intelligence is no longer a product of science fiction novels but a real-life technology. In this video, Jim McKeeth is joined by Embarcadero MVP Yilmaz Yoru to tackle everything about Machine Learning and Artificial Intelligence. We will learn how this technology evolved over time; the IDE software, programming languages, and libraries that are good for AI; and the future of this technology.

Things you need to know about artificial intelligence and machine learning

Generally, Artificial Intelligence refers to the intelligence exhibited by machines capable of carrying out tasks that usually require human intelligence. It refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. The term may also be applied to any machine that exhibits traits associated with a human mind, such as learning and problem-solving. Some of these mental capabilities and functions may also be referred to as Artificial General Intelligence, better known as Strong AI. Machine Learning, on the other hand, is a subset of AI that uses algorithms to learn from data, find patterns in it, and make predictions about future events or outcomes. Today, Artificial Intelligence can be applied to many things like chatbots, virtual assistants, autonomous cars, and more. When it comes to Machine Learning and AI development, the first thing to consider is picking the right programming language for the kind of machine or software you are building. In this video, we will get a list of ideal programming languages that work well with AI and Machine Learning, including Delphi, C++, C++Builder, Python, and Java, to name a few. We will also learn about different libraries and resources you can use for AI software development, including TensorFlow, OpenCV, Mitov Software Intelligence Lab, and more. Jim McKeeth will also provide demos showing the aforementioned libraries in action using Delphi. The video will also discuss AI ethics, the AI singularity, movies and programs that use AI and Machine Learning as main subjects, as well as the things we can expect from these technologies in the future. To learn more about Artificial Intelligence and Machine Learning, feel free to watch the webinar below.
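To make "learn from data and make predictions" concrete, here is a deliberately tiny sketch (not from the webinar, and using no ML library): a one-nearest-neighbor classifier, one of the simplest machine learning algorithms.

```python
def predict(training_data, x):
    """Classify x with the label of its nearest training example (1-NN)."""
    nearest_value, nearest_label = min(
        training_data, key=lambda pair: abs(pair[0] - x)
    )
    return nearest_label

# Toy training set: (feature value, label) pairs.
training_data = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
print(predict(training_data, 1.5))   # small
print(predict(training_data, 8.6))   # large
```

The "learning" here is simply memorizing labeled examples; the prediction generalizes them to unseen inputs, which is the pattern all of the larger libraries discussed in the video elaborate on at scale.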

Read More

On the road to Tribeca 2022

“Mushroom Cloud is a project focused on accountability; one that values sharing and conserving resources, and strengthening networked systems through participation, communication, and advocacy.” – Nancy Baker Cahill The Mushroom Cloud NYC / RISE AR experience by artist Nancy Baker Cahill is a call for climate change action. The project acknowledges the impending crisis while offering hope that, through cooperative and constructive measures, a vibrant future can still be possible. During the Tribeca Festival, a custom, geo-located edition of this experience will be available to show audiences what a mushroom cloud explosion could be like, urging them to consider how we might model our collective survival on nature. The project is especially relevant to New York City – a city vulnerable to climate impacts given rising sea levels. Be sure to check out its world premiere on June 9.  

Read More

The Next Big Thing In Analytics And Reporting Tools

The world is becoming increasingly data-driven. Without data, businesses cannot succeed and expand. They may have streams of data coming from different sources, but those are useless without analytics and reporting tools. Data is a critical asset for businesses, as it helps them make informed business decisions, and a business's success depends on analytics and on the reporting tools it uses. Reporting tools make all that information easier to parse; without analytics and reporting tools, informed business decisions are hard to imagine. This is where Yellowfin comes into play. Gartner surveyed CIOs about which technologies matter most to a business's success, and the largest share, 24%, voted for data analytics. CIOs also believe that data analysis is important for acting on data and returns invaluable insights. So, if an enterprise wants to succeed, it must keep up with the latest trends in data analytics. Don't know where to look? No worries! We have prepared this guide solely for that purpose. Continue reading to learn about the next big things in data analytics and reporting tools.

How have analytics and reporting tools advanced recently?

1. Contextual Analytics

Contextual analytics embeds charts on the page alongside the data, together with visualizations and related actions for better insight. It embeds dashboards and analytics solutions into a software application's core workflows, so users get the benefits of analytics directly in that framework. Before contextual analytics, users had to switch away from their working environment to investigate data or derive insights. Now the data is delivered to the end user directly, in the user interface and the transaction flow. With one click, users can get instant, guided, and dynamic insights, which helps them learn and make decisions while working as usual.

The goal of contextual analytics is to maximize business benefits by supporting or triggering the actions users take within the app.

2. Augmented Analytics

Augmented analytics uses enabling technologies like AI and machine learning to help with data preparation, insight generation, and insight explanation. Its primary purpose is to improve how users explore and analyze data in analytics and BI platforms. It augments both expert and citizen data scientists and speeds up machine learning, data science, and AI model development. Augmented analytics is transforming how businesses prepare data, find insights, and share the findings from those insights. It will be no surprise if it becomes mainstream; it is one of the next big things in analytics and reporting tools, so data and analytics leaders should not wait to incorporate it.

3. Automated Analytics

Automated analytics detects relevant anomalies, trends, and patterns and delivers the resulting insights to users in real time, with no manual analysis. Enabling technologies like machine learning and AI are used to monitor performance, search large datasets, and track user-defined metrics against desired business outcomes. The result is alerts on specified triggers and delivery of analyzed findings. The main goal of automated analytics is to perform analysis automatically, which benefits both software vendors and end users, enabling features such as fraud detection and tracking changes in customer behavior. […]
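As an illustration of the anomaly detection that automated analytics performs (a generic statistical sketch, not Yellowfin's implementation): flag any value that sits far from the mean of a metric, measured in standard deviations.

```python
import statistics

def find_anomalies(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []          # a flat series has no outliers
    return [v for v in values if abs(v - mean) / stdev > threshold]

# A steady metric with one spike, e.g. daily transaction counts:
metrics = [100, 102, 98, 101, 99, 100, 250]
print(find_anomalies(metrics))   # [250]
```

A real automated-analytics system layers scheduling, alerting, and ML models on top, but this z-score test is the kind of check it runs continuously so no one has to eyeball the dashboards.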

Read More

PyTorch for Delphi with the Python Data Sciences Libraries

Last week we looked at the Python developer side of the Embarcadero Python Ecosystem with DelphiFMX. This week we are looking at the Delphi (and potentially C++Builder) side of the ecosystem.

Embarcadero Open Source Live Stream

The next installment takes a look at the new Python Data Sciences Libraries and related projects that make it super easy to write Delphi code against Python libraries and easily deploy on Windows, Linux, macOS, and Android. It includes specific examples with the Python Natural Language Toolkit and PyTorch, the library that powers projects like Tesla Autopilot, Uber's Pyro, and Hugging Face's Transformers. This is part of a series of regular live streams discussing the latest in Embarcadero open source projects. Jim McKeeth will be the host, joined by members of the community, developers involved in these open source projects, and members of Embarcadero and Idera's Product Management. It is a great opportunity to see behind the scenes and help shape the future of Embarcadero's open source projects. If you are interested in machine learning, artificial intelligence, or data sciences, then you will want to join this webinar! Thursday, Jun 9, 2022, 10:00 AM CDT. Come back to this blog post after the webinar for the replay, slides, links, and more.

New Libraries for Delphi

This is an early-access sneak peek at libraries we are still working on. Right now we are getting everything working with Delphi, but we plan and expect it to work with C++Builder eventually too. The new libraries we will look at include:

Lightweight Python Wrappers – A library making it easy to quickly wrap almost any Python library for use in Delphi.

Python Environments – One of the complications with Python is deploying and setting up Python and all the required libraries. These components let you quickly set up everything you need for Python.

Python Data Sciences Libraries – These build on the two libraries above to give Delphi developers quick and easy access to some of the more popular Python data science libraries like PyTorch, NLTK, TensorFlow, NumPy, etc., all from pure Object Pascal.

I really believe these libraries have the potential to fundamentally change what it means to be a Delphi developer. You will definitely want to be here.
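The Delphi wrappers themselves are not public yet, but the idea behind a "lightweight wrapper" can be sketched in Python itself: expose a library through a thin proxy and only load it when it is actually used. This analogy is mine, not Embarcadero's code.

```python
import importlib

class LazyModule:
    """Thin proxy that defers importing a Python library until
    one of its attributes is first accessed."""
    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        # Called only for attributes not found on the proxy itself.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

math_lib = LazyModule("math")
print(math_lib.sqrt(9.0))   # 3.0 – the math module is imported on first use
```

A host-language wrapper (Delphi, in Embarcadero's case) does the same kind of forwarding across the language boundary, which is why one generic mechanism can cover "most any Python library."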

Read More

Should we make FlexCel work in Lazarus?

As many might know, FPC announced support for anonymous methods some days ago. This was one of the last pieces missing for us to be able to port TMS FlexCel to Lazarus and FPC. So we decided to give it a try, and after a couple of days of fixing stuff, we could make it compile. It wasn't simple: there were internal compiler errors, there is missing functionality that had to be rewritten, and we haven't reached 100% of the code, but most of it is compiling. Of course, compiling it is one thing, but getting it to work is a different matter. When we first tried, we couldn't create a simple xls or xlsx file. After a couple more days, we got first xlsx and then xls working. We needed to do some patching in FPC, and we needed to work around some other issues, but that's the point where we are right now: all of the FlexCel code is compiling, and it works in simple cases, though we haven't tried our test suite yet. You can see here a little video of Lazarus on an M1 Mac (but running as Intel under Rosetta) of me creating a simple xlsx file. And now comes the big question: would you like us to spend more time on this so we can add Lazarus support, or should we stop now and focus on FlexCel for Delphi and FlexCel .NET as usual? While we now have simple apps working, we won't launch "Lazarus support" unless we have all tests passing, and the FlexCel tests are quite difficult to pass. I expect some extra weeks of work to make them all pass. At least for now, we would focus on supporting Windows, macOS, and Linux; no plans for iOS or Android. Important: To release support for Lazarus, we will require using the trunk release until the current trunk is promoted to stable. We would really be interested in your opinion. If you want to share it with us, please answer the poll below or let us know what you think in the comments.

Read More