News

Faces of Unity – Sharlene Tan

I’d encourage others to keep an open mind and never stop learning. Back when I was a college student, I never imagined that I’d wind up in the video game industry, working with languages and the written word. As technology evolves, it’s hard to predict what jobs will be in demand 10 years down the road. I’m glad my career journey led me to where I am now.

Can you share a few fun facts about yourself? I was born and raised in Singapore, but have lived in many different places: Austin, Houston, Dallas, Hakodate, Tokyo, Oita, and currently, Seattle. I enjoy translating Japanese song lyrics into English, and also really love karaoke. I’ve run into Jackie Chan twice – once in a hotel in Canada, and another time in South Africa. He was filming Who Am I? atop Table Mountain.

Read More

How To Get Cross Platform Apps To Connect To A MySQL Database

Whether you are working with small or large-scale databases, MySQL is probably one of the most popular database systems today. The webinar video below takes us back to CodeRage 2018, where Yilmaz Yoru discusses the process of creating a MySQL database and connecting to it using the MyDAC components in C++ Builder. MySQL is undeniably one of the most popular database servers and has been part of many Windows application development projects. Thanks to its free Community Edition, the server is more accessible than many other database systems on the market.

How do you connect to a MySQL database using MyDAC components? Normally, if you are using C++ Builder, you can connect to a MySQL database using the official FireDAC component, a powerful universal data access library that abstracts and simplifies data access and provides the features needed to build real-world, high-load applications. However, there are also third-party components you can use, and this is where Devart’s MyDAC comes into play. MyDAC is a library of components that provides direct access to MySQL and MariaDB from Delphi and C++Builder on various operating systems – Windows, Linux, macOS, iOS, and Android – for both 32-bit and 64-bit platforms. To use MyDAC with C++ Builder, you must first download a trial or purchased version from Devart’s site and install it for both VCL and FireMonkey projects.

In this video, Yilmaz demonstrates how to create a new MySQL database with a new user on a host server. He also encourages you to install MySQL Workbench for table creation, and explains everything with a C++ Builder example. MyDAC is a good component for connecting to MySQL databases on all platforms. It also supports FireMonkey, which allows you to develop visually spectacular, high-performance desktop and mobile native applications.
To learn more about connecting MySQL databases using MyDAC components, feel free to watch the video below.
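The database-and-user setup that the webinar walks through boils down to a handful of SQL statements. As a rough sketch (the database name, user name, and password below are illustrative placeholders, not anything from the webinar), assembled in Python:

```python
# Sketch of the MySQL setup steps the webinar performs: create a database,
# create a dedicated user, and grant that user privileges on the database.
# All names and credentials here are made-up examples.
def setup_statements(db: str, user: str, password: str) -> list[str]:
    return [
        f"CREATE DATABASE IF NOT EXISTS {db};",
        f"CREATE USER '{user}'@'%' IDENTIFIED BY '{password}';",
        f"GRANT ALL PRIVILEGES ON {db}.* TO '{user}'@'%';",
        "FLUSH PRIVILEGES;",
    ]

stmts = setup_statements("appdb", "app_user", "s3cret")
for s in stmts:
    print(s)
```

A client component such as MyDAC's connection object would then point at that database with the same host, user, and password.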

Read More

Three ways the current paradigm shifts in technology are shaping the future of industry

Digital twins, based on 4IR technologies, are a critical enabler for Web3 in industry. These are already in use, with various levels of completeness, in many industries to help organizations understand system flows, anticipate maintenance needs, reduce operational costs, and enhance overall efficiency. “Ideally you want Web3 to revolutionize your end-to-end workflows, across the board, for every person in your organization. But I think it’s more the case that we will find pockets of traction, where somebody can get started despite the significant headwinds that are facing us.” – Matt Fleckenstein, Sr. Director, Product Management, Microsoft Mesh, Microsoft

As a leader in innovation, the Vancouver Airport Authority (YVR) wanted to reinvent itself as a gateway for learning, innovation, and the movement of new ideas in industries beyond aviation. The result was a digital twin of its terminal and airfield on Sea Island. The digital twin of YVR’s facilities helps solve challenges such as training, optimization, testing, evaluating environmental impact, and planning for the future – all while enabling the airport to operate without interruption. Designed with a “people-first” mindset, YVR’s digital twin offers significant benefits to airport employees, as well as the community at large.

Read More

Unity and .NET, what’s next?

.NET Standard 2.1 support in Unity 2021 LTS enables us to start modernizing the Unity runtime in a number of ways. We are currently working on two improvements.

Improving the async/await programming model. Async/await is a fundamental programming approach for writing gameplay code that must wait for an asynchronous operation to complete without blocking the engine’s main loop. In 2011, before async/await was mainstream in .NET, Unity introduced asynchronous operations with iterator-based coroutines, but this approach is incompatible with async/await and can be less efficient. In the meantime, .NET Standard 2.1 has improved support for async/await in C# and .NET with more efficient handling of async/await operations via ValueTask, and by allowing your own task-like system via AsyncMethodBuilder. We can now leverage these improvements, so we’re working on enabling the use of async/await with existing asynchronous operations in Unity (such as waiting for the next frame or waiting for a UnityWebRequest to complete). As a first step, we’re improving support for canceling pending asynchronous tasks when a MonoBehaviour is destroyed or when exiting Play mode, using cancellation tokens. We have also been working closely with our biggest community contributors, such as the author of UniTask, to ensure that they will be able to leverage these new functionalities.

Reducing memory allocations and copies by leveraging Span. Because Unity is a C++ engine with a C# scripting layer, there’s a lot of data being exchanged between the two. This can be inefficient, since it often requires either copying data back and forth or allocating new managed objects. Span was introduced in C# 7.2 to improve such scenarios and is available by default in .NET Standard 2.1.
In recent years, you might have heard or read about the many significant performance improvements made to the .NET runtime thanks to Span (see the improvement details in .NET Core 2.1, .NET Core 3.0, and .NET 6). We want to leverage it in Unity, since this will help reduce allocations – and, consequently, garbage collection pauses – while improving the overall performance of many APIs.
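Span itself is a C# construct, but the zero-copy idea behind it has analogues elsewhere. As a loose Python analogy (not Unity or .NET code), slicing a memoryview creates a window onto an existing buffer rather than allocating a copy:

```python
# Rough analogy to what Span<T> buys you: slicing a memoryview is a
# zero-copy view over the same underlying buffer, whereas slicing a
# bytearray allocates a brand-new object and copies the bytes.
buf = bytearray(b"hello world")

copy_slice = bytes(buf[0:5])        # allocates and copies 5 bytes
view_slice = memoryview(buf)[0:5]   # no copy: a window onto buf

buf[0:5] = b"HELLO"                 # mutate the underlying buffer in place

print(bytes(view_slice))            # the view sees the mutation
print(copy_slice)                   # the copy does not
```

The view-based slice never touches the allocator, which is exactly how Span-based APIs avoid garbage collection pressure.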

Read More

How we reduced 502 errors by caring about PID 1 in Kubernetes

This blog post and linked pages contain information related to upcoming products, features, and functionality. It is important to note that the information presented is for informational purposes only. Please do not rely on this information for purchasing or planning purposes. As with all projects, the items mentioned in this blog post and linked pages are subject to change or delay. The development, release, and timing of any products, features, or functionality remain at the sole discretion of GitLab Inc.

Our SRE on call was getting paged daily that one of our SLIs was burning through our SLOs for the GitLab Pages service. The problem was intermittent and short-lived, but enough to cause user-facing impact, which we weren’t comfortable with. This turned into alert fatigue, because there wasn’t enough time for the SRE on call to investigate the issue and it wasn’t actionable, since it recovered on its own. We decided to open an investigation issue for these alerts. We had to find out what the issue was, since we were showing 502 errors to our users, and we needed a DRI (directly responsible individual) who wasn’t on call to investigate.

What is even going on? As an SRE at GitLab, you get to touch a lot of services that you didn’t build yourself and interact with system dependencies that you might not have touched before. There’s always detective work to do! When we looked at the GitLab Pages logs, we found that Pages was consistently returning ErrDomainDoesNotExist errors, which result in a 502 error to our users. GitLab Pages sends a request to GitLab Workhorse, specifically the /api/v4/internal/pages route. GitLab Workhorse is a Go service in front of our Ruby on Rails monolith, and it’s deployed as a sidecar inside the webservice pod, which runs Ruby on Rails using the Puma web server. We used the internal IP to correlate the GitLab Pages requests with GitLab Workhorse containers.
We looked at multiple requests and found that all the 502 requests had the following error attached to them: 502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused. This means that GitLab Workhorse couldn’t connect to the Puma web server, so we needed to go another layer deeper. The Puma web server is what runs the Ruby on Rails monolith, which exposes the internal API endpoint – but Puma was never getting these requests, since it wasn’t running. What this tells us is that Kubernetes kept our pod in the service even when Puma wasn’t responding, despite having readiness probes configured. Below is the request flow between GitLab Pages, GitLab Workhorse, and Puma/Webservice, to make it clearer.

Attempt 1: Red herring. We shifted our focus to GitLab Workhorse and Puma to try to understand how GitLab Workhorse was returning 502 errors in the first place. We found some 502 Bad Gateway with dial tcp 127.0.0.1:8080: connect: connection refused errors during container startup. How could this be? With the readiness probe, the pod shouldn’t be added to the Endpoint until all readiness probes pass. We later found out that it was because of a polling mechanism that we have for Geo, which runs in the background as a Goroutine in GitLab Workhorse and pings Puma for Geo information. We don’t have Geo enabled on GitLab.com, so we simply disabled it to reduce […]
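The "connection refused" in that error is the kernel's immediate rejection of a dial to a port with no listener behind it – exactly what Workhorse saw while Puma was down. A small Python sketch (standing in for the Go dialer; this is not GitLab code) reproduces the failure mode:

```python
# What "dial tcp 127.0.0.1:PORT: connect: connection refused" means: the
# dialer reached the host, but nothing was listening on that port, so the
# kernel rejected the connection outright.
import socket

def probe(host: str, port: int) -> str:
    try:
        with socket.create_connection((host, port), timeout=1):
            return "ok"
    except ConnectionRefusedError:
        return "connection refused"

# Ask the OS for a free port, release it, then dial it. With no listener
# behind it, connect() fails just like the 502s in the Workhorse logs.
tmp = socket.socket()
tmp.bind(("127.0.0.1", 0))
free_port = tmp.getsockname()[1]
tmp.close()

result = probe("127.0.0.1", free_port)
print(result)
```

This is distinct from a timeout: a refusal means the host is reachable but the backend process simply isn't there.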

Read More

Pull-based GitOps moving to GitLab Free tier

GitLab will include support for pull-based deployment in the platform’s Free tier in an upcoming release, which will provide users with increased flexibility, security, scalability, and automation in cloud-native environments. With pull-based deployment, DevOps teams can use the GitLab agent for Kubernetes to automatically identify and enact application changes. “DevOps teams at all levels benefit from utilizing GitOps strategies such as pull-based deployment in their cloud-native environments. By offering this feature in GitLab’s Free tier, we can introduce more organizations to the power and utility of this secure and scalable functionality,” says Viktor Nagy, product manager of GitLab’s Configure Group. As an open-core company, GitLab is happy to contribute to the GitOps community and enable the adoption of best practices in the industry.

What is pull-based deployment? Pull-based and push-based deployment are the two main approaches to GitOps, an operational framework that takes DevOps best practices used for application development – such as version control, collaboration, compliance, and CI/CD tooling – and applies them to infrastructure automation. GitOps enables operations teams to move as quickly as their application development counterparts by making use of automation and scalability, without sacrificing security. While push-based, or agentless, deployment relies on a CI/CD tool to push changes to the infrastructure environment, pull-based deployment uses an agent installed in a cluster to pull changes whenever there is a deviation from the desired configuration. In the pull-based approach, deployment targets are limited to Kubernetes, and an agent must be installed in each Kubernetes cluster. “As long as the GitLab agent for Kubernetes on your infrastructure has the necessary access rights in your cluster, you can configure everything automatically, reducing the DevOps workload and the opportunity to introduce errors,” Nagy says.

Pull-based deployment vs. push-based deployment. Each approach has its pros and cons:

Push-based deployment pros:
- ease of use
- well-known as part of CI/CD
- more flexible, as deployment targets can be on physical servers or virtual containers, not restricted to Kubernetes clusters

Push-based deployment cons:
- requires organizations to open their firewall to a cluster and grant admin access to external CI/CD
- requires organizations to adjust their CI/CD pipelines when they introduce new environments

Pull-based deployment pros:
- secure infrastructure – no need to open your firewall or grant admin access externally
- changes can be automatically detected and applied without human intervention
- easier scaling of identical clusters

Pull-based deployment cons:
- an agent needs to be installed in every cluster
- limited to Kubernetes only

How pull-based deployment impacts the Free-tier experience. Including support for pull-based deployments in GitLab’s Free tier provides a tremendous competitive advantage for smaller organizations, as they can now apply automation in a safe and scalable manner to their cloud-native infrastructure, including virtual containers and clusters. And, for organizations that are trying to get started quickly by minimizing the number of tools in their infrastructure ecosystem, this functionality is included in One DevOps Platform, not as a point solution. “DevOps teams don’t have to continuously write code for new infrastructure elements – they can write the code once, within a single DevOps platform, and have the agent automatically find it, pull it, and apply it, as well as configuration changes,” Nagy says. “Also, with the availability of pull-based deployment in this introductory tier, newcomers to […]
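The pull-based loop described above – an agent comparing the desired state in Git against the actual state in the cluster and applying only the differences – can be sketched in a few lines. This is an illustrative Python toy, not the GitLab agent's actual logic, and all the workload names are made up:

```python
# Minimal sketch of pull-based reconciliation: diff the desired state
# (what's committed to Git) against the actual state (what's running in
# the cluster), then apply creates, updates, and deletes to converge.
def reconcile(desired: dict, actual: dict) -> dict:
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    delete = [k for k in actual if k not in desired]

    # "Apply" the plan: actual converges onto desired.
    converged = {**actual, **create, **update}
    for k in delete:
        converged.pop(k)
    return converged

desired = {"web": "v2", "worker": "v1"}   # state in Git
actual = {"web": "v1", "cron": "v1"}      # state in the cluster
print(reconcile(desired, actual))
```

The agent runs a loop like this continuously, which is why drift is corrected without any human pushing a pipeline.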

Read More

Extend TMS WEB Core with JS Libraries with Andrew: FlatPickr (part 2)

Last time out, we looked at how to incorporate FlatPickr into a TMS WEB Core project. We took what might be considered the manual approach: a link to a CDN or other source for the library is added to the Project.html file, and then a little JavaScript code is used to manually link the library’s code to an element that has been placed on a TWebForm. This works quite well, and is typically how I use this and many other JS libraries in my projects. But there is another way that might be more in line with how Delphi is used much of the time – by using components. So in this post, we’ll revisit the same JS library, but we’ll walk through how to create a component that will appear on the Delphi IDE’s Component Palette. From there, we will be able to add FlatPickr controls to any TWebForm, or wherever we need the component to appear, just as easily as we do with a TWebLabel or a TWebEdit. And we’ll be able to adjust many of the options that we want to pass to FlatPickr by setting properties in the Delphi IDE’s Object Inspector.

Motivation. Beyond just making it easier to use FlatPickr in a TMS WEB Core application, the idea of this post is to get a handle on how to create a Delphi package that can include many such controls. As we make our way through more JS libraries and their controls in the posts to come, we’ll hopefully be able to upgrade this package with those new controls as well, and maybe even toss in some others along the way. This package can then be installed by anyone working on TMS WEB Core projects, giving them easier access to all the JS libraries we’re covering in one simple step. Note that if you’re using a JS library in a one-off kind of situation, creating a component wrapper is likely to be substantially more work than the manual approach. But there is the potential to save time and effort in (at least) the following scenarios:
- When you want to use many instances of a component, perhaps in multiple forms.
- When you want to use the same component in multiple projects.
- When you don’t want to have to meddle with JavaScript or the nuances of the underlying JS library.
- When you want to create something to be used by others, saving them time and effort.

By having these kinds of controls in Delphi, you can simply work away as you normally would, without even really having to know that you’re using a JavaScript library.

Creating a Package. But before we run off creating components, the first thing we’re going to do is create a package to hold these kinds of components. Right out of the gate, we’ve got a few things to cover. Writing TMS WEB Core applications in Delphi means that we’re using the Delphi IDE to do part of the work, and then transpiling our code using pas2js behind the scenes to produce the final code that runs in a browser. But the Delphi IDE doesn’t know all that much about JavaScript or HTML or CSS or things like that – it is a Delphi environment, after all. And once […]

Read More

RAD Server and Sencha CRUD Grid

This post shows how to create a RAD Server REST endpoint (a JSON web service) using the RAD Server Database Endpoints Wizard with a FireDAC connection, to create REST endpoints for CRUD functionality – and then create a Sencha web client application with a CRUD grid for our data!

RAD Server steps:
1. Using RAD Studio, Delphi, or C++ Builder: File | New | Other | RAD Server | RAD Server Package. Click OK.
2. On the RAD Server Package Wizard, select Create package with resource. Click Next.
3. Enter a resource name (any name); I’m using the name MyData. Select DataModule for the file type. Click Next.
4. On the RAD Server Package Wizard, un-select Sample Endpoints and un-select API Documentation. Select Database Endpoints. Click Next.
5. Select an existing FireDAC connection and log in to your database. Click OK.
6. Select one or more tables to use for your Sencha CRUD grid web application. Here I’m selecting the Country, Customer, and Employee tables from my Employee database. Click Finish.
7. RAD Studio, Delphi, or C++ Builder creates your DataModule, with your FireDAC connection (FDConnection), FireDAC query components (qryCountry, qryCustomer, qryEmployee) for your Country, Customer, and Employee tables, and RAD Server (EMS) DataSetResource components. The new EMSDataSetResource component allows for greater control of the data retrieved by desktop, multi-device, web, and other service-based applications that connect to your RAD Server application. Using this component, RAD Server applications can provide access to all of a data set’s data or a specific page of data, update a data set record, create a new data set record, and delete a data set entry.
8. The Database Endpoints Wizard we used in step 4 also added TEMSDataSetResource components for each of the tables. Looking in the Object Inspector, we see the Allowed Actions property for these tables: List, Get, Post, Put, and Delete, giving us full CRUD functionality!
9. Save (File | Save Project As) your RAD Server module in a new folder, with the name RadServer_Sencha_CRUD.
10. Build and run your RAD Server module. Your RAD Server log shows you the REST endpoints for accessing data from your tables. For example, GET for the Employee table has this entry: {"name":"dsrEMPLOYEE.List","method":"Get","path":"mydata/EMPLOYEE/"}. The REST endpoint would be: http://localhost:8080/mydata/EMPLOYEE/
11. To test this REST endpoint, click the Open Browser button on the RAD Server UI and enter the REST endpoint: http://localhost:8080/mydata/EMPLOYEE/. This is your JSON web service for your Employee table data. You should see a JSON array returned for your Employee data.

Great! The RAD Server part is complete. Next, let’s create a Sencha web client application with a CRUD grid for our data!

Sencha web client application steps:
1. Use Sencha Architect.
2. New Project | Blank | Classic Project. Click Create.
3. This creates a new Sencha Ext JS Classic web application.
4. Click the Sencha Data UI Builder icon.
5. For the data source, select JSON Web Service.
6. For the model name, let’s call it myModel. For the URL of this service, enter: http://localhost:8080/mydata/EMPLOYEE/
7. Parts to create: select all parts (Model, Store, List View, Details View, Form View, and Controller). Click Import Fields.
8. All the columns (fields) from your Employee table get added.
9. Click Generate.
10. The Sencha Project Inspector shows that your project has been created with your views and model.
11. Select your MyModels view. Your […]
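Once the endpoint is returning a JSON array, any client – Sencha or otherwise – can consume it. A quick Python sketch of parsing such a response (the payload below is a made-up sample; the real field names depend on the columns of your Employee table):

```python
# Consuming the kind of JSON array the /mydata/EMPLOYEE/ endpoint returns.
# The payload is an illustrative sample, not actual RAD Server output.
import json

base_url = "http://localhost:8080"
endpoint = f"{base_url}/mydata/EMPLOYEE/"

payload = ('[{"EMP_NO": 2, "FIRST_NAME": "Robert", "LAST_NAME": "Nelson"},'
           ' {"EMP_NO": 4, "FIRST_NAME": "Bruce", "LAST_NAME": "Young"}]')

employees = json.loads(payload)
names = [f'{e["FIRST_NAME"]} {e["LAST_NAME"]}' for e in employees]
print(endpoint)
print(names)
```

This is essentially what the Sencha store does for you behind the scenes: fetch the endpoint, decode the array, and map each object onto a model record.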

Read More

The Next Wave In Analytics Reporting Is Your IDE Software

The world is becoming increasingly data-driven. Our everyday reality is now a hugely interconnected world society, with data points on everything from what we want to purchase to how many times we braked too hard on our last road trip. Without data, businesses cannot succeed and expand. They may have a stream of data coming from different sources, but it is useless without analytics. But did you know the next big thing in data analytics and reporting is your IDE software?

Why should we care about data? Data is a critical asset for businesses, as it helps them make informed business decisions, and data usage drives the success of a business – which in turn depends on analytics and the use of reporting tools. Reporting tools make all the information easier to parse; without analytics and reporting tools, informed business decisions are hard to imagine. This is where Yellowfin comes into play. When Gartner surveyed CIOs about which technologies matter most to their business’s success, the top response, at 24%, was data analytics. CIOs also believe that data analysis is important for acting on data, which returns invaluable insights. So, if an enterprise wants to succeed, it must keep up with the latest trends in data analytics. Don’t know where to look? No worries! We have prepared this guide solely for this purpose. Continue reading to learn about the big things in data analytics and reporting tools.

What are the big things in analytics and reporting tools?

What are contextual analytics? Contextual analytics embeds charts on the page alongside the data, including visualizations and related actions for better insights. It embeds dashboards and analytics solutions into a software application’s core workflows, so users get the benefits of analytics directly within the framework. Before contextual analytics, users had to switch away from their working environments to investigate data or derive insights.
But now, with contextual analytics, the data is delivered to the end user directly, in the user interface and the transaction flow. With one click, users can get instant, guided, and dynamic insights, which helps them train and make decisions while working as usual. Contextual analytics’ goal is to maximize business benefits by supporting or triggering the actions users take within the app.

What are augmented analytics? Augmented analytics uses enabling technologies like AI and machine learning to help with data preparation, insight generation, and insight explanation. Its primary purpose is to improve how users explore and analyze data in analytics and BI platforms. It augments both expert and citizen data scientists, and it speeds up machine learning, data science, and AI model development. So augmented analytics is transforming how businesses prepare data, find insights, and share the findings from those insights. It will be no surprise if augmented analytics becomes mainstream: it is one of the next big things in analytics and reporting tools. Thus, data and analytics leaders should not wait, and should incorporate it now.

What are automated analytics? Automated analytics detects relevant anomalies, trends, and patterns and, once they are found, delivers insights to users in real time with no manual analysis. Enabling technologies like machine learning and AI are used to monitor working performance. They also help search large datasets […]

Read More

How To Generate And Use 3D Objects With C++ Builder And Delphi

We’ve already learned from previous webinars the great advantage of combining two different programming languages in one IDE. Some languages might be good at data processing, while others make for algorithms that are very easy to understand or read. It is important to note that you are not limited to one language, and you can always mix and match languages based on your purpose. The use of C++ Builder, for instance, allows you to extend the reach of Delphi: it combines the Visual Component Library and an IDE written in Object Pascal with multiple C++ compilers.

How to create and use 3D objects in C++ Builder. The video below takes us back to Embarcadero’s CodeRage 2018. Here, Yilmaz Yörü guides us through the process of creating and using 3D objects in C++ Builder for Windows. Generally, in modern application development, 3D objects are generated using 3D design software. Interestingly, you can also generate and animate 3D objects using C++ Builder and Delphi. To make this possible, we use the Viewport3D component to display basic 3D objects, together with TMesh instances inside the Viewport3D. TMesh is a component for custom 3D shapes, defined by the mesh data you supply. In this video, Yilmaz demonstrates the process and provides examples of how to create and use these 3D objects in C++ Builder. The interesting part about this project is that C++Builder includes tools that allow drag-and-drop visual development, which lets you generate and use 3D objects surprisingly easily and quickly. To learn more about how to create 3D objects in C++ Builder, feel free to watch the webinar below.
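A custom mesh like TMesh is ultimately defined by two buffers: vertex positions and triangle indices into them. As a language-neutral illustration (plain Python rather than C++ Builder code; the cube geometry and index layout are just an example, not TMesh’s actual API), here is the kind of data such a mesh is built from:

```python
# The kind of data a custom 3D mesh is built from: 8 corner vertices of a
# unit cube and 12 triangles (2 per face) indexing into them. Vertex i
# corresponds to the corner with bits (x, y, z), i.e. i = 4x + 2y + z.
from itertools import product

# Eight corners of a unit cube centered at the origin.
vertices = [(x - 0.5, y - 0.5, z - 0.5)
            for x, y, z in product((0, 1), repeat=3)]

# Two triangles per face, as index triples into `vertices`.
faces = [
    (0, 1, 3), (0, 3, 2),  # x = -0.5 face
    (4, 6, 7), (4, 7, 5),  # x = +0.5 face
    (0, 4, 5), (0, 5, 1),  # y = -0.5 face
    (2, 3, 7), (2, 7, 6),  # y = +0.5 face
    (0, 2, 6), (0, 6, 4),  # z = -0.5 face
    (1, 5, 7), (1, 7, 3),  # z = +0.5 face
]

print(len(vertices), len(faces))
```

In a TMesh-style component, lists like these would be loaded into the mesh’s vertex and index buffers, and the engine takes care of rendering the triangles.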

Read More