News

Which Frameworks Can Be Used To Develop Cross Platform Applications?

As a methodology for software development, cross-platform application development has been rapidly gaining traction – and for good reason. Put simply, cross-platform development means developing software that will work across multiple platforms and types of devices. Unlike traditional frameworks that limit your application to a specific type of device or operating system, cross-platform applications can function almost anywhere, certainly on the most popular and recent equipment. However, we’ve only covered the tip of the iceberg regarding cross-platform app development.

So, what are the options? When it comes to cross-platform development frameworks, there are a few options available – yet not all of them are created equal. Some are better suited to certain types of applications than others. In this article, we will take a look at some of the most popular cross-platform development frameworks and see which ones might be the best fit for your project.

Single platform vs. cross-platform application development – how do they differ?

In the world of application and software development, there are two main types of development frameworks: those that allow you to develop native, vertical, single-target applications, and those that enable you to create cross-platform apps targeting multiple devices and operating systems from the same code base. As mentioned above, native single-target applications are specific to one particular operating system: they are written in the language and framework “native” to that operating system. Cross-platform applications, on the other hand, are not tied to any one operating system; they can run on multiple platforms with little or no modification. The code for cross-platform applications is usually written in a language that can be compiled into bytecode or interpreted by a virtual machine.
As long as the code can be correctly interpreted and translated into bytecode, any device with a processor can work with it. This additional interpreter layer can, however, add significant overhead in terms of memory and processing speed. The bytecode runtime also often doesn’t give full access to the device’s sensors and hardware, since it is a ‘one size fits all’ approach which trades low-level accessibility for ease of development and deployment. However, some cross-platform frameworks do not use this additional interpretive layer but instead compile down to native binaries. One such system is RAD Studio Delphi, which gives you all the benefits of writing a single set of source code yet produces fully native applications that embrace the full power and range of capabilities of the operating system and hardware of the device on which the apps run. RAD Studio offers two main development frameworks – VCL, which is aimed at Windows, and FireMonkey FMX.

So, which type of development framework should you use? There is no single correct answer, as it really depends on your needs. If you need to develop an application with RAD Studio that will run on multiple platforms, then a cross-platform framework is definitely the way to go, and FireMonkey FMX makes a lot of sense. However, if you definitely only need to build an application for a single platform, the VCL Windows-native framework might be a good choice, as it brings with it some specific benefits for Windows […]

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Image Sliders

With our Tabulator miniseries out of the way, we’re back to check out other interesting and useful JavaScript libraries that we can use in our TMS WEB Core projects. This time out, we’re going to have a look at image sliders, sometimes referred to as image carousels. There’s a TWebImageSlider included directly in TMS WEB Core that’s ready to go, based on the very capable Swiper. But we’ll also have a look at the popular Bootstrap Carousel, the venerable Slick Carousel, and the ultra-modern Glide. Each has its own default look, but they can all be customized in various ways. Some have dependencies to be mindful of. But they’re all well-tested and reliable, and any of them would make a solid addition to your project.

Motivation

Whether you need some kind of image slider in your project is usually self-evident. But sliders don’t have to be used just for image content. A slider could be used instead of a combo box, displaying a set of cards to select from, where the user can cycle through different options. Or as an alternative to traditional menus. Or even to display notifications. Plenty of non-image possibilities. But no matter the type of content being displayed, a slider can have a big overall impact on a project. So finding a slider with the right mix of presentation and interaction options is important. If your project is already using Bootstrap or jQuery, then perhaps Bootstrap Carousel or Slick Carousel would be a natural fit, respectively. Or perhaps there’s a key feature in one of these sliders that will catch your attention and be important enough to your project to warrant including a dependency that wasn’t previously needed. As always, nothing but choices here, so let’s have a look.

The Setup

For our examples, we’re going to use a set of 10 images of different sizes, just to make it easier to see what is going on.
Generally, things work more smoothly and look considerably nicer when the images are all the same size, but that’s often not an option. The intent, though, is to find a slider that can be added to our Actorious project to display sets of photos of People, Movies, or TV Shows – which will typically all be the same size, conveniently. All the sliders we’ll be looking at today have no problems displaying content of different sizes, even if we have to lean on them a little to get them to cooperate. The sample images are stored in an img folder in the example project, and a link to that can be found at the end of this post. All the sliders use URLs to reference the images, via img tags, naturally, so it isn’t hard to include these. With the contents of the img folder added to our project, the images are automatically copied to either the Debug or Release folders as needed, and can just be referenced via an img/filename.png link or URL.

TWebImageSlider and Swiper

The obvious first choice for a TMS WEB Core project is to use the readily available TWebImageSlider component. Let’s add this component to a new project and set the size to 900×300, just as a set size for our examples. Then, add the sample images via the ImageURLs property in the Delphi property inspector. Without doing anything else, we get the following slider, fully […]
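Under the hood, every one of these sliders maintains the same small piece of state: a current index that wraps around the image list. Here is a plain-JavaScript sketch of that cycling logic – this is not the TWebImageSlider or Swiper API, just the underlying idea:

```javascript
// Minimal sketch of the wrap-around index logic a carousel implements.
// Class and method names are illustrative, not any library's actual API.
class SliderModel {
  constructor(imageUrls) {
    this.images = imageUrls;
    this.index = 0;
  }
  current() {
    return this.images[this.index];
  }
  next() {
    // Modulo arithmetic wraps from the last image back to the first.
    this.index = (this.index + 1) % this.images.length;
    return this.images[this.index];
  }
  prev() {
    // Adding length before the modulo avoids a negative index.
    this.index = (this.index - 1 + this.images.length) % this.images.length;
    return this.images[this.index];
  }
}

const slider = new SliderModel(['img/1.png', 'img/2.png', 'img/3.png']);
console.log(slider.next()); // img/2.png
console.log(slider.prev()); // img/1.png
console.log(slider.prev()); // wraps around to img/3.png
```

A real slider adds rendering, transitions, and touch handling on top, but the navigation behaviour of all four libraries reduces to this.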

Read More

New TMS WEB Core v2.0 browser API support

It’s no secret that the browser API supports ever more hardware-related features, slowly but surely making the browser a cross-platform operating system of its own. As it is also always a goal of TMS WEB Core to make it ultra easy for Object Pascal developers to take advantage of everything the browser offers, in TMS WEB Core v2.0 we added classes and a new component for two interesting new browser APIs: the multi-screen API and the speech recognition API.

Multi-screen API

The multi-screen API allows a web application to detect how many screens are connected to a device, retrieve the characteristics of those screens, and control the placement of browser windows on the multiple screens connected to the machine. You can find the W3C spec for the multi-screen API online, and a more informative article about its use at https://web.dev/multi-screen-window-placement/. As we added Object Pascal classes to retrieve information from and manage these multi-screen configurations, we also created a demo that you can find in the TMS WEB Core demos folder under DemoBasicsMultiscreen and wrote a blog article about it. Today, our chief evangelist Dr. Holger Flick also discusses it in his new video.

Speech recognition API

Create voice-command-driven web applications, or write a dictaphone application that automatically captures your spoken information as text – it all now belongs to commonly available functionality when you use web technology. The browser speech recognition API is now available in Chrome, Edge and Safari. In TMS WEB Core v2.0, this is exposed as a non-visual component with events that return spoken words as text, or with a command collection the component can listen to, triggering events when the commands are spoken. In this video, Holger also explains the speech recognition API and the demo in more detail so you can get started using it in your applications easily.
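The speech recognition API itself only runs in a browser, but the command-collection idea the component exposes boils down to simple transcript matching: the API delivers recognized speech as plain text, and the component compares it against a list of commands. A hedged plain-JavaScript sketch of just that matching step – the function and names are illustrative, not the TMS component’s API:

```javascript
// The browser's SpeechRecognition API delivers recognized speech as text
// transcripts. Matching a transcript against a command list -- the idea
// behind a command collection -- is ordinary string handling.
// matchCommand is an illustrative name, not part of any real API.
function matchCommand(transcript, commands) {
  const spoken = transcript.trim().toLowerCase();
  // Return the first command found inside the transcript, or null.
  return commands.find(cmd => spoken.includes(cmd.toLowerCase())) ?? null;
}

const commands = ['open', 'close', 'save'];
console.log(matchCommand('Please SAVE the file', commands)); // 'save'
console.log(matchCommand('hello there', commands));          // null
```

In the real component, a match like this would trigger the event registered for that command.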
Miletus

Note that these new web technologies are also available when you create a Miletus-based native cross-platform application with TMS WEB Core. Miletus applications are web-technology-based applications that run as native cross-platform applications on Windows, macOS and Linux, and also have direct access to even more operating system resources such as the file system, local databases, operating system dialogs and more. Learn more about the amazing Miletus technology in this blog article with video.

Read More

Why Do You Need A JS Test Framework?

A JavaScript testing framework enables you to significantly boost your development workflow. It increases your team’s speed and efficiency, improves test accuracy, and reduces test maintenance costs. It can be a lifesaver for projects on a tight budget. There are plenty of frameworks available online. But which is the best option? Can it help you efficiently test applications? Do you really need a JS test framework? In this post, you will find all the details.

What is a JS test framework?

A JS test framework is a tool for examining the functionality of JavaScript web applications. It helps you ensure that all the components are working properly and enables you to easily identify bugs, so you can quickly take the necessary steps to fix the issues.

Why do I need a JS test framework?

Greater efficiency: Manually testing large web applications can take a huge amount of time. You don’t want to get bogged down in such a tiresome process. By using a JS test framework, you can quickly get the job done by automating the process. It enables you to continuously test and validate newly developed features, and it shortens the feedback and testing cycle. As a result, you can deliver the application in significantly less time.

Improved accuracy: It’s quite usual for testers to make mistakes during monotonous manual testing, which means you will not get accurate results. You can solve this by utilizing a JS test framework: it produces the same result every time you run it, so tests execute with complete repeatability.

Reduced cost: A JS test framework helps you get the job done with fewer resources, so you don’t have to spend extra money. It enables you to significantly reduce costs, which is especially helpful on a tight budget.

Read: Rapidly Build Testing Automation To Supercharge JavaScript App Quality

What is the best JS test framework?
The best JavaScript test framework is Sencha Test. It offers the most comprehensive solution for examining Ext JS apps. It allows you to create robust unit and end-to-end tests, so you can deliver quality apps. Apart from Ext JS, it supports end-to-end testing of React apps. Overall, it is a versatile JavaScript testing framework.

Why is Sencha Test the best solution for testing JavaScript apps?

- Enables you to auto-grab events while interacting with the app under test
- Provides comprehensive test results from automated and manual test runs
- Supports Istanbul integration for fixing code coverage gaps
- Allows you to execute code in any browser, local machine, or browser farm
- Enables you to use external Node modules to expand the scope of testing

Does it support automated test runs?

Sencha Test comes with a Command Line Interface (CLI) that gives you the full power of automated test runs. Once tests are authored and checked into the source control repository, you can launch them from your Continuous Integration (CI) system. The CI system can automatically invoke the CLI once it detects a change to the application code or the test files.

Does it allow me to schedule automated test runs?

Sencha Test features Test Runner. You can use it to run selected unit and functional tests on any browser on […]

Read More

Welcome, ironSource!

With ironSource, Unity will take the linear process of making games and RT3D content and experiences and make it an interconnected and interactive one – creating the opportunity to innovate and improve at every step of the cycle. What if that process were no longer “first create, then monetize”? What if creators had an engine for live games that by default enabled them to gain early indicators of success for their games through user acquisition of their prototype, and gave them a feedback loop to improve their games based on real player interactions as early in the process as possible? Unity and ironSource’s combined offerings will uniquely position the combined company as the only game creation and growth platform for creators. This tighter integration between Unity’s Create and Operate solutions means a more powerful flywheel and data feedback loop that further supports creators’ success and their understanding of what’s working across gameplay, design and monetization. With the addition of SuperSonic, ironSource’s publishing solution, the combined company will also break down the barriers to publishing directly through the engine. Unity provides the engine that takes ideas from inception to viable businesses – an engine that negates the need for developers and the industry at large to reinvent the wheel each time a game is created. Its gears include the tools to create, publish, run, monetize, and grow. It is this integration with ironSource, and the resulting interactivity and interoperability across the game lifecycle, that makes Unity + ironSource unique. It is our combined knowledge of and passion for game developers that keeps us innovating to meet their needs today and in the future. We succeed only if creators succeed, and with ironSource joining the Unity family, we will make it easier for them to do so in one centralized, integrated platform. Join the discussion on the Unity Forums.
Cautionary Statement Regarding Forward-Looking Statements

This communication includes forward-looking statements. These forward-looking statements generally can be identified by phrases such as “will,” “expects,” “anticipates,” “foresees,” “forecasts,” “estimates” or other words or phrases of similar import. These statements are based on current expectations, estimates and projections about the industry and markets in which Unity Software Inc. (“Unity”) and ironSource Ltd. (“ironSource”) operate and management’s beliefs and assumptions as to the timing and outcome of future events, including the transactions described in this communication. While Unity’s and ironSource’s management believe the assumptions underlying the forward-looking statements are reasonable, such information is necessarily subject to uncertainties and may involve certain risks, many of which are difficult to predict and are beyond management’s control. These risks and uncertainties include, but are not limited to: the expected timing and likelihood of completion of the proposed transaction, including the timing, receipt and terms and conditions of any required governmental and regulatory approvals of the proposed transaction; the occurrence of any event, change or other circumstances that could give rise to the termination of the merger agreement; the outcome of any legal proceedings that may be instituted against the parties and others following announcement of the merger agreement; the inability to consummate the transaction due to the failure to obtain the requisite stockholder approvals or the failure to satisfy other conditions to completion of the transaction; risks that the proposed transaction disrupts current plans and operations […]

Read More

How To Build The Best Cross Platform Apps In 2022

In this fast-paced tech world, no business wants to miss the chance to have its own mobile app in the Google Play Store or the App Store. To make the most of it, it is essential to create an app that runs on as many operating systems as possible, and building a cross-platform app can play a key role in this. In this article we’ll explain what we mean by that statement.

Why choose to develop cross platform apps at all?

If you choose to go down the route of developing separate apps for each of the platforms you want to target – for example Windows, macOS, iOS and Android – this separate-app philosophy can bring a number of significant disadvantages which can outweigh any potential benefits. Apart from the versioning problems of having several different code bases and deployable packages, there are other intangibles like budgeting for potentially different development environments and suppliers. Using a single environment and IDE such as Embarcadero’s RAD Studio, which can produce cross-platform apps targeting all your desired platforms in one place while still creating native apps, focuses your development efforts and simplifies your resource requirements and budgeting into one quantifiable number. This is why cross-platform app development has become the go-to option for businesses seeking a presence on both Android and iOS. Cross-platform apps are popular because they reduce – in fact completely eliminate – the need to create a separate codebase for each of the different platforms. A cross-platform app can run on a variety of devices and platforms. In this article, I’ll give you a rundown on native vs. cross-platform development, the benefits of building a cross-platform app, and how to build one, and we’ll discuss the best cross-platform app builder. Let’s start!

What is the difference between single platform and cross platform apps for mobile?
The debate between individual, vertical, single-platform development – where there is a different app for each target – and cross-platform development – where there is one code base which can be compiled to produce apps and packages for all the desired targets – has been significant, often dividing opinion in the tech world. Some believe that creating separate apps for each individual platform is superior to a cross-platform approach. Companies like Uber, on the other hand, are rewriting their driver app using a cross-platform app framework. Cross-platform development has become increasingly common in recent years. To be clear: cross-platform app development is the process of designing mobile (and in some cases desktop) apps that can run on a variety of systems. Developers choose this type of development because it requires only one programming effort and the app can run on Android, iOS, or Windows. In the case of RAD Studio, cross-platform apps written using its FireMonkey FMX framework can run on Windows, macOS, Android, iOS and, with FMXLinux, Linux computers too. Compare this to single-target vertical mobile apps. These kinds of apps are usually only able to operate on one vendor’s platform and, in the case of Apple, for example, also require you to use their own IDE, which only works on their hardware. The work done on such a vertical or narrow […]

Read More

How to migrate Atlassian’s Bamboo server’s CI/CD infrastructure to GitLab CI, part two

In part one of our series, I showed you how to migrate from Atlassian’s Bamboo Server to GitLab CI/CD. In this blog post we’re going to take a deep dive into how it works from a user’s perspective.

Get started

You’ve deployed the demo, so it’s time to play with it to understand how it works. Let’s imagine that one of the members of our project is John Doe. He is a software engineer responsible for developing some components (app1, app2, and app3) of the entire product, and he and his team would like to test those components in several combinations in myriad preview environments. So, what does that look like? First of all, let’s make some commits to the app1, app2, and app3 source code and get successful builds upon those commits. After that, we should create releases for those apps to be able to deploy them (as the deployment part of the apps’ CI config only shows up when triggered by a Git tag, i.e., a GitLab release). A release can be created by launching the last step (manual-create-release) in a commit pipeline. That gives us a new release with an ugly name containing the date and commit SHA in the patch part (in accordance with the semver scheme). On the Tags tab for the same app you can now see that a deployment part of the pipeline has been triggered by the just-created GitLab release, but no actual environments to deploy are displayed (the _ item in the Deploy-nonprod stage is not an env).

Create an environment

But before that we have to briefly switch to another team, the one responsible for preparing infrastructure IaC templates. Navigate to the infra/environment-blueprints project and pretend you are a member of that team doing their job. Namely, imagine you have just created some initial set of IaC files (they are already kindly prepared by me and present in the repository). You’ve tested them and now you feel that they are ready to be used by the other members of the project.
You indicate such readiness of a particular version of the IaC files by giving it a Git tag. Let’s put a tag like v1.0.0 onto the HEAD version. You will see how the tags are used shortly. But first let’s make some changes to the IaC files (e.g., add a new resource for some of the apps) and create a second Git tag, let’s say v1.1.0. So, at this moment we have two versions of IaC templates (or blueprints) for our infrastructure – v1.0.0 and v1.1.0.

Deploy an app into the environment

Now we can return to John and his team. We assume John has been informed that the version of the IaC templates he should use is v1.0.0. He wants to create a new preview environment out of the IaC templates of that version and put app1 and app2 into that env. (Here starts a description of how a user interoperates with the infrastructure-set Git repo. Note that though the eventual idea is that it should be a Merge Request workflow – where you first get a Terraform plan within a Merge Request and can apply such a plan by merging the MR – which is widely advocated by GitLab, but for the sake […]
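The tag-gated behaviour described above – deployment jobs appearing only for Git tags – is typically expressed with `rules` in `.gitlab-ci.yml`. A sketch under assumptions: the job name, stage, and script below are illustrative, not the demo’s actual config.

```yaml
# Sketch of a tag-gated deploy job; job and script names are illustrative.
deploy-nonprod:
  stage: deploy
  script:
    - ./deploy.sh "$CI_COMMIT_TAG"   # hypothetical deploy script
  rules:
    # CI_COMMIT_TAG is a GitLab predefined variable; it is only set when
    # the pipeline was triggered by a Git tag (i.e., a GitLab release),
    # so this job is skipped entirely for ordinary commit pipelines.
    - if: $CI_COMMIT_TAG
```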

Read More

5 Tips for managing monorepos in GitLab

GitLab was founded 10 years ago on Git because it is the market-leading version control system. As Marc Andreessen pointed out in 2011, software is eating the world, and we see teams and code bases expanding at incredible rates, testing the limits of Git. Organizations are experiencing significant slowdowns in performance and added administration complexity when working on enormous or monolithic repositories.

Why do organizations develop on monorepos?

Great question. While some might believe that monorepos are a no-no, there are valid reasons why companies, including Google and GitLab (that’s right, we operate a monolithic repository!), choose to do so. The main benefits are:

- Monorepos can reduce silos between teams, streamlining collaboration on design, development, and operation of different services, because everything is within the same repository.
- Monorepos help organizations standardize on tooling and processes. If a company is pursuing a DevOps transformation, a monorepo can help accelerate change management when it comes to new workflows or the rollout of new tools.
- Monorepos simplify dependency management because all packages can be updated in a single commit.
- Monorepos offer unified CI/CD and build processes. Having all services in a single repository means that you can set up one system of pipelines for everyone.

While we still have a ways to go before monorepos are as easy to manage as multi-repos in GitLab, we put together five tips and tricks to maintain velocity while developing on a monorepo in GitLab.

1. Use CODEOWNERS to streamline merge request approvals

CODEOWNERS files live in the repository and assign an owner to a portion of the code, making it much more efficient to process changes. Investing time in setting up a robust CODEOWNERS file that you can then use to automate merge request approvals from required people will save developers time down the road.
You can then set your merge requests so they must be approved by Code Owners before merge. The code owners specified for the files changed in a merge request will be automatically notified.

2. Improve Git operation performance with Git LFS

A universal truth of Git is that managing large files is challenging. If you work in the gaming industry, I am sure you’ve been through the annoying process of trying to remove a binary file from the repository history after a well-meaning coworker committed it. This is where Git LFS comes in. Git LFS keeps all the big files in a different location so that they do not balloon the size of the repository. The GitLab server communicates with the Git LFS client over HTTPS. You can enable Git LFS for a project by toggling it in the project settings. All files in Git LFS can be tracked in the GitLab interface, which marks files stored there with an LFS icon.

3. Reduce download time with partial clone operations

Partial clone is a performance optimization that allows Git to function without a complete copy of the repository; the goal of this work is to allow Git to better handle extremely large repositories. As we just discussed, storing large binary files in Git is normally discouraged, because every large file added is downloaded by everyone who clones or fetches changes thereafter. These downloads are slow and problematic, especially when working from a slow or unreliable internet connection. Using partial clone with a […]
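The CODEOWNERS file from tip 1 uses a simple path-pattern-to-owner syntax. A minimal sketch – the paths and group names here are hypothetical:

```
# CODEOWNERS: each line maps a path pattern to the users or groups
# that own it. All paths and group names below are made up.
/app1/        @team-app1
/services/    @backend-team
/docs/        @tech-writers
*.tf          @infrastructure
```

With a file like this in place, GitLab can require approval from the matching owners before a merge request touching those paths can be merged.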

Read More

Top 5 Tools In My Web App Development Toolkit

There is no denying that web and mobile apps are part and parcel of all businesses, whether small or large. Organizations rely on these apps to reach out to their customers, automate their business processes, keep track of sales, monitor employee performance, and even make future strategic decisions. Naturally, this makes web app development a critical process for any software engineer. In the context of web application development, it is important to know about the best JavaScript tools to speed up the entire process of app creation. These tools are now key to accelerating development and reducing time-to-market. Developers can now spend more time meeting the requirements of their clients and less time writing boilerplate code. When it comes to tools for JavaScript app development, Sencha Ext JS is not just a complete framework for web and mobile app building. It also includes the most important tools required at all stages of app creation. Whether you are designing the app, creating its user interface, writing its business logic, designing its database schema, or packaging it for the end user, Sencha Ext JS has an automated solution for you. Here is a list of the top 5 must-have tools, which are indispensable for any developer. Continue reading to discover why these tools are important and why you should include them in your app development toolkit.

1. What is Sencha Architect in Web App Development?

Sencha Architect is an app builder with a visual interface. You can use Sencha Architect to develop cross-platform HTML5 web applications for the desktop. As Ext JS is also a complete JavaScript mobile app framework, you can use Sencha Architect while building mobile apps. One of the most useful features of Sencha Architect is that it empowers you to build your apps using drag and drop, so you don’t have to spend much time on manual coding; much of the boilerplate code is generated for you.
Moreover, the generated code is optimized for high performance. The automatic code generation feature is available for desktop as well as mobile apps. Configurations and properties of many Ext JS UI components can be altered using a visual WYSIWYG window. This eliminates many of the coding errors that developers often make and helps speed up overall development.

2. What is Sencha Themer?

While the default styles and themes provided by Ext JS are excellent, they may not be compatible with your organization’s color schemes. Sencha provides an easy way to customize all styles, color schemes, and themes with Sencha Themer. Sencha Themer, like Sencha Architect, is a visual tool with a graphical interface. It enables you to style Ext JS apps without writing a single line of code. Additionally, you can create custom themes and reuse them in other apps. Sencha Themer also includes a smart color palette, which enables you to apply different color combinations to different component states. The color palette, with its progressively lighter and darker shades, makes it very easy to choose the base, body background, and font colors. What’s more, Sencha Themer has a font management option: you can quickly select the fonts of your choice for your app, and you can also add web fonts from Google Fonts.

3. How Does Sencha Cmd Accelerate Web App […]

Read More

SerializeReference improvements in Unity 2021 LTS

SerializeReference includes support for polymorphism, which means that a field can be assigned an instance of a class that derives from the field type. In fact, the field type can even be System.Object, the root base class of every C# class. But this opens up the possibility of a successfully compiled project missing the definitions of classes that were previously available and had been saved into a scene or asset file. Classes can go missing when source files are removed, classes are renamed, or classes are moved to another assembly. When loading a SerializeReference host object, the fully qualified type name of each managed reference object is examined and must be resolved back to a valid class type in order to instantiate it. In previous versions of Unity, missing classes could put the entire “host” object into an error state without loading any of the valid managed reference objects. So if you had a “host” with an array of 15 managed reference objects, but a single object could not be resolved, then you would not see any of them in the Inspector. There would be an error logged in the console – even though the host object was not visually marked as being in an error state when inspected – and all edits made would be silently discarded. In Unity 2021, we now instantiate all loadable managed reference objects and replace the missing ones with null. This gives users an opportunity to see more of the state of the host object and facilitates the resolution of missing types. Meanwhile, if the missing type is restored while the host object is loaded, the triggered Domain Reload will restore the managed reference objects, and all fields referencing them will be updated properly.
Here is how objects with missing types appear in the Inspector. In 2020.3, when the Fruit class is missing, the Inspector does not show any array elements, and there is no indication of an error. In 2021.3, the Inspector warns you that the missing Fruit objects appear as null entries, whereas the Sandwich objects continue to be displayed. The error messages in the console related to missing types have also been updated so that they’re less repetitive – they simply identify which host objects have missing types. Compare the error message in 2020.3 with the corresponding warning message in 2021.3.

Read More