News

Why Should You Spend More Time Thinking About Cross Platform Development?

In the last few years, cross-platform development has taken center stage: it lets developers write code once and deploy it across all platforms. Native app development requires different code for each platform, along with an IDE such as Android Studio and a suitable SDK. Cross-platform frameworks, by contrast, need only a single codebase that can compile builds for both iOS and Android, so there is no need to develop applications from scratch for every platform. Developers can also save time with a low-code platform. There are various tools for this purpose, including RAD Studio, Delphi, and C++ Builder; they help you develop applications faster from one codebase that works on Android, iOS, Windows, Linux, and macOS.

What is cross-platform application development?

Cross-platform development, also called hybrid app development, is an approach to building apps that are compatible with different platforms. Developers write code once and reuse it, which lets them release a product quickly. Cross-platform development typically uses intermediate languages such as HTML, CSS, and JavaScript, which are not native to any operating system or device. Developers package the applications into native containers and then integrate them into each platform. Some fundamental strategies for cross-platform development include:

Compiling different versions of the same program for different operating systems.
Making the program abstract enough to accommodate different environments.
Using sub-tree files to fit the product to different operating systems.

What is the difference between native and cross-platform development?

Native applications rely on native technologies, which developers can deploy only to the devices they target, while cross-platform development offers multi-platform compatibility. Native app development is more of a hassle because you need to build two separate apps for Android and iOS; although these apps are similar in functionality, they require different codebases.

What are the advantages of using cross-platform development?

These are some advantages of using cross-platform applications.

1. Cross-platform development should allow you to reuse code components

Instead of writing new code for each platform, you reuse the same code. This lets developers release products faster across all platforms and reduces the effort spent on repetitive tasks: you can develop features for Android and iOS from a single codebase. As a result, cross-platform development optimizes efficiency. It is not a completely new concept; the software industry has benefited from reusing code this way for years.

2. A good cross-platform development solution reduces overall cost

Businesses are embracing various advanced strategies, but not everyone can afford to build native applications. Mobile apps help businesses deliver a personalized experience, and cross-platform development helps them reduce the overall cost by building applications for distinct platforms efficiently. The approach works especially well for corporate products that are less profitable, since companies can save costs by developing one universal solution.

3. Implementation with the right app-builder software should be easy

Various tools, such as RAD Studio, offer cross-platform solutions with a single-codebase framework, which makes it easier for developers to make adjustments. For instance, you can write code in HTML and then modify it for different operating systems. It makes the implementation of […]

Read More

The Pros And Cons Of Low Code App Development Platforms

If you follow us, you already know a lot about low-code app development. The beginner's guide was our initial tutorial about low-code platforms; in it, we explored no-code platforms from several angles. If you have not read that article yet, here is what you can learn from it:

The No-Code Movement
Differences between Low-Code Platforms
Why Low-Code Tools Matter
When Should You Not Use Low-Code Developer Tools?
Low-Code and Traditional Engineering

Additionally, the article 20 Fun Facts About The Best Low Code Platforms is helpful for understanding what is happening in the low-code app development industry, covering, for instance:

The Future of the Low-Code Market
How and Where to Apply Low-Code Platforms
Facts and Figures on the Low-Code Industry
Top Tech Companies' Approach to Low-Code Tools, and more

Now that you have the resources to learn all the necessary background, we can dive into the pros and cons of low-code app development platforms. According to Gartner, demand for application development is growing five times faster than IT's capacity to meet it. Business owners request more web applications, and IT departments struggle to keep up. Since developing apps and software services from the ground up takes time, demands serious engineering, and costs money, business owners are handing these problems to low-code developers. Low-code app development is fast and considerably cheaper than traditional software development. Moreover, low-code development tools do not require deep coding or complex problem-solving abilities from the developer: these platforms provide dozens or hundreds of built-in components and functionalities from which you can assemble a service like building blocks. But using low-code or no-code platforms does have its drawbacks.

Does low code mean low security?

If you adopt low-code development tools without IT's knowledge, you cannot be sure the software was built security-first. When you create apps with low-code platforms, you do not see any code, so you cannot alter the source code to make a process or transaction more secure. Nor do you know how the source code is produced while you develop by putting blocks of functions together. Of course, there is always a better ecosystem for developing secure, cross-platform native applications. For instance, Delphi with the FireMonkey framework provides both traditional and low-code software development; the best approach is to mix the two and create compelling applications in no time, targeting multiple platforms with a single codebase.

FireMonkey is one of the most stable cross-platform frameworks for almost any use:

Business apps
Game development
2D & 3D development
Utility applications that talk to hardware-specific functionality
Mobile, web, and desktop, all in one place
A huge third-party component pool to streamline the development process
Thousands of hours of official workshops by experts

It doesn't matter how stunning an app looks if the user experience is sluggish or unresponsive. On PCs, tablets, and mobile devices, FireMonkey-powered applications take full advantage of today's hardware with native CPU performance and GPU-powered visuals.

Do low-code platforms produce solutions more quickly than the alternatives?

As we said earlier, a low-code platform provides you with all the available building blocks, and you need […]

Read More

Terraform as part of the software supply chain, Part 1 – Modules and Providers

When talking about Terraform security, there are many resources covering the security of the infrastructure surrounding a given Terraform configuration. The security of Terraform itself, and the things that could go wrong when running it, have seen very little coverage so far. Some previously published work I'm aware of includes:

"Terraform providers and modules used in your Terraform configuration will have full access to the variables and Terraform state within a workspace. Terraform Cloud cannot prevent malicious providers and modules from exfiltrating this sensitive data. We recommend only using trusted modules and providers within your Terraform configuration."

The blog post you're reading is part one of a three-part series examining the supply-chain aspects of Terraform, and it looks at malicious Terraform modules and providers. I'll also give recommendations on securing the process of running Terraform against modules and providers gone rogue. The next two posts in the series will build on these findings and cover more in-depth topics and vulnerabilities.

Provider security

Providers in Terraform are executable binaries, so if a provider turns malicious it's certainly "game over" in the sense that it can do whatever the host OS it runs on allows. Providers carry a signature which Terraform validates upon installation of the provider. Since version 0.14, Terraform creates a dependency lock file which records checksums of the providers in use, in two different formats.

zh and h1 checksums

The first format, zh, is simply a SHA256 hash of the zip file containing a provider for a specific OS/hardware platform combination. The h1 hash is a so-called "dirhash" of the provider's directory. If we look at the following lock file, .terraform.lock.hcl, we can observe the two different types of hashes:

# This file is maintained automatically by "terraform init".
# Manual edits may be lost in future updates.
provider "registry.terraform.io/hashicorp/aws" {
  version = "4.11.0"
  hashes = [
    "h1:JTgGUEVVuuv82X0ePjDM73f+ZM+NfLwb/GGNAOM0CdE=",
    "zh:3e4634f4babcef402160ffb97f9f37e3e781313ceb7b7858fe4b7fc0e2e33e99",
    "zh:3ff647aa88e71419480e3f51a4b40e3b0e2d66482bea97c0b4e75f37aa5ad1f1",
    "zh:4680d16fbb85663034dc3677b402e9e78ab1d4040dd80603052817a96ec08911",
    "zh:5190d03f43f7ad56dae0a7f0441a0f5b2590f42f6e07a724fe11dd50c42a12e4",
    "zh:622426fcdbb927e7c198fe4b890a01a5aa312e462cd82ae1e302186eeac1d071",
    "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425",
    "zh:b0b766a835c79f8dd58b93d25df8f37749f33cca2297ac088d402d718baddd9c",
    "zh:b293cf26a02992b2167ed3f63711dc01221c4a5e2984b6c7c0c04a6155ab0526",
    "zh:ca8e1f5c58fc838edb5fe7528aec3f2fcbaeabf808add0f401aee5073b61f17f",
    "zh:e0d2ad2767c0134841d52394d180f8f3315c238949c8d11be39a214630e8d50e",
    "zh:ece0d11c35a8537b662287e00af4d27a27eb9558353b133674af90ec11c818d3",
    "zh:f7e1cd07ae883d3be01942dc2b0d516b9736a74e6037287ab19f616725c8f7e8",
  ]
}

The zh entries can also be found in the provider's v4.11.0 release within the SHA256SUMS file. To understand the single h1 dirhash entry, we need to look at the provider's directory.
In our Terraform project it is constructed like this:

$ ls .terraform/providers/registry.terraform.io/hashicorp/aws/4.11.0/linux_amd64/
terraform-provider-aws_v4.11.0_x5
$ cd .terraform/providers/registry.terraform.io/hashicorp/aws/4.11.0/linux_amd64/
$ sha256sum terraform-provider-aws_v4.11.0_x5
34c03613d15861d492c2d826c251580c58de232be6e50066cb0a0bb8c87b48de  terraform-provider-aws_v4.11.0_x5
$ sha256sum terraform-provider-aws_v4.11.0_x5 > /tmp/dirhash
$ sha256sum /tmp/dirhash
253806504555baebfcd97d1e3e30ccef77fe64cf8d7cbc1bfc618d00e33409d1  /tmp/dirhash
$ echo 253806504555baebfcd97d1e3e30ccef77fe64cf8d7cbc1bfc618d00e33409d1 | ruby -rbase64 -e 'puts Base64.encode64 [STDIN.read.chomp].pack("H*")'
JTgGUEVVuuv82X0ePjDM73f+ZM+NfLwb/GGNAOM0CdE=

The dirhash, called h1 in the lock file, is created from an alphabetically sorted list of sha256sum-style "hash  filename" lines. That list is then run through sha256sum again, and the resulting hash is taken in its binary representation and converted to Base64. From an attacker's perspective, the interesting part about the lock file is that it can contain multiple zh and h1 hashes per provider. It is also noteworthy that the two types don't have to have any relationship to each other. If we modify a downloaded provider's content on disk, we can simply place the corresponding h1 hash next to any other h1 hash in the lock file. Since there can be multiple entries, we would not break any legitimate installation; we would just allow-list a modified on-disk provider directory on top of what's already allowed.

Lessons learned here

Put your .terraform.lock.hcl under version control (Terraform even suggests this on the […]
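The shell steps above can be reproduced in a few lines of Python. This is only a sketch of the h1 scheme as described here (sorted sha256sum-style lines, hashed again, Base64-encoded) — the authoritative implementation is the Go dirhash code that Terraform uses, and details such as nested directories are ignored in this sketch:

```python
import base64
import hashlib
import os

def h1_dirhash(directory):
    """Sketch of the h1 "dirhash" described above: build sha256sum-style
    lines ("<hex>  <name>\n") sorted by file name, hash that listing with
    SHA256, and Base64-encode the raw digest."""
    lines = []
    for name in sorted(os.listdir(directory)):
        with open(os.path.join(directory, name), "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        lines.append(f"{digest}  {name}\n")
    listing = "".join(lines).encode()
    return "h1:" + base64.b64encode(hashlib.sha256(listing).digest()).decode()
```

Running this over a provider directory with a single binary mirrors the manual `sha256sum` / `ruby` pipeline shown above.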

Read More

Learn Python with Pj! Part 5 – Building something with the Twitter API

This is the fifth and last installment in the Learn Python with Pj! series. Make sure to read the earlier parts first.

Putting it all together

I've completed my Python course on Codecademy, and I'm excited to put the skills I learned into building something practical. I've worked with the Twitter API before; I wrote a few bots in Node.js to make them tweet and respond to tweets they're tagged in. I thought it'd be fun to work with the API again, but this time in Python. I didn't just want to make another bot, so I had to figure out something else. In this case, I made a bot that can track hashtags being used in real time on Twitter. My repo contains a few different files, but live_tweets.py is what we'll focus on for this blog. Let's talk about how I built it and what it does.

import tweepy
import config

auth = tweepy.OAuth1UserHandler(
    config.consumer_key,
    config.consumer_secret,
    config.access_token,
    config.access_token_secret
)
api = tweepy.API(auth)

# Prints the text of tweets using the hashtags designated in stream.filter(track=[])
class LogTweets(tweepy.Stream):
    def on_status(self, status):
        date = status.created_at
        username = status.user.screen_name
        try:
            tweet = status.extended_tweet["full_text"]
        except AttributeError:
            tweet = status.text
        print("**Tweet info**")
        print(f"Date: {date}")
        print(f"Username: {username}")
        print(f"Tweet: {tweet}")
        print("*********")
        print("*********\n")

if __name__ == "__main__":
    # Creates an instance of LogTweets with authentication
    stream = LogTweets(config.consumer_key, config.consumer_secret,
                       config.access_token, config.access_token_secret)
    # Hashtags, as strings in this list, will be watched live on Twitter
    hashtags = []
    print("Looking for Hashtags...")
    stream.filter(track=hashtags)

Here's how this all works. First, we import two modules: tweepy and config. Tweepy is a wrapper that makes using the Twitter API very easy; config lets us use a config file and keep our secrets safe.
This is important since using the Twitter API involves four keys that are specific to your Twitter developer account. (Getting these keys is covered in the Twitter documentation.) We'll talk about what's in the config file and how it works later. The next line defines the variable auth using Tweepy's built-in authorization handler. Normally you'd put the keys directly in here, but since we're trying to keep secrets safe, we handle them through the config file. To reference the variables hosted in the config file, we type config.variable_name. Finally, to access the Tweepy API, we create the variable api, with the auth variable from the line above passed into tweepy.API(). Now the variable api gives us access to all the features in Tweepy's Twitter API library.

You're invited! Join us on June 23rd for the GitLab 15 launch event with DevOps guru Gene Kim and several GitLab leaders. They'll show you what they see for the future of DevOps and The One DevOps Platform.

For our purposes, we want to find a hashtag being used, then collect the tweet that used it and print some information about the tweet to the console. To make this happen, we've created a class called LogTweets that subclasses tweepy.Stream. A stream is a Twitter API term that refers to all of the tweets being posted on Twitter at any given moment; think of it as opening a window looking out onto every single tweet as it's posted. We have to keep this connection open in order to find tweets that use our hashtag. Inside LogTweets, we define a […]
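The config module referenced above is just a plain Python file holding the four credentials. Here is a minimal sketch — the values are placeholders, and the variable names simply mirror the attributes the script reads (config.consumer_key and friends). Keep this file out of version control, for example via .gitignore:

```python
# config.py -- placeholder credentials; the real values come from your
# Twitter developer account. Do not commit this file to version control.
consumer_key = "YOUR_CONSUMER_KEY"
consumer_secret = "YOUR_CONSUMER_SECRET"
access_token = "YOUR_ACCESS_TOKEN"
access_token_secret = "YOUR_ACCESS_TOKEN_SECRET"
```

With this file next to live_tweets.py, `import config` picks it up from the same directory.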

Read More

UnReview a year later: How GitLab is transforming DevOps code review with ML-powered functionality

A little over a year ago, GitLab acquired UnReview, a machine learning-based solution for automatically identifying appropriate code reviewers and distributing review workloads and knowledge. Our goal is to integrate UnReview's ML-powered code review features throughout GitLab, the One DevOps Platform. We checked in with Taylor McCaslin, principal product manager, ModelOps, at GitLab, to find out the impact UnReview has had so far and what comes next.

The idea of applying machine learning to code review was already underway at GitLab before the UnReview acquisition. What was it about ML/AI and automation that seemed a good fit for the code review process? How did the UnReview acquisition affect that strategy?

The acquisition of UnReview gave GitLab a practical way to get started, with a focused value proposition that was an obvious fit for the platform. ML/AI is a lot more than just having a useful algorithm. UnReview and its team gave GitLab talent with experience building MLOps pipelines and working with production DataOps workflows. As a source code management (SCM) and continuous integration (CI) platform, MLOps and DataOps are key ambitions for our ModelOps stage. UnReview is the foundational anchor of our Applied ML group, and we anticipate developing more ML-powered features on the base we've built by integrating UnReview into our One DevOps Platform. If it's something you manually set today within GitLab, we'll consider suggestions and automations: suggested labels, assignees, issue relationships, etc. You can learn more about our plans on our Applied ML direction page.
There were three specific objectives for the UnReview project when you first started:

Eliminate the time wasted manually searching for an appropriate reviewer for code changes.
Make recommendations that consider the reviewers' experience and balance the review load across the team, which additionally facilitates knowledge sharing.
Provide analytics on the state of code review in the project, explaining why a particular code reviewer is recommended.

Have you had to change or add to these in any way?

We now have Suggested Reviewers running for external beta customers, and we're dogfooding it internally. We've learned a lot about what makes a good code reviewer. Some signals are obvious, like familiarity with the changed files and a history of committing to that area of the code. But there are less obvious ones, like what type of code someone has experience with (front-end or back-end). We're finding the concept of recency interesting: the idea that people who have interacted with the files and functions more recently may be better suited to review the code. Also, people leave companies, and that's usually not something that can be inferred from the source graph, so we're working on merging additional GitLab activity data into the recommendation engine. In addition, we're thinking a lot about bias in our recommendations. For example, a senior engineer likely has the most commits across a project, but we don't always want to recommend a senior engineer. The more we work with the algorithm and recommendations, the more nuance we find. Not every organization does code review the same way, so we're […]

Read More

We are splitting our database into Main and CI

Improving the performance and reliability of GitLab.com has always been a top priority for GitLab. While we continuously make iterative improvements to GitLab and our production architecture, we anticipate making a larger change to improve the scalability and reliability of GitLab.com: we are splitting our single PostgreSQL database into a main and a ci database. We believe this process, also known as functional decomposition, will increase GitLab's database capacity by roughly 2x and allow GitLab.com to continue to scale.

When will the split take place, and what does this mean for users of GitLab.com?

This change is planned to take place between Saturday, 2022-07-02, 05:00 UTC and Saturday, 2022-07-02, 09:00 UTC. It is anticipated to include a service downtime of up to 120 minutes between 06:00 and 08:00 UTC, during which you will experience complete service disruption of GitLab.com. We are taking downtime to ensure that the application works as expected following the split and to minimize the risk of any data integrity issues.

Background

GitLab.com's database architecture uses a single PostgreSQL database cluster. This single cluster (let's call it main) consists of a single primary and multiple read-only replicas, and stores the data generated by all GitLab features. Database reads can be scaled horizontally through read-only replicas, but writes cannot, because PostgreSQL does not natively support active-active replication. A large portion of all writes are generated by features related to Continuous Integration (CI). So, to scale GitLab.com's database capacity, we are splitting the single PostgreSQL main cluster into two clusters:

A Continuous Integration database cluster for all CI-related features (ci).
A database cluster for all other features (main).
At a high level, GitLab.com's database architecture is changing from one shared cluster into the two clusters described above. You can learn more through our direction page or by visiting our public epic: Decompose GitLab.com's database to improve scalability.

Impact

Splitting our database into main and ci will initially only impact GitLab.com. To ensure consistency, we plan to enable decomposition for self-managed GitLab instances later. While this split is a significant architectural change that we believe will increase GitLab's database capacity by roughly 2x, there are other benefits as well.

Increased performance

By running two separate database clusters, we believe we will increase the overall count of available database connections, which means we can serve more traffic. It also means that during peak hours there is more buffer, reducing the likelihood of congestion that can cause performance and UX degradation. Another significant advantage is that we anticipate being able to tune the main and ci databases independently, allowing us to optimize for these different workloads.

Increased stability

Splitting the database cluster into main and ci means that ci writes shift to the ci database cluster. We anticipate this will lead to reduced database saturation, which is a major cause of incidents. Consequently, we believe that the overall stability of GitLab.com may increase following the split. Increased stability means that development teams can spend more time generating value through new features and other improvements, and less time guarding against potential issues.

Shipping as fast as ever

A primary objective of this project was to provide tooling to our development teams so that they can continue to develop features that use multiple databases. All of these tools, […]
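GitLab is a Ruby on Rails application, and Rails supports connecting one application to multiple databases through config/database.yml. The fragment below is a hypothetical sketch of what a main/ci split can look like at the configuration level — the host names and database names are illustrative placeholders, not GitLab's actual production settings:

```yaml
# config/database.yml -- hypothetical sketch of a main/ci split
production:
  main:
    adapter: postgresql
    database: gitlabhq_production
    host: main-db.example.internal
  ci:
    adapter: postgresql
    database: gitlabhq_production_ci
    host: ci-db.example.internal
```

With a layout like this, each top-level key under the environment names a separate database connection that models can be routed to independently.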

Read More

How To Use Amazon Polly To Easily Convert Text To Speech In Cross Platform Apps

Sometimes it's best to say things out loud. At other times, reading some text or the contents of a message is difficult, or might even be impossible if, for example, you or your user have a visual impairment. And you certainly can't be silent if you're narrating a movie. Whatever your purpose, if you need a way to generate speech from text, the latest Appercept AWS SDK for Delphi supports Text-to-Speech (TTS) using Amazon Polly. This solution works smoothly for both desktop and cross-platform apps.

What is Amazon Polly?

Amazon Polly is a Text-to-Speech cloud service that uses Machine Learning (ML) to provide natural, human-like voice synthesis. Polly provides a variety of voices for different genders, languages, and ages, and supports Speech Synthesis Markup Language (SSML) for more control over the synthesis. Here is how we say something:

program SaySomething;

{$APPTYPE CONSOLE}

uses
  AWS.Polly;

var
  Client: IPollyClient;
  Request: IPollySynthesizeSpeechRequest;
  Response: IPollySynthesizeSpeechResponse;

begin
  Request := TPollySynthesizeSpeechRequest.Create;
  Request.OutputFormat := 'mp3';
  Request.Text := 'Hello, Polly!';
  Request.VoiceId := 'Aria';

  Client := TPollyClient.Create;
  Response := Client.SynthesizeSpeech(Request);
  if Response.IsSuccessful then
  begin
    // Do something with Response.AudioStream
  end;
end.
To enable and use SSML, just set the request property TextType to 'ssml' and make sure you wrap the value of Text in a <speak> tag, the root element SSML requires. For example:

Request.TextType := 'ssml';
Request.Text := '<speak>Hello, Polly!</speak>';

How can I use Amazon Polly in my cross-platform apps?

Why not check out the Polly Speak demo in our AWS SDK for Delphi Samples on GitHub for a complete example. What will you "say" with Delphi and Polly? Why not tell us @ApperceptHQ.

About the Appercept AWS SDK for Delphi

The Appercept AWS SDK for Delphi is available exclusively on GetIt with an active Enterprise or Architect subscription for Embarcadero Delphi or RAD Studio. You can install the SDK through the GetIt Package Manager.

Read More

We're back! Join us at the 2022 Annecy International Animation Film Festival

Friday, June 17, 10:00–10:45
XR for Large-Scale Live Events: A TED 2022 Case Study
Sinan AlRubaye, Chief Experience Officer, ICVR
Chris Swiatek, Head of Product, ICVR
Cassandra Rosenthal, Co-founder and Co-CEO, Kaleidoco

Particle Ink: Speed of Dark is an immersive mixed-reality experience that combines street art, live performance, and XR technology. It transports the viewer into a living graphic novel built with the Unity real-time 3D development platform, using a combination of projection mapping and live streaming. Conceived with creators from Cirque du Soleil, the immersive XR experience was brought live by ICVR to an audience of 1,100 people, each with a synchronized iPad, during the inaugural 2022 TED Talk. Here, we will reveal the technology behind Kaleidoco's unveiling of the Particle Ink world at the TED 2022 opening show and discuss the future of live XR.

Read More

