From the blog

C11 Threads in Visual Studio 2022 version 17.8 Preview 2

Charlie Barto, September 26th, 2023

Back in Visual Studio 2022 version 17.5, Microsoft Visual C gained preliminary support for C11 atomics. We are happy to announce that support for the other major concurrency feature of C11, threads, is available in Visual Studio version 17.8 Preview 2. This should make it easier to port cross-platform C applications to Windows without having to drag along a threading compatibility layer.

Unlike C11 atomics, C11 threads do not share an ABI with C++'s facilities, but C++ programs can include the C11 threads header and call the functions just like any C program. Both are implemented in terms of the primitives provided by Windows, so their usage can be mixed in the same program and on the same thread. The implementations are distinct, however; for example, you can't use C11 mutexes with C++ condition variables.

C11 contains support for threads and a variety of related concurrency primitives, including mutexes, condition variables, and thread-specific storage. All of these are implemented in Visual Studio version 17.8 Preview 2.

Threads

Threads are created with thrd_create, to which you pass a pointer to the desired entry point and a user data pointer (which may be null), along with a pointer to a thrd_t structure to fill in. Once you have a thrd_t created with thrd_create, you can call functions to compare it to another thrd_t, join it, or detach it. Functions are also provided to sleep or yield the current thread.

int thread_entry(void* data) { return 0; }

int main(void) {
    thrd_t thread;
    int result = thrd_create(&thread, thread_entry, NULL);
    if (result != thrd_success) {
        // handle error
    }
    result = thrd_join(thread, NULL);
    if (result != thrd_success) {
        // handle error
    }
    return 0;
}

A key difference between our implementation and C11 threads implementations based on pthreads is that threads cannot detach themselves using thrd_current() and thrd_detach().
This is because of a fundamental difference in how threads work on Windows versus Unix descendants: we would need a shared data structure that tracks thread handles to implement the typical behavior. On Unix derivatives the integer thread ID is the handle to the thread, and detaching just sets a flag causing the thread to be cleaned up immediately when it finishes. This makes detached threads somewhat dangerous to use on Unix derivatives, since after a detached thread exits any other references to that thread ID will be dangling and could later refer to a different thread altogether. On Windows the handle to a thread is a Win32 HANDLE and is reference counted; the thread is cleaned up when the last handle is closed. There is no way to close all handles to a thread except by keeping track of them and closing each one. We could implement the Unix/pthreads behavior by keeping a shared mapping of thread ID to handle, populated by thrd_create. If you need this functionality you can implement something like it yourself, but we don't provide it by default because it would incur a cost even when unused. Better workarounds may also be available, such as passing a pointer to the thrd_t populated by thrd_create via the user data pointer to the created thread.

Mutexes

Mutexes are provided through the mtx_t structure and […]

Read More

Clang v15 compiler support coming to C++Builder 12

For C++ developers who want to take advantage of new ISO C++ language features in Clang v15 along with the power and productivity of RAD development using C++Builder, stay tuned to the Embarcadero blogs and the C++Builder product website for news about the next release of C++Builder.

Note: "This blog post is based on a pre-release version of the RAD Studio software and it has been written with specific permission by Embarcadero. No feature is committed until the product GA release."

David Millington, Embarcadero Product Manager, presented a webinar on August 31, 2023 titled "Behind the Build: RAD Studio and C++Builder 12.0" that previewed the upcoming Clang compiler upgrade and the integration of Whole Tomato's Visual Assist into the IDE. You can watch the replay of the webinar at https://www.youtube.com/watch?v=B0Be_NFmEEE

The next release of C++Builder with Clang v15 support will include the following toolchain enhancements:

Compiler: based on Clang 15, named 'bcc64x'
C runtime: uses the Universal C Runtime (UCRT)
C++ runtime: a new RTL, based on several open source areas
STL: libc++
Linker: LLVM lld
Debug format: PDB (with IDE support)
The toolchain emits COFF object files and uses the Itanium ABI and name mangling
The default language standards are C++17 and C99

Here are two screen grabs from the August 31, 2023 David Millington webinar:

Embarcadero Special Offer: Buy RAD Studio 11.3 today and apply to join the RAD Studio 12 beta. The promotional offer is ending soon (4 days left as of this blog post). Find out additional information at https://www.embarcadero.com/radoffer

Keep Up To Date on C++Builder and ISO C++

To keep up to date on using C++Builder and the ISO C++ language, bookmark and read everything that Yılmaz Yörü posts on his Embarcadero blog at https://blogs.embarcadero.com/author/yilmazyoru/ Yılmaz also has a great site for learning C++ at https://learncplusplus.org/ Stay tuned to the Embarcadero C++Builder product page for additional information and news.

Read More

MSVC Machine-Independent Optimizations in Visual Studio 2022 17.7

Troy Johnson, September 25th, 2023

This blog post presents a selection of machine-independent optimizations that were added between Visual Studio versions 17.4 (released November 8, 2022) and 17.7 Preview 3 (released July 11, 2023). Each optimization below shows assembly code for both X64 and ARM64 to demonstrate the machine-independent nature of the optimization.

Optimizing Memory Across Block Boundaries

When a small struct is loaded into a register, we can optimize field accesses to extract the correct bits from the register instead of accessing it through memory. Historically in MSVC, this optimization has been limited to memory accesses within the same basic block. We are now able to perform this same optimization across block boundaries in many cases. In the example ASM listings below, a load to the stack and a store from the stack are eliminated, resulting in less memory traffic as well as lower stack memory usage.

Example C++ Source Code:

#include <string_view>

bool compare(const std::string_view& l, const std::string_view& r) {
    return l == r;
}

Required Compiler Flags: /O2

X64 ASM, 17.4:

sub        rsp, 56
movups     xmm0, XMMWORD PTR [rcx]
mov        r8, QWORD PTR [rcx+8]
movaps     XMMWORD PTR $T1[rsp], xmm0
cmp        r8, QWORD PTR [rdx+8]
jne        SHORT $LN9@compare
mov        rdx, QWORD PTR [rdx]
mov        rcx, QWORD PTR $T1[rsp]
call       memcmp
test       eax, eax
jne        SHORT $LN9@compare
mov        al, 1
add        rsp, 56
ret        0
$LN9@compare:
xor        al, al
add        rsp, 56
ret        0

X64 ASM, 17.7:

sub        rsp, 40
mov        r8, QWORD PTR [rcx+8]
movups     xmm1, XMMWORD PTR [rcx]
cmp        r8, QWORD PTR [rdx+8]
jne        SHORT $LN9@compare
mov        rdx, QWORD PTR [rdx]
movq       rcx, xmm1
call       memcmp
test       eax, eax
jne        SHORT $LN9@compare
mov        al, 1
add        rsp, 40
ret        0
$LN9@compare:
xor        al, al
add        rsp, 40
ret        0

ARM64 ASM, 17.4:

str         lr,[sp,#-0x10]!
sub         sp,sp,#0x20
ldr         q17,[x1]
ldr         q16,[x0]
umov        x8,v17.d[1]
umov        x2,v16.d[1]
stp         q17,q16,[sp]
cmp         x2,x8
bne         |$LN9@compare|
ldr         x1,[sp]
ldr         x0,[sp,#0x10]
bl          memcmp
cbnz        w0,|$LN9@compare|
mov         w0,#1
add         sp,sp,#0x20
ldr         lr,[sp],#0x10
ret
|$LN9@compare|
mov         w0,#0
add         sp,sp,#0x20
ldr         lr,[sp],#0x10
ret

ARM64 ASM, 17.7:

str         lr,[sp,#-0x10]!
ldr         q17,[x1]
ldr         q16,[x0]
umov        x8,v17.d[1]
umov        x2,v16.d[1]
cmp         x2,x8
bne         |$LN9@compare|
fmov        x1,d17
fmov        x0,d16
bl          memcmp
cbnz        w0,|$LN9@compare|
mov         w0,#1
ldr         lr,[sp],#0x10
ret
|$LN9@compare|
mov         w0,#0
ldr         lr,[sp],#0x10
ret

Vector Logical and Arithmetic Optimizations

We continue to add patterns for recognizing vector operations that are equivalent to intrinsics or short sequences of intrinsics. An example is recognizing common forms of vector absolute difference calculations. A long series of bitwise instructions can be replaced with specialized absolute value instructions, such as vpabsd on X64 and sabd on ARM64.

Example C++ Source Code:

#include

void s32_1(int * __restrict a, int * __restrict b, int * __restrict c, int n) {
    for (int i = 0; i < n; i++) {
        a[i] = (b[i] - c[i]) > 0 ? (b[i] - c[i]) : (c[i] - b[i]);
    }
}

Required Flags: /O2 /arch:AVX for X64, /O2 for ARM64

X64 ASM, 17.4:

$LL4@s32_1:
movdqu  xmm0, XMMWORD PTR [r11+rax]
add     ecx, 4
movdqu  xmm1, XMMWORD PTR [rax]
lea     rax, QWORD PTR [rax+16]
movdqa  xmm3, xmm0
psubd   xmm3, xmm1
psubd   xmm1, xmm0
movdqa  xmm2, xmm3
pcmpgtd xmm2, xmm4
movdqa  xmm0, xmm2
andps   xmm2, xmm3
andnps  xmm0, xmm1
orps    xmm0, xmm2
movdqu  XMMWORD PTR [r10+rax-16], xmm0
cmp     ecx, edx
jl      SHORT $LL4@s32_1

X64 ASM, 17.7:

$LL4@s32_1:
vmovdqu xmm1, XMMWORD PTR [r10+rax]
vpsubd  xmm1, xmm1, XMMWORD PTR [rax]
vpabsd  xmm2, […]

Read More

Live chat with FreshChat – the solution for growing sales, customer satisfaction, and loyalty in 2023

Live chat is an effective and interactive way to communicate online with the potential and existing customers of an online (e-commerce) business. If you are interested in a live chat solution that offers advanced features, easy integrations, and measurable results, then this article is for you. In this article, we present FreshChat, the live chat solution from Freshworks, a platform that lets you start real-time conversations with your e-shop's visitors, offer them support, make them offers, and build their loyalty. You will learn how to choose the right edition for your business, how to set it up on your website, how to use the advanced chatbot and omnichannel communication features, how to monitor and analyze the performance of your customer service team, and how to optimize it to improve conversations with customers. By the end of the article, you will understand how the Freshworks live chat solution can help you grow sales, customer satisfaction, and loyalty.

Introduction: what is live chat and why it matters for your business

Live chat is a way of communicating online with other people through a chat system that allows real-time conversations. Live chat is important for an online business, especially one in e-commerce, because it can help it reach its goal of growing its customer base and order volume. Live chat platforms offer an effective and interactive way to communicate online with potential and existing customers, which can bring significant benefits to your e-commerce business. Next, we present the live chat solution from Freshworks, a complete and customizable platform that lets you create real-time conversations with your website's visitors.
Freshworks: a complete and customizable live chat solution

FreshChat from Freshworks is a solution for automating conversational communication that helps your customer support team communicate more easily with customers across multiple channels, such as web chat, email, phone, and social messaging platforms like WhatsApp, Instagram, or iMessage. With this platform you can create different interaction scenarios with an e-shop's visitors. FreshChat lets you create personalized, relevant conversations with your customers that help increase conversion rates, customer satisfaction, and loyalty.

Omnichannel communication options offered by FreshChat

FreshChat offers several channels for communicating with potential customers and lets you connect any of them to your unified inbox, so you can manage all customer conversations in one place. FreshChat also provides advanced chatbot, co-browsing, video, and voice features that let you automate and personalize conversations with customers.

Advanced live chat features from Freshworks

Freddy is the intelligent chatbot from Freshworks that lets you automate and personalize conversations with your customers. With Freddy, you can offer customers 24/7 support, give them quick and effective answers, solve their problems, and fulfill their requests. If you want to learn more about Freddy, you can visit the Freddy chatbot by Freshworks page.

How to monitor and analyze the performance of the Freshworks live chat solution: reports, metrics, and feedback

The conversation analytics features offered […]

Read More

What Is Aggregate Member Initialization In C++?

Aggregate member initialization is one of the features of C++, improved and modernized in C++11, C++14, C++17, and C++20. With this feature, objects can initialize an aggregate member from a braced-init-list. In this post, we explain what aggregate member initialization is and how it changed across the modern C++ standards.

What is aggregate member initialization in modern C++?

Aggregate initialization initializes aggregates. It is a form of list-initialization; since C++20, an aggregate can also be initialized with parentheses, a form of direct-initialization. An aggregate is an array or a class type (a class, a struct, or a union).

Here is the general syntax:

T object = {arg1, arg2, …};

In C++11 and above, the = can be omitted:

T object {arg1, arg2, …};

C++20 adds three new forms: parenthesized aggregate initialization and designated initializers:

T object (arg1, arg2, …);
T object = { .designator = arg1 , .designator { arg2 } … };
T object { .designator = arg1 , .designator { arg2 } … };

How to use aggregate member initialization in modern C++?

A simple aggregate is a struct with public members. Given x, y coordinates in a struct, we can initialize them in a new xy object like this:

struct st_xy { float x, y; };

struct st_xy xy{ 3.2f, 5.1f };

In C++20, with designated initializers, consider a struct that has a, b, c members. We can initialize the first two members like this:

struct st_x { short int a, b, c; };

struct st_x x{ .a = 10, .b = 20}; // x.c will be 0

Since C++14, aggregates may also have default member initializers:

struct st_y { int a = 100, b = 200, c, d; } y;

In C++17 and above, an aggregate may have base classes, so we can use this st_y as a base, add a new member in a derived struct, and initialize it like this:

struct st_z : st_y { int e; };

struct st_z z{ 1, 2, 3, 4, 5};

What restrictions are there for aggregate member initialization in C++?
If we consider the C++17 standard, aggregate initialization can NOT be applied to a class type that has any of the following:

private or protected non-static data members,
user-provided, inherited, or explicit constructors (explicitly defaulted or deleted constructors are allowed),
virtual, private, or protected base classes,
virtual member functions.

If we consider the C++20 standard, aggregate initialization can NOT be applied to a class type that has any of the following:

private or protected non-static data members,
user-declared or inherited constructors,
virtual, private, or protected base classes,
virtual member functions.

Is there a full example of aggregate member initialization in C++?

Here is a full example that explains the most used features of aggregate member initialization:

#include

// Aggregate in C++14
struct st_xy { float a, b; };

struct st_xy xy{ 3.2f, 5.1f };

struct st_x { […]

Read More

10 things you didn’t know you could do with GitHub Projects

GitHub Projects has been adopted by program managers, OSS maintainers, enterprises, and individual developers alike for its user-friendly design and efficiency. We all know that managing issues and pull requests in our repositories can be challenging. To help you optimize your usage of GitHub Projects to plan and track your work from start to finish, I'll be sharing 10 things you can do with GitHub Projects to make it easier to keep track of your issues and pull requests.

1. Manage your projects with the CLI

If you prefer to work from your terminal, we've made it more convenient for you to manage and automate your project workflows with the GitHub CLI project command. This essentially allows you to work more collaboratively with your team to keep your projects updated with your existing toolkit.

For example, if I wanted to add a draft issue to my project "Learning Ruby," I would first ensure that I have the CLI installed and that I'm authenticated with the project scope. Once authenticated, I need to find the number of the project I want to manage with the CLI. You can find the project number by looking at the project URL. For example, in https://github.com/orgs/That-Lady-Dev/projects/4, the project number is "4." Now that we have the project number, we can use it to add a draft issue to the project. The command looks like this:

gh project item-create 4 --owner That-Lady-Dev --title "Test Adding Draft" --body "I added this draft issue with GitHub CLI"

When we run this, a new draft issue is added to the project. You can do a lot more with the GitHub CLI and GitHub Projects; check out our documentation to see all the possibilities of interacting with your projects from the terminal.

2. Export your projects to TSV

If you ever need your project data, you can export your project view to a file, which can then be imported into FigJam, Google Sheets, Excel, or any other platform that supports TSV files.
Go to any view of your project and click the arrow next to the view name, then select Export view data. This will give you a TSV file that you can use. Though TSV offers much better formatting than a CSV file, if CSV is your jam you can ask GitHub Copilot Chat how to convert a TSV file to a CSV file, copy the code, run it, and get your new CSV document. Here's a quick gist of how I converted a TSV to a CSV with GitHub Copilot Chat!

3. Create reusable project templates

If you often find yourself recreating projects with similar content and structure, you can set a project as a template so you and others can use it as a base when creating new projects. To set your project as a template, navigate to the project "Settings" page, and under the "Templates" section toggle on Make template. This will turn the project into a template that can be used with the green Use this template button at the top of your project, or when creating a new project. Building a library of templates that can be reused across your organization can help you and your teams share best practices and inspiration when getting started with a project!

4. Make a […]

Read More

GitHub Enterprise Server 3.10 is now generally available

GitHub Enterprise Server 3.10 is now generally available. With this version, organizations can give developers and administrators more control over their repositories with enhanced security and compliance controls, and ensure secure development is a top priority. Highlights of this release include:

GitHub Projects is generally available, with additions that help teams manage large projects (#650)
Always deploy safely, with custom deployment protection rules for GitHub Actions (#199) and new policy control over runners
Start finding vulnerabilities in all your repositories in just a few clicks with a new default setup experience for GitHub Advanced Security code scanning (#642), and track security coverage and risk at the enterprise level (#766)
Fine-grained personal access tokens (PATs) bring granular control to PATs (#184)
Branch protections meet more compliance needs with more control over merge policies
Back up instances faster and more incrementally, for more confident operations

Download GitHub Enterprise Server 3.10 now. For help upgrading, use the Upgrade Assistant to find the upgrade path from your current version of GHES to this new version.

GitHub Projects is generally available

Organize and track your team's work directly on GitHub using the new Projects, now generally available on Enterprise Server. Built like a spreadsheet, project tables give you a live workspace to filter, sort, and group issues and pull requests. This gives administrators greater visibility across everything that's happening, and development teams can collaborate and stay in flow more efficiently.

"Before GitHub Projects, I would have needed two or more tools to get context from interdisciplinary teams on their projects. Now I can get context at a glance all in one place, so teams can be efficient and stay in the flow."
– Lisa Vanderschuit, Engineering Program Manager, Office of the CTO, Shopify

Always deploy safely, with custom deployment protection rules for GitHub Actions

Shipping software faster means knowing you're doing so safely. That means deployments need to be both governed and automated. Teams using GitHub Actions for continuous deployment have long been able to protect specific environments to enforce deployment protection rules, such as requiring approval from specific team members. With GitHub Enterprise Server 3.10, teams can create their own custom deployment protection rules (beta) to set up rigorous guardrails that ensure only the deployments that pass all quality, security, and manual approval requirements make it to production.

What's more, this release also gives administrators new control over the security and management of runners for GitHub Actions. Centrally managing self-hosted runners is a best practice that helps companies ensure that runners aren't compromised by untrusted code in a workflow. Now, enterprise administrators can disable repository-level self-hosted runners across organizations and user namespaces, ensuring that all jobs are hosted on centrally governed machines.

Start finding vulnerabilities in all your repositories, in just a few clicks

We're always looking for ways to make it easier for developers to secure their code. This means building security tools that provide a frictionless experience for developers so they can focus on innovation. With code scanning, automated security checks are run with every pull request, surfacing issues in the context of the development workflow and empowering developers to fix 48% of vulnerabilities in real time and 72% within 28 days. In this release, we're making it easier for all developers to realize these results with seamless enablement. Developers […]

Read More

mTLS: When certificate authentication is done wrong

Although X.509 certificates have been around for a while, they have become more popular for client authentication in zero-trust networks in recent years. Mutual TLS, or authentication based on X.509 certificates in general, brings advantages compared to passwords or tokens, but you get increased complexity in return.

In this post, I'll take a deep dive into some interesting attacks on mTLS authentication. We won't bother you with heavy crypto; instead we'll look at implementation vulnerabilities and how developers can make their mTLS systems vulnerable to user impersonation, privilege escalation, and information leakage. We will present some CVEs we found in popular open-source identity servers and ways to exploit them. Finally, we'll explain how these vulnerabilities can be spotted in source code and how to fix them. This blog post is based on work that I recently presented at Black Hat USA and DEF CON.

Introduction: What is mutual TLS?

Website certificates are a widely recognized technology, even to people who don't work in the tech industry, thanks to the padlock icon used by web browsers. Whenever we connect to Gmail or GitHub, our browser checks the certificate provided by the server to make sure it's truly the service we want to talk to. Fewer people know that the same technology can be used to authenticate clients: the TLS protocol is also designed to verify the client using public- and private-key cryptography. It happens at the handshake level, before any application data is transmitted: if configured to do so, a server can ask the client to provide a security certificate in the X.509 format.
This certificate is just a blob of binary data that contains information about the client, such as its name, public key, issuer, and other fields:

$ openssl x509 -text -in client.crt
Certificate:
    Data:
        Version: 1 (0x0)
        Serial Number: d6:2a:25:e3:89:22:4d:1b
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=localhost //used to locate issuer's certificate
        Validity
            Not Before: Jun 13 14:34:28 2023 GMT
            Not After : Jul 13 14:34:28 2023 GMT
        Subject: CN=client //aka "user name"
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                RSA Public-Key: (2048 bit)
                Modulus:
                    00:9c:7c:b4:e5:e9:3d:c1:70:9c:9d:18:2f:e8:a0:

The server checks that this certificate is signed by one of the trusted authorities. This is a bit similar to checking the signature of a JWT token. Next, the client sends a "Certificate verify" message encrypted with the private key, so that the server can verify that the client actually has the private key.

How certificates are validated

"Certificate validation" commonly refers to the PKIX certificate validation process defined in RFC 5280. In short, in order to validate the certificate, the server constructs a certification path (also known as a certificate chain) from the target certificate to a trust anchor. The trust anchor is a self-signed root certificate that is inherently trusted by the validator. The end-entity certificate is often signed by an intermediate CA, which is in turn signed by another intermediate certificate or directly by a trust anchor. Then, for each certificate in the chain, the validator checks the signature, validity period, allowed algorithms and key lengths, key usage, and other properties. There are also a number of optional certificate extensions: if they are included in the certificate, they can be checked as well. This process is quite complicated, so every language or […]

Read More

A faster way to manage version updates with Dependabot

When the next Log4j lands, you don't want to find out that you're several versions behind, and that it's going to take the team days to fix all of the breaking changes. Dependabot version updates automate the patching process, giving you a measure of protection from unwelcome surprises. But just as with any inbox, the sheer volume of available updates can quickly swell into a management hassle. And given that some types of related dependencies must be updated together (such as with Storybook, Angular, and AWS SDK), it's all too easy to put off these tasks until it's too late.

Today's general availability release is all about making it easier to stay on top of version updates and prevent breaking changes. Previously, Dependabot would open individual pull requests for each update, adding overhead to a developer's workflow and raising the possibility that related dependencies could fall out of sync. Now, you can specify groups of dependencies to update with a single pull request. Grouped version updates confer significant benefits to your development process: simplified dependency management, a reduced risk of breaking changes, and an opportunity to phase out third-party tools and manual workarounds.

"I've been testing the grouped Dependabot updates and just wanted to say it is awesome! Grouping functionality and configuration are exactly what we want."
– Nick Gibson, Causeway Capital Management

How it works

To take control of how Dependabot structures its pull requests, you can define groups of dependencies that will be updated together using groups in dependabot.yml. For example, you can configure groups that update many dependencies at once, or groups that minimize the risk of breaking changes. When you first configure a group, specify a group name that will display in pull request titles and branch names. Then, define other options to include or exclude specific dependencies from the group:

dependency-type: specify a dependency type to be included in the group; it can be development or production.
patterns: define strings of characters that match a dependency name (or multiple dependency names) to include those dependencies in the group.
update-type: specify the update type based on semantic versioning (for instance, where 1.2.3 equates to major.minor.patch) to include all updates of the same level in the same group.

You can also exclude dependencies from groups to always manage their updates individually:

exclude-patterns: define strings of characters that match a dependency name to exclude it from a group. For these dependencies, Dependabot will continue to raise individual pull requests to update each one to its latest version.

Bear in mind that you can only create groups for version updates.

Example 1: Group by dependency-type

In this example, "production" and "development" dependencies are grouped together, excluding those that match the pattern "rubocop*":

# `dependabot.yml` file using the `dependency-type` option to group updates
# in conjunction with `patterns` and `exclude-patterns`.
groups:
  production-dependencies:
    dependency-type: "production"
  development-dependencies:
    dependency-type: "development"
    exclude-patterns:
      - "rubocop*"
  rubocop:
    patterns:
      - "rubocop*"

Example 2: Group by patterns

In this example, a group named dev-dependencies will update dependencies in the bundler ecosystem at a weekly interval:

# `dependabot.yml` file with customized bundler configuration
# In this example, the name of the group is `dev-dependencies`, and
# only the `patterns` and `exclude-patterns` options are used.
version: 2
updates:
  # Keep bundler dependencies up to date
  - package-ecosystem: "bundler"
    directory: "/"
    schedule: […]

Read More

CodeQL team uses AI to power vulnerability detection in code

AI is fundamentally changing the technology and security landscape. At GitHub, we see AI as a way for developers to both speed up their development process and write more secure code at the same time. For instance, GitHub Copilot includes a security filter that targets the most common vulnerable coding patterns in Python and JavaScript, including hardcoded credentials, SQL injections, and path injections, to prevent vulnerable suggestions from being made. We're also looking at ways security teams can use AI to enhance their organizations' security posture, specifically leveraging prescriptive security intelligence to contextually assess, prioritize, visualize, and audit security posture in complex, interconnected, and hybrid environments.

For example, our CodeQL team is responsible for creating models for frameworks and APIs to help CodeQL discover more vulnerabilities out of the box. Creating and testing these models is a time-consuming process, so we started thinking about ways to use AI to help speed things up. The results have been incredibly exciting; the team was able to leverage AI to optimize our modeling process and power the way we detect vulnerabilities in code.

How the CodeQL team discovered a new CVE using AI modeling

For CodeQL to produce results, we need to be able to recognize APIs as sources, sinks, or propagators of untrusted user data, also known as tainted data. The open source software (OSS) community has developed thousands of packages that potentially contain APIs that we need to recognize. Keeping up with these packages is critical because missing a source, a sink, or a taint propagator could lead to false negatives. Traditionally, we modeled the APIs manually, but this was incredibly time-consuming for our team given the thousands of OSS frameworks. In the last six months, we've started using Large Language Models (LLMs) to automatically model APIs for us.

This not only turbocharged our modeling efforts, but allowed CodeQL to recognize more sinks, reducing CodeQL's false negative rate and helping it detect more vulnerabilities. When we make improvements to CodeQL, we often test them using a technique called variant analysis, which is a way to identify new types of security vulnerabilities. We often use this technique to run CodeQL queries across thousands of repositories hosted on GitHub.com. We did exactly that, and ran queries that use the AI-generated models across the most impactful repositories on GitHub.com. This combination of AI-generated models and variant analysis led the team to discover a new CVE (CVE-2023-35947), a path traversal vulnerability in Gradle. For more information about the exact vulnerability, check out the entry on the Security Lab's CodeQL Wall of Fame and the GitHub Advisory Database entry.

Learn more about multi-repository variant analysis

AI is fundamentally changing the way we secure our software. We will continue to strategically leverage AI to iterate and improve upon our security offerings, with an eye toward bringing AI-powered security testing into your development workflows. The discovery of the CVE in Gradle is just one example of how GitHub's security teams have been leveraging GitHub Advanced Security and AI to unlock incredible results. In March this year, we shipped multi-repository variant analysis (MRVA), allowing you to perform variant analysis at scale. If you're looking to get started with CodeQL and code scanning on your repository, check out our documentation. As always, CodeQL is free to use on open source repositories. If […]

Read More