Sovereign AI Infrastructure Dubai: The Ultimate 2026 Guide

Introduction: The “Wrapper” Era is Officially Over

If you look back at the digital landscape of the GCC over the past two years, it was defined by a single, electrifying word: Experimentation. Since then, the demand for the Sovereign AI infrastructure Dubai and Riyadh enterprises are asking for has exploded. The last two years were about experimentation, but 2026 is about ownership.

Every boardroom from the Dubai International Financial Centre (DIFC) to the King Abdullah Financial District (KAFD) was buzzing with “Proof of Concepts.” We saw a frantic rush to integrate public chatbots, wrap generic APIs, and see just how fast we could inject Generative AI into our workflows. It was a necessary phase of discovery. We needed to touch the fire to understand its heat.

But as we stand here in December 2025, looking squarely at the roadmap for 2026, the mood has shifted. The novelty of “chatting with a bot” has worn off. A harder, more strategic reality has settled in.

The question for 2026 is no longer, “Can AI do this?” The question is now, “Can AI do this securely, on our own infrastructure, without a single byte of data leaving the country?”

We are witnessing the end of the “Rented Intelligence” era and the beginning of the Sovereign AI infrastructure era that Dubai and Riyadh have been waiting for. For the nations of the Gulf Cooperation Council, 2026 is not about adopting AI; it is about institutionalizing it. And that requires a fundamental shift in infrastructure.

The “Rented Intelligence” Trap

For the past two years, most enterprises have essentially been “renting” their cognitive capabilities. When a bank in Riyadh or a hospital in Abu Dhabi relies solely on public models hosted on servers in Oregon or Frankfurt, they are building their digital future on rented land.

While convenient, this “API-wrapper” approach is colliding with three critical realities as we enter 2026:

1. The “Black Box” Liability

In a regulated, post-2025 world, “we don’t know why the AI said that” is no longer an acceptable answer during an audit. When you rent a model via an API, you cannot inspect the weights. You cannot guarantee that your proprietary IP isn’t being used to train the next version of a public model that your competitors will use. If your AI hallucinates and causes financial damage, you have no audit trail to prove why.

2. The Latency of Geography

Dubai’s D33 Agenda calls for an “AI-Enabled State.” That implies real-time decision-making in smart cities, logistics, and energy grids. You cannot build real-time critical infrastructure if your intelligence depends on the latency and uptime of a server farm 8,000 miles away. If the undersea cable slows down, your business intelligence shouldn’t have to slow down with it.

3. The Corporate Firewall is Dead (If You Use APIs)

Enterprise security used to be about keeping bad actors out. In the AI era, it’s about keeping your data in. Every time you send a prompt to a public model, that data leaves your perimeter. For sectors like Defense, Healthcare, and Finance, this is a non-starter.

The Regulatory Firewall: Why Compliance is Driving Sovereignty

The biggest driver for the shift to Sovereign AI in 2026 isn’t just technology; it is the law. The legal landscape in the GCC has matured rapidly, creating a “Regulatory Firewall” that makes public APIs risky for enterprise use.

Saudi Arabia: The NDAMO Mandate

In the Kingdom of Saudi Arabia, the National Data Management Office (NDAMO) has established strict governance frameworks.

  • Data Sovereignty: Critical national data (which often includes data from government contractors, energy sectors, and healthcare) generally must be stored and processed within the Kingdom’s borders.
  • The Cost of Non-Compliance: Penalties for mishandling sensitive data can reach up to 3 Million SAR (approx. $800,000 USD), not to mention the reputational damage.
  • The Implication: If your AI solution relies on sending customer data to an API hosted in the US, you are likely navigating a compliance minefield. This is why we specifically engineer NDAMO compliant AI solutions that ensure data never leaves the Kingdom.

UAE: The Personal Data Protection Law (PDPL)

The UAE’s Federal Decree-Law No. 45 of 2021 (PDPL) places stringent controls on cross-border data transfer.

  • Consent & Transfer: Transferring personal data to a country that does not have an “adequate level of protection” (as defined by the UAE Data Office) requires specific derogations.
  • The Implication: For UAE enterprises, managing AI Data Residency UAE requirements means “On-Premise” or “Local Cloud” isn’t just a tech preference; it is the safest legal posture to adopt.

The 2026 Mandate for Sovereign AI Infrastructure Dubai


So, what does the pivot to 2026 look like?

It looks like Sovereign AI.

Sovereign AI means moving away from generic, one-size-fits-all models and building AI infrastructure that you own, control, and host. It is the transition from consuming intelligence as a service (SaaS) to deploying intelligence as infrastructure (IaaI).

This requires more than just a “Python script.” It requires a fusion of AI/ML development, DevOps security, and MLOps lifecycle management.

Technical Deep Dive: How Unanimous Tech Architects Your Sovereign Future

At Unanimous Technologies, we don’t just build apps; we build the Sovereign AI infrastructure Dubai needs to meet its D33 goals. We help GCC enterprises deploy “Local Brains.” Here is how our technical capabilities deliver the Sovereign AI you need for 2026.

1. The Models: Local Heroes vs. Global Standards

Sovereign AI starts with the model itself. You cannot achieve sovereignty with a closed-source API. We specialize in the selection and deployment of Open-Weights models:

  • The “Local Hero” (Falcon 180B): We specialize in On-Premise Falcon LLM Deployment. Developed right here in the UAE by the Technology Innovation Institute (TII), Falcon 180B is one of the world’s most powerful open models. It is a beast of a model, requiring significant compute, but it offers true sovereignty. We optimize it for enterprise deployment.
  • The Agile Standard (Llama 3): For businesses needing speed and efficiency, Meta’s Llama 3 (70B and 8B versions) offers incredible reasoning capabilities. We bring these models inside your firewall, ensuring Meta never sees your data.
  • Fine-Tuning (PEFT/LoRA): A generic model doesn’t understand UAE Labor Law or Saudi Fintech regulations. We use Parameter-Efficient Fine-Tuning (PEFT) to train these models on your proprietary data (PDFs, SQL databases, internal wikis), creating a model that is an expert in your business.

2. DevOps: The Secure Perimeter

Deploying a Large Language Model (LLM) is not like deploying a website. It requires massive compute resources (GPU Clusters) and distinct security protocols.

  • Containerization (Docker/Kubernetes): We containerize your AI agents, ensuring they run seamlessly on your on-premise servers or private cloud instances (like AWS Outposts or local GCC cloud providers). A minimal manifest sketch follows this list.
  • Air-Gapped Deployments: For our most sensitive government and defense clients, we architect fully air-gapped environments. This means the AI server has literally zero physical connection to the public internet. It is an island of intelligence, completely immune to external cyberattacks.
  • IAM Integration: We integrate the AI directly with your enterprise Identity and Access Management (IAM). If a Junior Associate isn’t allowed to see a confidential “Merger & Acquisition” file, the AI agent won’t summarize it for them. Security is baked into the logic.
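To make the containerization point concrete, here is a minimal, hypothetical sketch of a Kubernetes Deployment for an on-premise LLM server. The image name, registry, port, and GPU count are placeholder assumptions for illustration, not a description of any specific production stack, and GPU scheduling assumes the NVIDIA device plugin is installed on the cluster.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sovereign-llm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sovereign-llm
  template:
    metadata:
      labels:
        app: sovereign-llm
    spec:
      containers:
        - name: llm-server
          # Placeholder image pulled from a private, in-country registry
          image: registry.internal/llm/serving:latest
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: 4  # GPUs scheduled via the NVIDIA device plugin

Because the image comes from a private registry and the pod exposes no public endpoint, a manifest like this works unchanged in an air-gapped cluster.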

3. MLOps & AgentOps: The Brain that Doesn’t Decay

This is the piece most agencies miss. An AI model is not a static piece of code; it is a living system. Without Machine Learning Operations (MLOps), a model’s performance degrades over time (a phenomenon known as “model drift”).

  • Continuous Training Pipelines: We build automated pipelines that allow your Sovereign AI to learn from new data daily. As your business generates new reports, the model updates its knowledge base within your secure environment (see the pipeline sketch after this list).
  • Agent-Ops for 2026: As we move toward Enterprise Agentic AI, we implement specific monitoring to ensure your autonomous agents don’t go rogue.
  • Data Lineage: We create an audit trail, allowing you to trace exactly which document led the AI to a specific conclusion—a requirement for compliance in 2026.
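As a sketch of what a continuous-training pipeline can look like at the orchestration layer, the following hypothetical Kubernetes CronJob launches a nightly fine-tuning run inside the secure environment. The image name, arguments, and mount paths are assumptions for illustration only, not a prescribed implementation.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-finetune
spec:
  schedule: "0 2 * * *"  # run at 02:00 every night
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: finetune
              # Placeholder training image and data paths
              image: registry.internal/mlops/finetune:latest
              args: ["--data", "/mnt/new-reports", "--adapter-out", "/mnt/adapters"]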

The “Agentic” Shift: Moving from Chat to Action

Why is this infrastructure so critical for 2026? Because the market is moving toward Agentic AI.

In 2024, we used AI to write emails. In 2026, we will use AI to send them, book the meetings, update the CRM, and trigger invoices.

This is “Agency.” When you grant an AI the power to take action in your systems, you cannot risk that AI being a “black box” hosted on a shared server. You need total control over its logic and permissions.

Unanimous Tech builds the “Safe Sandbox” environments where these agents operate. By combining DevOps security with MLOps monitoring, we ensure that your autonomous agents operate efficiently without hallucinating or overstepping their boundaries.

The Economic Argument: CapEx vs. OpEx

Many CFOs ask us: “Isn’t building on-premise infrastructure expensive?”

It is true that the upfront Capital Expenditure (CapEx) of buying GPUs or reserving private cloud instances is higher than a $20/month subscription. But the long-term economics of 2026 tell a different story:

  1. Cost Predictability: With public APIs, your costs scale linearly with usage. If your customer base doubles, your AI bill doubles. With Sovereign AI, you own the compute. The marginal cost of the next 1,000 queries is near zero.
  2. Asset Creation: When you fine-tune a model on your data, you are building a proprietary intelligence asset that adds valuation to your company. Renting an API adds no asset value; it is just a utility cost.
  3. Risk Mitigation: The cost of a single data breach or regulatory fine in the GCC significantly outweighs the cost of secure infrastructure. Sovereignty is an insurance policy.

Conclusion: The Choice is Yours

The “Wild West” days of AI are behind us. 2026 demands maturity, security, and sovereignty.

The organizations that will lead the market next year won’t be the ones with the flashiest chatbots—they will be the ones who had the foresight to bring their intelligence in-house. They will be the ones who realized that in the Middle East, data isn’t just an asset—it’s national wealth.

At Unanimous Technologies, we are ready to help you make that transition. We have the AI researchers, the DevOps engineers, and the MLOps architects to build your fortress of intelligence.

Don’t just use AI. Own it.

Ready to Secure Your Infrastructure?

AI Services | Contact Our Strategy Team | Visit Unanimous Tech Home

The Ultimate Guide to Choosing the Right Technology Stack for Your Project: Why MERN Matters

In today’s quick-moving digital environment, the success of a web development project greatly depends on the technology stack it’s built on. The technologies chosen affect not only how well and how much the application can grow but also things like how fast it can be developed, the support you can get from other developers, and how easy it is to keep up over time. Out of many choices, the MERN stack (MongoDB, Express.js, React.js, Node.js) stands out as a powerful option for creating dynamic, high-performing web applications. Unanimous Technologies, with its deep experience in using the MERN stack, presents this comprehensive guide to why MERN is important and why it might be the perfect pick for your upcoming project.

Understanding the MERN Stack

Before we look at the benefits of the MERN stack, let’s understand its parts:


  • MongoDB: This is a type of NoSQL database that’s great for dealing with lots of data. It’s flexible, meaning you can change how data is structured pretty easily.
  • Express.js: This is a server-side framework that works with Node.js. It’s made for creating web applications and APIs fast and without much hassle.
  • React.js: This is a library for JavaScript used on the front end to make user interfaces. It’s especially good for single-page applications (SPAs) and is known for being fast and letting you reuse components.
  • Node.js: This runs JavaScript on the server side, using Chrome’s V8 engine. It helps with writing server-side code and making network applications that can handle many connections at once.

Why MERN Matters


Seamless Full Stack Development

One of the main highlights of the MERN stack is its all-JavaScript setup, which means JavaScript is used throughout the development process, from the front end to the back end. This makes the development process smoother because developers don’t have to switch between languages for different parts of the project. They can stay in the same language context, which boosts productivity and cuts down the time it takes to get a product out to the market.

Robust and Scalable Applications

The MERN stack is designed with performance and the ability to grow in mind. MongoDB’s lack of a fixed schema means it’s more flexible with data, which helps when you need to expand your application. Node.js and Express.js make it possible to build server-side applications that are quick and don’t wait on processes to finish before moving on, which is great for efficiency. React improves the user experience by using a virtual DOM, which makes the interface smooth and quick to respond, even in applications where users do a lot of interacting. This combination ensures that applications can not only perform well from the start but also scale up as needed without major overhauls.

Strong Community Support

Every part of the MERN stack benefits from strong community support, which is vital for fast development and solving issues. Whether it’s figuring out a problem, looking for libraries, or keeping up with new updates, the active communities around MongoDB, Express.js, React.js, and Node.js offer an essential resource for developers. This support can make the development process smoother and quicker, as help and resources are readily available.

Open Source and Cost-Effective

The MERN stack is fully open source, which means using it doesn’t come with any licensing fees. This can greatly lower the total cost of developing a project. Plus, there’s a huge selection of free resources, tools, and libraries available for the MERN stack. These freebies can help reduce costs even more, while giving developers access to strong tools that can make their applications work better and do more.

Future-Proof and Versatile

The MERN stack isn’t only widely used; it’s also geared towards the future. For example, React.js has the support of Facebook, which helps keep it modern and relevant. Node.js is constantly improving, thanks to its popularity among major tech companies. This focus on staying current, along with the stack’s ability to work well for various kinds of web applications, makes MERN a smart option for businesses wanting to put their money into lasting technology.

Choosing MERN for Your Project

When thinking about using the MERN stack for your project, consider these points:

  • Project Requirements: MERN is especially good for single-page applications (SPAs), real-time applications (like chat apps), and projects that need databases that can grow easily.
  • Development Team Expertise: If your team is good at JavaScript or wants to make development simpler by using one main programming language, MERN is a great choice because it’s all JavaScript.
  • Community and Support: MERN has a strong advantage if having a large support network and access to many third-party libraries is key for your project. This makes it a strong option to consider.

MERN in Action: Success Stories

At Unanimous Technologies, we’ve used the MERN stack to complete a variety of successful projects. This includes creating e-commerce platforms that manage millions of transactions and developing real-time communication tools that help teams around the world stay connected. By using MERN, we’ve been able to create solutions that aren’t only strong from a technical standpoint but also meet the strategic goals of our clients.

Conclusion

Choosing the right technology stack is a critical decision that can dictate the success of your web development project. The MERN stack stands out because of its flexibility, performance, and strong community support, making it an attractive choice for various projects. Its unified JavaScript environment makes development more streamlined, while its individual components are tailored for creating modern, scalable web applications. Looking ahead, it’s essential to pick a stack that fits your current needs but can also grow and adapt over time. With MERN, Unanimous Technologies has helped businesses reach and surpass their digital goals, highlighting the stack’s value in the modern development world.

Custom Software Solutions in 2024: Leveraging the Power of Unanimous Technologies

As we are moving ahead, the world of making video games is changing a lot because of new technology, what players want, and new ways to tell stories in games. Unanimous Technologies is right in the middle of these changes, using what we know to help guide us through. Our experts have figured out the main things that are making gaming change and how game makers can use these changes to make really cool and fun games.

These big changes include using VR (virtual reality) and AR (augmented reality) for more realistic game experiences, using smart technology to make game worlds that change based on what players do, making games that everyone can play together no matter what device they’re using, making it easier for people to play games without needing expensive equipment, and making sure games tell stories that include everyone. By paying attention to these trends, game creators can make games that are exciting and new for players.

The Dawn of a New Era in Gaming

The gaming industry is experiencing a significant transformation, driven by technological progress that surpasses the conventional boundaries of gaming. Virtual Reality (VR), Augmented Reality (AR), Artificial Intelligence (AI), and the Internet of Things (IoT) are more than trendy terms; they are the foundation of a new phase in gaming. These technologies empower developers to craft more engaging and interactive experiences, diminishing the distinction between virtual and actual realities.

Virtual and Augmented Realities: Immersion Redefined

VR and AR technologies are leading the change in how we experience games. Unanimous Technologies is at the cutting edge, creating games that use VR and AR to take players into worlds that are rich in detail and interaction. These technologies have a huge potential to make stories more engaging and to draw players in like never before, offering an unmatched level of immersion. 

As we move into 2024, we anticipate a surge in VR and AR gaming experiences, driven by advancements in hardware and software that make these technologies more accessible and compelling.

Artificial Intelligence: The Backbone of Dynamic Gameplay

AI is changing the way games are made, introducing innovative methods to craft game environments that are dynamic and react to players. At Unanimous Technologies, we use AI to create stories that adapt to player choices, smart non-player character (NPC) actions, and gaming experiences that are tailored to each player.


AI’s impact goes further than just what happens in the game; it also helps in analyzing data, predicting how players will act, and automatically creating game content. This doesn’t just make games more enjoyable to play; it also makes them easier to develop, leading to games that are more complex and engaging.

The Rise of Cross-Platform Play

The lines between different gaming platforms are fading. Players now expect to be able to play games across multiple devices without issues, highlighting the importance of creating games that work smoothly on any device. Unanimous Technologies is leading this trend by making games that let players come together, compete, and work with each other, no matter what platform they’re using. This strategy doesn’t just attract more people to our games; it also helps build a gaming community that welcomes everyone.

Leveraging Analytics for Player-Centric Development

Knowing how players act and what they like is key to making games that really connect with them. Unanimous Technologies uses advanced analytics to learn about how players interact with games, what they prefer, and what problems they encounter. This approach, based on data, allows us to adjust how games play, set the right difficulty levels, and customize content for different groups of players. This way, we can keep players more engaged and make sure they keep coming back.

Ethical Monetization Strategies

In a field sometimes known for pushy ways of making money, Unanimous Technologies supports fair monetization strategies that take care of players’ experiences while also maintaining the company’s health. We opt for things like purchases, battle passes, or optional subscriptions, aiming to give players real value in ways that make them want to spend without hurting the quality of the game.

Cloud Gaming: A Gateway to Universal Access

Cloud gaming is changing the game in terms of how video games are shared and played, aiming to make gaming accessible to everyone. With cloud gaming, games are streamed straight to devices, allowing players to enjoy top-notch gaming without needing costly equipment. Unanimous Technologies is looking into how cloud gaming can help our games reach more people, breaking down the usual obstacles that stop people from getting into gaming and making the gaming community larger.


Nurturing a Vibrant Gaming Community

The gaming industry revolves around its community. Unanimous Technologies sees gaming as a powerful way to unite people, bridging distances and cultural differences. We make a point of connecting with our community by including player suggestions in our game development, backing esports, and applauding content made by users. This cooperative spirit doesn’t just improve our games; it also tightens the connections within the gaming community.

Conclusion: Shaping the Future of Gaming

As we look towards the future of making games, the advice from experts at Unanimous Technologies shows us a way forward that’s all about new ideas, including everyone, and really valuing the player’s experience. By welcoming new tech, making games that work across all platforms, using data to focus on what players want, choosing fair ways to make money, seeing what cloud gaming can do, and keeping our gaming community lively, we’re not just getting ready for the future—we’re helping to create it.

The road ahead will have its ups and downs, but with our strong dedication to expanding what games can be, Unanimous Technologies is ready to lead. The future of gaming is full of endless possibilities, and together with our players and partners, we’re on a mission to explore, create, and change the gaming world. Come with us on this exciting adventure as we pave new ways and craft experiences that thrill, motivate, and bring us together.

Deploying frontend of an application using Google Cloud Run and Cloud Build from remote repository

Introduction

In this documentation, we’ll deploy our application’s frontend code, which lives in a GitHub repository, using two services of GCP (Google Cloud Platform): first Google Cloud Run, and then Google Cloud Build, where we’ll configure a trigger so that whenever code is pushed to the branch defined in the trigger, the application running on the internet is automatically updated.


Google Cloud Run 


Google Cloud Run is a serverless computing platform on Google Cloud Platform (GCP) that allows developers to easily deploy and manage containerized applications without the need to manage servers or infrastructure. With Cloud Run, developers can package applications in Docker containers, deploy quickly, and scale automatically as traffic arrives. Key features of Google Cloud Run include auto-scaling, pay-as-you-go pricing, and support for popular programming languages and frameworks. Designed for microservices-based architectures, it lets developers focus on writing code rather than managing infrastructure. Google Cloud Run simplifies the deployment and scaling of containerized applications, making the process simple and efficient for a variety of applications.

Google Cloud Build


Google Cloud Build is a continuous integration/continuous deployment (CI/CD) service provided by Google Cloud Platform (GCP). It automates and simplifies the software development process by allowing developers to build, test, and distribute their code consistently. Cloud Build is designed to work seamlessly with popular version control systems such as Git, GitHub, and Bitbucket, and supports multiple programming languages and development tools. Key features include build pipelines, autoscaling for faster execution, and integration with other GCP services such as Kubernetes Engine and App Engine. Cloud Build improves performance, raises code quality, and accelerates the delivery of software applications, making it a valuable part of everyday software development.

Steps to implement:

  1. Create a GitHub repository:

Create a Git repository and push your code for the frontend of your application there.

  2. Now go to the GCP console by visiting https://cloud.google.com, log in to your account, and select the project where you need to deploy the application.
  3. Now type Cloud Run in the search box of your GCP console and click on it; it will open the Cloud Run console.
  4. Now click on CREATE SERVICE. This will open the service configuration window for Cloud Run.
  5. Click on the TEST WITH A SAMPLE CONTAINER option for the first time, so that it automatically selects the hello image provided by default by Cloud Run to test the service. After that, give a suitable Service Name for your deployment and select the region in which you want to deploy your Cloud Run service.
  6. Select “CPU is only allocated during request processing” in the CPU allocation and pricing section.
  7. Under the GCP free tier, the first 180,000 vCPU-seconds/month, the first 360,000 GiB-seconds/month, and 2 million requests/month won’t be chargeable to the user.
  8. In the Autoscaling section, keep the minimum number of instances at 0 and the maximum number of instances at 100.
  9. In the Ingress control section, select the ALL option so that your service is accessible over the internet.
  10. Now in the Authentication section, select Allow unauthenticated invocations; select this option only if you wish to make your API or website public.
  11. Now click on the Container, Networking, Security section; here you can select the container port, the capacity of the container, the max timeout, and the concurrent requests per second. By default they are set to a max timeout of 300 seconds and 80 concurrent requests per second.
  12. In the Execution environment section, select the default option.
  13. Once the service is created using the default hello container image, you will see a dashboard for this Cloud Run service.
  14. Here we can see an auto-generated URL for this Cloud Run service; if we click on this URL, it redirects to the hello container and displays its successful-deployment message in our browser.
  15. Now, to use the Cloud Build service, click on the SET UP CONTINUOUS DEPLOYMENT option.
  16. The Set up with Cloud Build dialog box will open, where you can authenticate your GitHub account and then select your repository and the branch from which your deployment should be picked whenever a push is made to the code on that branch. Once created, a trigger will be attached to this Cloud Run service.
  17. Now type Cloud Build in the search box of your GCP console and click on it.
  18. Once Cloud Build is open, go to TRIGGERS and click on it.
  19. Select the trigger associated with your Cloud Run service and click on the EDIT option.

It will open the Edit Trigger dialog box, where you will be able to see a short description of your trigger and the repository it is linked to.

  20. Now create a cloudbuild.yaml, which instructs Cloud Build, while reading your repository, on what steps should be followed to build this code and deploy it to Cloud Run.

steps:
  # Build the Docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-t', 'gcr.io/$PROJECT_ID/<yourcloudrunservicename>:$SHORT_SHA',
      '--build-arg', 'GENERATE_SOURCE_MAP=${_GENERATE_SOURCE_MAP}',
      '--build-arg', 'REACT_APP_MODULE_NAME=${_REACT_APP_MODULE_NAME}',
      '--build-arg', 'REACT_APP_API_URL=${_REACT_APP_API_URL}',
      '--build-arg', 'SKIP_PREFLIGHT_CHECK=${_SKIP_PREFLIGHT_CHECK}',
      '.'
    ]

  # Push the Docker image to Google Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/<yourcloudrunservicename>:$SHORT_SHA']

  # Deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', '<yourcloudrunservicename>', '--image', 'gcr.io/$PROJECT_ID/<yourcloudrunservicename>:$SHORT_SHA', '--platform', 'managed', '--region', '<your-region>', '--allow-unauthenticated', '--port', '8080', '--timeout', '900']

Save this file as cloudbuild.yaml and push it to your remote repository. Then come back to the Edit Trigger option, scroll down to the Configuration section, and select Type as “Cloud Build configuration file” and Location as “Repository”.
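A side note on the ${_…} variables used in the file above: these are user-defined Cloud Build substitutions (user-defined names must begin with an underscore). Their values can be supplied in the trigger settings, or default values can be declared in cloudbuild.yaml itself via a top-level substitutions block. A minimal sketch; the values below are placeholders, not recommendations:

substitutions:
  _GENERATE_SOURCE_MAP: "false"
  _REACT_APP_MODULE_NAME: "my-module"
  _REACT_APP_API_URL: "https://api.example.com"
  _SKIP_PREFLIGHT_CHECK: "true"

Values set on the trigger override these file-level defaults at build time.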

  21. Now scroll down to the Environment variables section, add the variables that are required for your application, and click on Save.
  22. You have now successfully configured the continuous deployment setup; whenever code is pushed, it will automatically fetch the changes from your source repository, build it, and deploy it to your Cloud Run service.
  23. Once the build is completed, you can visit your Cloud Run auto-generated URL; you will be able to see your application’s frontend deployed and accessible through the Cloud Run service URL.

So, in this documentation we deployed our application’s frontend using the Cloud Run and Cloud Build services of GCP (Google Cloud Platform).

Deploy Angular App from GitHub to AWS with CodeBuild & CloudFront

Introduction

This blog goes into the art of smooth Angular application deployment on AWS. We’ll demonstrate the potential of automation by using AWS CodeBuild to build the app and AWS S3 to host it. Not only that, but we’ll up your deployment game by distributing your software over AWS CloudFront. With this configuration, developers can concentrate entirely on code while the infrastructure handles deployment seamlessly. Let’s get started on this quest to make Angular app deployment on AWS easier.

Configuration steps:

  1. Set up a GitHub repository for your Angular application with all the necessary files.
  2. Now create an S3 bucket in which the artifacts of the code will be stored after being built by AWS CodeBuild, and from which they will then be distributed via the CloudFront URL.
  3. Open your AWS Management Console after logging in to your Amazon Web Services account with your IAM user.
  4. Go to the search bar and type S3; if you click on it you’ll be redirected to the S3 dashboard.
  5. Click on the Create bucket button. In the general configuration, give a unique name to your bucket and select the AWS region for your bucket, which in our case will be ap-south-1 (Mumbai).
  6. In the next steps, keep the options for ACL and Block Public Access at their defaults, and only enable Bucket Versioning for your bucket.
  7. Keep the encryption of the objects the same as the bucket encryption. By default the bucket is encrypted with Amazon SSE-S3 encryption.
  8. Click on the Create bucket icon.
  9. Once the bucket is created, we need to enable static website hosting. For that, go to the Properties of your bucket, enable Static website hosting, and set index.html as the index document and the error page.
  10. Now create a CodeBuild project to set up the connection between our GitHub repository and the Amazon S3 bucket for storing the artifacts.
  11. Go to the search box of your Amazon console, type CodeBuild, and click on it.
  12. Once clicked, it will redirect you to the CodeBuild dashboard. Click on Create build project.
  13. In the Project configuration block, add a name and a description for your project.
  14. In the Source block, select the provider and authenticate, and then choose the repository in which your Angular application and the necessary files are present.
  15. In the Primary source webhook events block, check the “Rebuild every time a code change is pushed to this repository” option. Select Build type as Single build and Webhook event type as PUSH.
  16. In the Environment block, we define the variables and values that our code will require and the image or OS it will use to build the application. A CodeBuild service role is created at this block in order to configure the IAM access for these operations.
  17. Next is the Buildspec block. In this block you have to point to a buildspec.yml file that should be created in the root of your repository.
  18. Here is an example of the buildspec.yml file for an Angular application:

version: 0.2

phases:
  install:
    runtime-versions:
      nodejs: 16
    commands:
      - echo Installing source NPM dependencies...
      - npm install -g @angular/cli@15.2.5
      - npm install
  build:
    commands:
      - echo Build started for $BUILD_ENV on `date`
      - npm run build
      - echo Build completed...
  post_build:
    commands:
      - echo Post-build started on `date`
      - aws s3 sync ./dist s3://$BUCKET_NAME
      - echo Post-build completed on `date`

artifacts:
  files:
    - dist/**/*
  discard-paths: yes
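One optional refinement, since this bucket will be fronted by CloudFront later in this guide: cached copies of old files can keep being served after a deploy. Below is a hedged sketch of a post_build phase that also invalidates the distribution; $CLOUDFRONT_DIST_ID is a hypothetical environment variable you would add in the Environment block, and the CodeBuild service role would need the cloudfront:CreateInvalidation permission.

  post_build:
    commands:
      - aws s3 sync ./dist s3://$BUCKET_NAME
      # Hypothetical: invalidate cached objects so the new build is served immediately
      - aws cloudfront create-invalidation --distribution-id $CLOUDFRONT_DIST_ID --paths "/*"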

  19. Now in the Artifacts block, select the location and the bucket where you want to store the artifacts once they are built by CodeBuild.
  20. Click on Create build project.
  21. Once the build project is created, click on Start build and observe the workings of the build process through the logs and phase details. A successful build means the artifacts have been copied to your S3 bucket.
  22. Next, create a CloudFront distribution for the S3 bucket.
  23. Go to the search box in your AWS Management Console, type CloudFront, and click on it; it will redirect you to the CloudFront console.
  24. Now click on Create a CloudFront distribution.
  25. Now in the Origin block, enter the AWS origin, which in our case will be our S3 bucket.
  26. Now in the Default cache behavior block, select the path pattern as default, Compress objects as Yes, Viewer protocol policy as Redirect HTTP to HTTPS, allowed HTTP methods as GET, HEAD, Restrict viewer access as No, and Cache policy as CachingOptimized.
  27. In the Web Application Firewall block, click on “Do not enable security protections” for now.
  28. In the Settings block, select the “Use all edge locations” option for the price class for best performance, and add a CNAME if an alternate domain and an SSL certificate are available. HTTP/2 and HTTP/3 can both be used. Now click on Create distribution.
  29. It will generate an IAM policy for the bucket, which can be copied into the bucket policies.
  30. Now hit the CloudFront URL and you’ll be able to see your Angular application.
Locust Load Test Setup with GKE Google Kubernetes Engine

Introduction

In this documentation, we will configure a Locust load test and generate a Locust URL by deploying it inside a pod on a Google Kubernetes Engine (GKE) Autopilot cluster, driven by a simple Python script that is executed inside the pod when it is created.

Key Points for Setup:

  1. Locust: a brief introduction
  2. Python script for locust
  3. Dockerfile for building the image for locust setup.
  4. Register image in GCR
  5. GKE cluster autopilot setup
  6. Configuration and creation of deployment and service manifest  
  7. Testing and verifying the URL for locust

Locust


Locust is an open-source load testing tool used to measure the performance and scalability of web applications. It simulates virtual users, called locusts, to perform tasks like making requests and processing responses. It allows customization through Python code and supports distributed testing across multiple machines. Locust provides real-time monitoring and reporting of performance metrics, helping identify bottlenecks and optimize system capacity. Overall, it helps ensure applications can handle heavy traffic and perform well under stress.

Python script for locust

from locust import HttpUser, task, between

class MyUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks
    wait_time = between(1, 5)

    @task
    def my_task(self):
        # Replace with the path of the endpoint you want to load test
        self.client.get("/path/to/the/web")

Save this script as my_locust_script.py, the name referenced in the Dockerfile below.

Dockerfile for building the image for locust setup

  1. Create a Dockerfile:

FROM python:3.9

# Copy the locust script into the image
COPY my_locust_script.py /locust/

WORKDIR /locust

RUN pip install locust

# Locust's web UI listens on port 8089 by default
EXPOSE 8089

CMD locust -f my_locust_script.py

  2. Build a Docker image using this Dockerfile by executing the following command:

docker build -t <image-name> .

Registering the image to GCR (Google Container Registry)

  1. Configure gcloud for docker 

gcloud auth configure-docker

  2. Tag the image for the GCR repository (source image first, then the target tag):

docker tag <image-name> gcr.io/[PROJECT_ID]/<image-name>:latest

  3. Push the image to the GCR repository:

docker push gcr.io/[PROJECT_ID]/<image-name>:latest

GKE autopilot cluster setup

In this section we will be creating an Autopilot cluster in the Google Kubernetes Engine. Following these steps will set up the Autopilot cluster:

  1. Go to https://cloud.google.com, search for Kubernetes Engine, and hit enter; it will redirect you to the Google Kubernetes Engine page.
  2. Click on Create. By default, GKE offers to create an Autopilot cluster so that most things inside the cluster are managed by the Kubernetes engine itself; if you want to manage everything yourself, you can go for the Standard method. Here we choose the Autopilot method.
  3. Name your cluster and select your region.
  4. After naming your cluster and selecting the region, click Next for the networking configuration and choose the network and the mode of the cluster, whether it is going to be a public or a private cluster; in our case it will be a public cluster. Then click Next for Advanced settings.
  5. Now select the cluster release channel; we choose the Regular channel, which is the default option.
  6. Review the configuration once and then click on Create cluster. It will take some time to create the cluster, so we’ll have to wait until it gets provisioned.

Configuration and creation of deployment and service manifest files inside the cluster
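The original post ends before this section is filled in, so what follows is a minimal sketch of the promised manifests, assuming the image pushed to GCR earlier and Locust’s default web UI port of 8089; names like locust-master and locust-web are placeholders. Save it as, say, locust.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: locust
  template:
    metadata:
      labels:
        app: locust
    spec:
      containers:
        - name: locust
          image: gcr.io/[PROJECT_ID]/<image-name>:latest
          ports:
            - containerPort: 8089  # Locust web UI
---
apiVersion: v1
kind: Service
metadata:
  name: locust-web
spec:
  type: LoadBalancer
  selector:
    app: locust
  ports:
    - port: 8089
      targetPort: 8089

Apply it with kubectl apply -f locust.yaml, wait for kubectl get service locust-web to show an EXTERNAL-IP, and open http://<EXTERNAL-IP>:8089 in a browser to reach the Locust UI and start the test; this covers the final verification step listed in the key points above.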

Using Google Managed SSL Certificates with GKE Ingress Controller

Google-managed SSL certificates on Google Cloud Platform (GCP) are a service that secures your website or app with SSL or TLS encryption. The service offers automatic certificate management to ensure secure and reliable data transmission for your website worldwide.

These certificates integrate with numerous GCP services, undergo automatic renewal to prevent outages, and prove highly beneficial. Configuring SSL/TLS encryption is straightforward and offers easy accessibility, even for users without a background in security.

Basically, Google-managed SSL certificates on GCP simplify and improve website security while streamlining the certificate management process, ensuring a smooth and reliable experience.

  1. You should own your domain in order to point the load balancer to your hosting name.
  2. Reserve an External IP address in VPC:
  3. Go to VPC Services and click on “IP Address”.
  4. Now click on Reserve External IP address.
  5. Type a name for this IP and provide a short description.
  6. Select Global in the TYPE section and click on Reserve.

Now, assign this IP to your subdomain by creating an A record in your DNS zone. Use your desired domain name and point the Reserved External IP to this domain.
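If that DNS zone happens to be managed by Cloud DNS, the A record can also be created from the CLI; a hedged sketch, with the zone name, domain, and TTL as placeholders (note the trailing dot on the fully qualified domain name):

gcloud dns record-sets create <your-domain>. --zone=<your-zone-name> --type=A --ttl=300 --rrdatas=<your-reserved-ip>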

  7. Go to the GKE cluster and connect through Cloud Shell by typing the gcloud CLI command to authenticate into your GKE cluster:

gcloud container clusters get-credentials <your-cluster-name> --region <your-cluster-region> --project <your-project-name>

  8. Now create a managed certificate YAML manifest file, using any text editor, in order to create an SSL certificate.

apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
    - <your-hosting-domain>

Save this file as managed-cert.yaml

  9. Now execute this YAML file using the kubectl command:

kubectl apply -f managed-cert.yaml

Now wait for some time to get this certificate provisioned and the status to turn to Active.

  10. To check whether the certificate is active or not, type the following command:

kubectl get managedcertificate <your-cert-name> -n <your-namespace>

  11. Now create an ingress file, name it “managed-cert-ingress.yaml”, and structure its content like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joyscore-ingress
  namespace: joyscore
  annotations:
    networking.gke.io/managed-certificates: "<your-cert-name>"
    kubernetes.io/ingress.global-static-ip-name: "<your-reserved-static-external-ip-name>"
spec:
  rules:
    - host: stgapi.joyscore.dev
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: exp-gateway
                port:
                  number: 8080

  12. Now execute it using the kubectl command:

kubectl apply -f managed-cert-ingress.yaml

  13. Now check the status of these resources through the following commands:
  • For checking the managed certificate:

kubectl describe managedcertificate -n <your-namespace>

  • For checking the ingress controller service:

kubectl describe ingress <ingress-name> -n joyscore

  • For checking the ingress:

kubectl get ingress -n <your-namespace> 


Now if you go to the browser and hit the URL you have pointed at this ingress, it will display the page with a valid HTTPS SSL certificate.


So, in this blog, we have learned how to use a Google-managed SSL certificate with the GKE Ingress controller.
