AI Security Specialist
We are hiring an AI Security Specialist to join our Network & Infrastructure Security team. As Plata adopts AI across product features, developer workflows, and internal operations, this role exists to make that adoption safe by design.
You will own the AI Security program at Plata, partnering with product, infrastructure, and ML teams. The role is hands-on (you will build, integrate, and operate controls) and advisory (you will shape policy and educate teams) in roughly equal measure. A strong DevSecOps foundation is required, but day-to-day work is AI-first.
Challenges that await you:
AI Security Program & Guidelines. Define and maintain Plata's AI security standards: acceptable-use policies for AI dev tools (Copilot, Cursor, Claude Code, ChatGPT), secure development guidelines for LLM-powered features, and data-handling rules for AI workloads. Keep guidelines current as the landscape evolves
Securing Customer-Facing AI Features. Threat-model and review designs for LLM-powered product features. Implement and validate defenses against prompt injection, prompt leakage, output-handling vulnerabilities, abuse and content-safety risks, and authorization bypass through AI surfaces. Integrate guardrails into CI/CD
Securing Internal AI Tools, Agents & Dev Tooling. Set access boundaries and data-scope policies for internal LLM apps, RAG systems, and agentic workflows, with audit logging and human-in-the-loop checkpoints for sensitive actions. Maintain the approved list of AI dev tools, configure data-leakage controls, run security reviews before adoption, and educate engineers on safe-use patterns
AI Vendor & Supply Chain Risk. Evaluate AI vendors and model providers. Review DPAs, data residency, retention, model provenance, and supply-chain risks for AI models and libraries. Maintain the AI vendor inventory
Threat Modeling, Detection & IR. Run threat models for new AI features and integrations, provide concrete recommendations, track remediation, and maintain an AI risk register. Partner with the SOC team to define AI-related security events
Collaboration, Education & Advocacy. Act as the in-house AI security expert. Train developers and product teams. Stay current on the AI security landscape (OWASP LLM Top 10, MITRE ATLAS, emerging research) and turn that into actionable internal guidance
What makes you a great fit:
AI Security (must-haves — this is the day-to-day)
2+ years total in DevSecOps, Cloud / Infrastructure Security, or Application Security, with demonstrable hands-on AI security work (production projects, contributions, research, or a credible self-driven portfolio).
Working knowledge of LLM application security: prompt injection (direct and indirect), output handling, jailbreaks, data leakage through model context, abuse and content-safety risks. Familiarity with OWASP LLM Top 10 and MITRE ATLAS.
Hands-on experience with LLM APIs and agent or orchestration frameworks. Understanding of tool use, RAG, and agent-permission models.
Practical experience designing or evaluating AI guardrails (input/output filtering, allow-lists, rate limits, structured-output enforcement, evaluation harnesses).
Ability to threat-model AI systems — reason about data flow, trust boundaries, model and agent capabilities, and where injection or data leakage can occur. Translate threat models into concrete controls.
Strong scripting skills in Python (the LLM ecosystem default) for building POCs, integrations, and guardrails.
DevSecOps Baseline (required but not the day-to-day)
Solid understanding of AWS security fundamentals (IAM, VPC, S3, KMS, Secrets Manager, CloudTrail). Senior-level depth is not required; you need enough to secure AI workloads and converse credibly with infrastructure teams
Working knowledge of Kubernetes security at the deployment level (network policies, secrets, RBAC, image scanning). You do not need to operate clusters
CI/CD pipeline experience (GitLab CI preferred): integrating security checks, managing secrets, OIDC for cloud authentication
Familiarity with container security (image scanning, supply-chain integrity, SBOMs)
Comfort with IaC (Terraform) at a review-and-recommend level
Understanding of common AppSec vulnerabilities (OWASP Top 10) and secure-coding fundamentals
Your bonus skills:
AI red-teaming experience — hands-on jailbreaking and prompt-injection testing
LLM security tooling — production or POC experience with Promptfoo, NeMo Guardrails, Lakera Guard, Protect AI, or similar
AI governance frameworks — NIST AI RMF, ISO/IEC 42001, EU AI Act, OWASP MLSec Top 10
Fintech compliance context — PCI DSS, GDPR, SOC 2; experience translating compliance into engineering controls
Public contributions to AI security — published research, CTFs, OSS contributions, conference talks, or blog write-ups
Service mesh (Istio) for securing traffic between AI services
Certifications — CKS, AWS Security Specialty, OSCP, CISSP, GCSA, or vendor AI-security courses and certifications as they emerge
Our ways of working:
Innovative Spirit: a commitment to creativity and groundbreaking solutions
Honest Feedback: valuing open, transparent communication
Supportive Team: a strong, collaborative community
Celebrating Achievements: recognizing our wins together
High-Tech Environment: a team full of smart, revolutionary people who dare to challenge the status quo of incumbent finance
Our benefits:
Relocation support to one of our hubs — Cyprus, Spain, Serbia, or Kazakhstan — with assistance for the employee and their family
Flexible work from one of our offices or remotely
Healthcare Coverage
Education Budget: Language lessons, professional training and certifications
Wellness Budget: Mental health and fitness activity reimbursements
Vacation Policy: 20 days of annual leave and paid sick leave
Published on: 5/14/2026

Plata Card
Plata is an AI-powered tech platform with a mission to simplify and redefine the financial experience in Mexico.