Security Basics
Security isn't optional — it's a fundamental responsibility of every developer. When you use AI to generate code, you need to understand the security implications of what you're deploying.
Authentication
Authentication verifies who a user is. AI-generated authentication code often has subtle flaws.
What to Check
- Password hashing — Are passwords hashed with bcrypt, Argon2, or scrypt? Never store plain-text passwords.
- Rate limiting — Are login attempts limited to prevent brute force attacks?
- Session management — Are sessions secure, with proper expiration and rotation?
- Multi-factor authentication — Is MFA available for sensitive operations?
```python
# GOOD: Proper password hashing
from werkzeug.security import generate_password_hash, check_password_hash

# Note: Werkzeug does not support a 'bcrypt' method; its supported methods
# are 'scrypt' (the default) and 'pbkdf2'. For bcrypt, use the bcrypt library.
hashed_password = generate_password_hash(password, method='scrypt')
# Store hashed_password in the database

# To verify:
if check_password_hash(stored_hash, provided_password):
    ...  # Password is correct

# BAD: Plain-text passwords (common in AI-generated code)
# NEVER do this:
user.password = password  # Storing plain text!
```
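The session-management bullet above deserves its own illustration. As a minimal sketch using only the standard library (real frameworks expose these as configuration settings), a session cookie should carry the `HttpOnly`, `Secure`, and `SameSite` attributes plus an expiration:

```python
# Hypothetical sketch: building a session cookie with secure attributes.
# The function name and cookie value are illustrative, not a real API.
from http.cookies import SimpleCookie

def make_session_cookie(session_id: str) -> str:
    cookie = SimpleCookie()
    cookie["session"] = session_id
    cookie["session"]["httponly"] = True   # not readable from JavaScript
    cookie["session"]["secure"] = True     # only sent over HTTPS
    cookie["session"]["samesite"] = "Lax"  # basic CSRF mitigation
    cookie["session"]["max-age"] = 1800    # 30-minute expiration
    return cookie["session"].OutputString()

header = make_session_cookie("abc123")
print(header)
```

AI-generated code frequently omits `HttpOnly` and `Secure`, leaving sessions readable by injected scripts and exposed over plain HTTP.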
Authorization
Authorization controls what an authenticated user can do. This is where AI-generated code most commonly fails.
What to Check
- Ownership checks — Can a user access resources that belong to another user?
- Role-based access — Are admin-only actions properly restricted?
- IDOR vulnerabilities — Are users able to access other users' data by changing IDs?
```javascript
// GOOD: Ownership check
app.get('/api/orders/:id', async (req, res) => {
  const order = await db.orders.findById(req.params.id);
  if (!order) {
    return res.status(404).json({ error: 'Not found' });
  }
  // Verify the order belongs to the authenticated user
  if (order.userId !== req.user.id && req.user.role !== 'admin') {
    return res.status(403).json({ error: 'Forbidden' });
  }
  res.json(order);
});

// BAD: No ownership check (common AI mistake)
app.get('/api/orders/:id', async (req, res) => {
  const order = await db.orders.findById(req.params.id);
  res.json(order); // Any user can access any order!
});
```
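The role-based-access bullet can be factored out of individual handlers. As an illustrative, framework-independent sketch (the `User` type and `require_role` helper are hypothetical names, not a real API):

```python
# Hypothetical sketch of a centralized role check.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str

class Forbidden(Exception):
    pass

def require_role(user: User, *allowed: str) -> None:
    """Raise Forbidden unless the user holds one of the allowed roles."""
    if user.role not in allowed:
        raise Forbidden(f"role '{user.role}' may not perform this action")

admin = User(id=1, role="admin")
viewer = User(id=2, role="viewer")
require_role(admin, "admin")       # passes silently
try:
    require_role(viewer, "admin")  # raises Forbidden
except Forbidden as exc:
    print(exc)
```

Centralizing the check means a forgotten guard fails closed in code review, instead of silently shipping an endpoint anyone can call.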
SQL Injection
SQL injection occurs when user input is directly concatenated into SQL queries. AI models frequently generate vulnerable code.
What to Check
- Parameterized queries — Are all database queries using parameterized statements?
- ORM usage — Is the ORM being used correctly (not with raw string interpolation)?
- Stored procedures — Are they properly parameterized?
```python
# GOOD: Parameterized query
cursor.execute(
    "SELECT * FROM users WHERE email = %s",
    (user_email,)
)

# BAD: String interpolation (vulnerable to SQL injection)
cursor.execute(
    f"SELECT * FROM users WHERE email = '{user_email}'"
)
```
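Placeholder syntax varies by driver (`%s` above is psycopg2-style; the standard library's `sqlite3` uses `?`). Here is a self-contained demonstration that a parameterized query treats an injection payload as inert data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

# Attacker-controlled input that would dump every row if concatenated:
user_email = "' OR '1'='1"

# The placeholder binds the whole string as data, not SQL:
rows = conn.execute(
    "SELECT * FROM users WHERE email = ?", (user_email,)
).fetchall()
print(rows)  # [] - the payload matches nothing
```

Had the query been built with an f-string instead, the same input would have returned every row in the table.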
Cross-Site Scripting (XSS)
XSS allows attackers to inject malicious scripts into web pages viewed by other users.
What to Check
- Output encoding — Is user-generated content properly escaped when rendered?
- Content Security Policy — Is CSP configured to restrict script sources?
- Input sanitization — Are inputs sanitized before storage?
```jsx
// GOOD: Use the framework's built-in escaping
// React auto-escapes by default
function UserComment({ comment }) {
  return <div>{comment}</div>; // Safe - React escapes this
}

// BAD: dangerouslySetInnerHTML without sanitization
function UserComment({ comment }) {
  return <div dangerouslySetInnerHTML={{ __html: comment }} />; // XSS risk!
}
```
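Outside a framework that escapes for you, output encoding means converting HTML metacharacters to entities before rendering. Python's standard library does this with `html.escape`:

```python
import html

# A typical stored-XSS payload submitted as a "comment"
comment = '<script>alert("xss")</script>'

# Escaping turns markup into inert text before it reaches the page
safe = html.escape(comment)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The browser displays the angle brackets as literal characters instead of executing a script element.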
Cross-Site Request Forgery (CSRF)
CSRF tricks authenticated users into performing actions they didn't intend.
What to Check
- CSRF tokens — Are state-changing requests protected with CSRF tokens?
- SameSite cookies — Are cookies configured with SameSite=Strict or SameSite=Lax?
- Origin/Referer validation — Are requests validated against expected origins?
```javascript
// GOOD: CSRF protection with tokens
fetch('/api/delete-account', {
  method: 'POST',
  headers: {
    'X-CSRF-Token': csrfToken,
    'Content-Type': 'application/json'
  },
  credentials: 'include'
});
```
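The server side of that token scheme can be sketched with the standard library alone (function names here are illustrative, not a specific framework's API): issue an unguessable token, store it in the session, and compare it to the request header in constant time.

```python
import hmac
import secrets

def issue_csrf_token() -> str:
    # Unguessable token, stored server-side in the user's session
    return secrets.token_urlsafe(32)

def verify_csrf_token(session_token: str, request_token: str) -> bool:
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(session_token, request_token)

token = issue_csrf_token()
print(verify_csrf_token(token, token))           # True
print(verify_csrf_token(token, "forged-token"))  # False
```

A cross-site attacker can make the victim's browser send the request, but cannot read the token out of the session to include in the header.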
Rate Limiting
Rate limiting prevents abuse of your API endpoints.
What to Check
- Login endpoints — Limited to prevent brute force
- API endpoints — Limited to prevent abuse
- File uploads — Limited to prevent storage exhaustion
```javascript
// GOOD: Rate limiting with express-rate-limit
const rateLimit = require('express-rate-limit');

const loginLimiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 5, // 5 attempts per minute
  message: 'Too many login attempts, please try again later'
});

app.post('/api/login', loginLimiter, (req, res) => {
  // Login logic
});
```
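The mechanism behind such middleware is simple enough to sketch directly. Here is a minimal in-memory sliding-window limiter (illustrative only; production systems typically use a shared store such as Redis so limits hold across processes):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding window: at most `limit` calls per `window` seconds per key."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent calls

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Evict timestamps that have aged out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = RateLimiter(limit=5, window=60.0)
results = [limiter.allow("1.2.3.4", now=0.0) for _ in range(6)]
print(results)  # five True, then False
```

Keying on the client identity (IP, user ID, or API key) matches the per-endpoint limits listed above.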
Password Storage
What to Check
- Hashing algorithm — Use bcrypt, Argon2, or PBKDF2
- Salt — Are passwords salted automatically? (Modern libraries do this.)
- Cost factor — Is the work factor high enough (bcrypt cost >= 10)?
```python
# GOOD: bcrypt with appropriate cost
import bcrypt

password = b"secure_password"
salt = bcrypt.gensalt(rounds=12)  # Cost factor of 12
hashed = bcrypt.hashpw(password, salt)
```
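If bcrypt is unavailable, the checklist's PBKDF2 option is in the standard library. A sketch with `hashlib.pbkdf2_hmac` (the 600,000-iteration figure follows current OWASP guidance for PBKDF2-SHA256; tune it to your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # work factor: higher = slower brute force

def hash_password(password: str):
    salt = os.urandom(16)  # fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest   # store both alongside the user record

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("secure_password")
print(verify_password("secure_password", salt, digest))  # True
print(verify_password("wrong", salt, digest))            # False
```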
API Secrets and Environment Variables
What to Check
- No hardcoded secrets — API keys, passwords, and tokens should never be in code
- Environment variables — Use .env files or secret management services
- .gitignore — Ensure .env files are never committed to version control
```javascript
// GOOD: Environment variables
const apiKey = process.env.STRIPE_SECRET_KEY;
const dbPassword = process.env.DB_PASSWORD;

// BAD: Hardcoded secrets
const apiKey = 'sk_live_abc123def456'; // Never do this!
```
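A related habit worth adopting: fail fast at startup when a required secret is missing, rather than crashing later mid-request. An illustrative sketch (`require_env` is a hypothetical helper; the variable is set inline here only for demonstration — in practice it comes from deploy tooling or a secret manager):

```python
import os

def require_env(name: str) -> str:
    """Return a required environment variable, or fail at startup."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

os.environ["STRIPE_SECRET_KEY"] = "sk_test_example"  # normally set by the deploy environment
api_key = require_env("STRIPE_SECRET_KEY")
```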
File Upload Validation
What to Check
- File type validation — Check MIME type, not just extension
- File size limits — Prevent storage exhaustion
- Path traversal — Sanitize filenames to prevent directory traversal
- Scan for malware — Scan uploaded files when possible
```python
# GOOD: File upload validation
from werkzeug.utils import secure_filename

ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg', 'gif'}
MAX_FILE_SIZE = 5 * 1024 * 1024  # 5 MB

def validate_upload(file):
    # Check file extension (guard against filenames with no extension)
    if '.' not in file.filename:
        raise ValueError('Invalid file type')
    ext = file.filename.rsplit('.', 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError('Invalid file type')

    # Check file size, then rewind so the file can still be saved
    file.seek(0, 2)  # seek to end
    if file.tell() > MAX_FILE_SIZE:
        raise ValueError('File too large')
    file.seek(0)

    # Check MIME type
    if not file.content_type.startswith('image/'):
        raise ValueError('Invalid MIME type')

    # Sanitize filename to prevent path traversal
    return secure_filename(file.filename)
```
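To make the path-traversal bullet concrete, here is what a filename sanitizer has to do, sketched with only the standard library (`secure_filename` from Werkzeug does this more thoroughly; `sanitize_filename` here is an illustrative stand-in):

```python
import os
import re

def sanitize_filename(filename: str) -> str:
    """Strip directory components and unsafe characters (stdlib-only sketch)."""
    # Drop any path components the client supplied, e.g. "../../etc/passwd"
    name = os.path.basename(filename.replace("\\", "/"))
    # Replace everything outside a conservative whitelist
    name = re.sub(r"[^A-Za-z0-9._-]", "_", name)
    if name in ("", ".", ".."):
        raise ValueError("empty or unsafe filename")
    return name

print(sanitize_filename("../../etc/passwd"))  # passwd
print(sanitize_filename("report 2024.pdf"))   # report_2024.pdf
```

Without this step, a crafted filename can write the upload outside the intended directory, overwriting configuration or code.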
Security Checklist
Before deploying any AI-generated code, verify:
- Passwords are hashed with bcrypt or Argon2
- All database queries use parameterized statements
- User input is validated and sanitized
- Authentication has rate limiting
- Authorization checks are in place (ownership, roles)
- CSRF protection is enabled
- Output is properly encoded (XSS prevention)
- Secrets are in environment variables, not code
- File uploads are validated (type, size, path)
- HTTPS is enforced
- Security headers are configured (CSP, HSTS, etc.)
- Dependencies are up to date and vulnerability-free
Security is not a feature — it's a responsibility. Every line of AI-generated code should be reviewed with security in mind.