diff --git a/.gitignore b/.gitignore index 1a0a4227..e2b87e88 100644 --- a/.gitignore +++ b/.gitignore @@ -196,3 +196,5 @@ old/ .memory/ .migration-backup/ scripts/create-live-*.js + +pptx-env/ diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md new file mode 100644 index 00000000..1ee011b0 --- /dev/null +++ b/CONTRIBUTING.md @@ -0,0 +1,643 @@ +# Contributing to Tractatus Framework + +**Status:** Alpha Research Project (October 2025) + +Thank you for your interest in contributing to architectural AI safety research. Tractatus welcomes contributions that advance our understanding of structural constraints in AI systems. + +--- + +## 🎯 What We're Building + +Tractatus explores whether **architectural constraints** can make certain AI decisions structurally impossible without human judgment. Unlike alignment-based approaches that hope AI will choose safety, we investigate whether safety can be enforced through system architecture. + +**This is active research, not production software.** + +We welcome contributions that: +- Advance research questions with empirical rigor +- Improve implementation quality and test coverage +- Document real-world failure modes and responses +- Challenge assumptions with evidence +- Replicate findings in new contexts + +--- + +## πŸ”¬ Types of Contributions + +### Research Contributions (Highest Value) + +**Empirical Studies** +- Controlled experiments testing framework effectiveness +- Comparative analysis with baseline (no framework) conditions +- Measurement of false positive/false negative rates +- Cross-LLM compatibility testing (GPT-4, Gemini, open-source models) +- Multi-domain generalization studies + +**Theoretical Work** +- Formal verification of safety properties +- Proofs of correctness for specific boundary conditions +- Extensions of value pluralism theory to AI systems +- Analysis of rule proliferation dynamics + +**Replication Studies** +- Independent validation of our findings +- Testing in different application domains +- 
Deployment in production contexts with documented results + +**Format**: Submit as GitHub issue tagged `research` with methodology, data, and findings. We'll work with you on publication if results are significant. + +### Implementation Contributions + +**High Priority** +1. **Fix failing tests** - We have 108 known failures that need investigation +2. **Improve test coverage** - Focus on edge cases and integration scenarios +3. **Performance optimization** - Rule validation overhead, MongoDB query efficiency +4. **Cross-platform testing** - Windows, macOS compatibility verification + +**Medium Priority** +- Language ports (Python, Rust, Go, TypeScript) +- Integration examples (Express, FastAPI, Spring Boot) +- Enhanced logging and observability +- API documentation improvements + +**Lower Priority** +- UI enhancements (currently minimal by design) +- Developer experience improvements +- Build system optimizations + +### Documentation Contributions + +**Critical Needs** +- Case studies from real deployments (with data) +- Failure mode documentation (what went wrong and why) +- Integration tutorials with working code examples +- Critical analyses of framework limitations + +**Standard Needs** +- Corrections to existing documentation +- Clarity improvements +- Code comment additions +- API reference updates + +--- + +## πŸš€ Getting Started + +### Prerequisites + +**Required** +- Node.js 18+ (tested on 18.x and 20.x) +- MongoDB 7.0+ (critical - earlier versions have compatibility issues) +- Git +- 8GB RAM minimum (for local MongoDB + tests) + +**Helpful** +- Understanding of organizational decision theory (March & Simon) +- Familiarity with value pluralism (Berlin, Chang) +- Experience with LLM-assisted development contexts + +### Local Development Setup + +```bash +# 1. Fork and clone +git clone git@github.com:YOUR_USERNAME/tractatus-framework.git +cd tractatus-framework + +# 2. Install dependencies +npm install + +# 3. 
Set up environment +cp .env.example .env +# Edit .env - ensure MongoDB connection string is correct + +# 4. Start MongoDB (if not running) +# macOS: brew services start mongodb-community +# Ubuntu: sudo systemctl start mongod +# Windows: net start MongoDB + +# 5. Initialize database with test data +npm run init:db + +# 6. Run tests to verify setup +npm test + +# Expected: 625 passing, 108 failing (known issues) +# If you get different numbers, something is wrong + +# 7. Start development server +npm start +# Runs on http://localhost:9000 +``` + +### Project Structure + +``` +tractatus-framework/ +β”œβ”€β”€ src/ +β”‚ β”œβ”€β”€ services/ # 6 core framework components +β”‚ β”‚ β”œβ”€β”€ InstructionPersistenceClassifier.service.js +β”‚ β”‚ β”œβ”€β”€ CrossReferenceValidator.service.js +β”‚ β”‚ β”œβ”€β”€ BoundaryEnforcer.service.js +β”‚ β”‚ β”œβ”€β”€ ContextPressureMonitor.service.js +β”‚ β”‚ β”œβ”€β”€ MetacognitiveVerifier.service.js +β”‚ β”‚ └── PluralisticDeliberationOrchestrator.service.js +β”‚ β”œβ”€β”€ models/ # MongoDB schemas +β”‚ β”œβ”€β”€ routes/ # API endpoints +β”‚ β”œβ”€β”€ controllers/ # Request handlers +β”‚ β”œβ”€β”€ middleware/ # Express middleware +β”‚ └── server.js # Application entry point +β”œβ”€β”€ tests/ +β”‚ β”œβ”€β”€ unit/ # Service unit tests +β”‚ └── integration/ # API integration tests +β”œβ”€β”€ public/ # Frontend (vanilla JS, no framework) +β”œβ”€β”€ docs/ # Research documentation +└── scripts/ # Utilities and migrations +``` + +**Key files to understand:** +- `src/services/ContextPressureMonitor.service.js` - Session health tracking (good entry point) +- `src/services/CrossReferenceValidator.service.js` - Training pattern override detection +- `tests/unit/ContextPressureMonitor.test.js` - Example test structure +- `.env.example` - Required configuration variables + +--- + +## πŸ“ Contribution Process + +### 1. Before You Start + +**For significant work** (new features, architectural changes, research studies): +1. 
Open a GitHub Discussion or Issue first +2. Describe your proposal with: + - Problem being addressed + - Proposed approach + - Expected outcomes + - Resource requirements +3. Wait for feedback before investing significant time + +**For minor fixes** (typos, small bugs, documentation corrections): +- Just submit a PR with clear description + +### 2. Development Workflow + +```bash +# Create feature branch +git checkout -b research/empirical-validation-study +# or +git checkout -b fix/mongodb-connection-pool +# or +git checkout -b docs/integration-tutorial + +# Make changes iteratively +# ... edit files ... + +# Run tests frequently +npm test + +# Verify no regressions +npm run test:unit +npm run test:integration + +# Commit with clear messages +git add . +git commit -m "fix(validation): resolve race condition in CrossReferenceValidator + +Issue: Concurrent validation requests caused inconsistent results +Root cause: Shared state in validator instance +Solution: Make validation stateless, pass context explicitly + +Tested with 100 concurrent requests - no failures + +Fixes #123" + +# Push to your fork +git push origin research/empirical-validation-study +``` + +### 3. 
Pull Request Guidelines + +**Title Format:** +``` +type(scope): brief description + +Examples: +fix(tests): resolve MongoDB connection timeout in integration tests +feat(validation): add configurable threshold for context pressure +docs(README): correct test count and clarify maturity status +research(replication): independent validation of 27027 failure mode +``` + +**Types:** +- `fix` - Bug fixes +- `feat` - New features +- `docs` - Documentation only +- `test` - Test additions/fixes +- `refactor` - Code restructuring +- `research` - Research contributions +- `chore` - Build/tooling changes + +**PR Description Must Include:** + +```markdown +## Problem +Clear description of what issue this addresses + +## Solution +How you solved it and why this approach + +## Testing +What tests were added/modified +How you verified the fix + +## Breaking Changes +List any breaking changes (or "None") + +## Research Context (if applicable) +Methodology, data, findings + +## Checklist +- [ ] Tests added/updated +- [ ] All tests passing locally +- [ ] Documentation updated +- [ ] No unintended breaking changes +- [ ] Commit messages follow conventions +``` + +### 4. Code Review Process + +1. **Automated checks** run first (tests, linting) +2. **Maintainer review** for: + - Alignment with research goals + - Code quality and test coverage + - Documentation completeness + - Architectural consistency +3. **Feedback** provided within 7 days (usually faster) +4. **Iteration** if changes needed +5. **Merge** when approved + +**Review criteria:** +- Does this advance research questions? +- Is it tested thoroughly? +- Is documentation clear and honest? +- Does it maintain architectural integrity? 
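As an illustration of the "automated checks" step, a lint for the PR title convention above could look like the following. This is a hedged, hypothetical sketch for contributors' reference, not the repository's actual CI configuration; the function name and the exact regex are assumptions.

```javascript
// Hypothetical sketch: check a PR title against the
// `type(scope): brief description` convention from this guide.
// ALLOWED_TYPES mirrors the list in the Pull Request Guidelines.
const ALLOWED_TYPES = ['fix', 'feat', 'docs', 'test', 'refactor', 'research', 'chore'];

function isValidPrTitle(title) {
  // Shape: lowercase type, non-empty parenthesized scope, then ": " and a description.
  const match = /^([a-z]+)\(([^)]+)\): \S.*$/.exec(title);
  return match !== null && ALLOWED_TYPES.includes(match[1]);
}

console.log(isValidPrTitle('fix(tests): resolve MongoDB connection timeout')); // true
console.log(isValidPrTitle('Fixed some stuff'));                               // false
```

Running a check like this in CI gives contributors immediate feedback on title format before a maintainer reviews the PR.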
+ +--- + +## πŸ§ͺ Testing Standards + +### Unit Tests (Required) + +**Every new function/method must have unit tests.** + +```javascript +// tests/unit/NewService.test.js +const { NewService } = require('../../src/services/NewService.service'); + +describe('NewService', () => { + describe('criticalFunction', () => { + it('should handle normal case correctly', () => { + const service = new NewService(); + const result = service.criticalFunction({ input: 'test' }); + + expect(result.status).toBe('success'); + expect(result.data).toBeDefined(); + }); + + it('should handle edge case: empty input', () => { + const service = new NewService(); + expect(() => service.criticalFunction({})) + .toThrow('Input required'); + }); + + it('should handle edge case: invalid input type', () => { + const service = new NewService(); + const result = service.criticalFunction({ input: 123 }); + + expect(result.status).toBe('error'); + expect(result.error).toContain('Expected string'); + }); + }); +}); +``` + +**Testing requirements:** +- Test normal operation +- Test edge cases (empty, null, invalid types) +- Test error conditions +- Mock external dependencies (MongoDB, APIs) +- Use descriptive test names +- One assertion per test (generally) + +### Integration Tests (For API Changes) + +```javascript +// tests/integration/api.newEndpoint.test.js +const request = require('supertest'); +const app = require('../../src/server'); +const db = require('../helpers/db-test-helper'); + +describe('POST /api/new-endpoint', () => { + beforeAll(async () => { + await db.connect(); + }); + + afterAll(async () => { + await db.cleanup(); + await db.disconnect(); + }); + + it('should create resource successfully', async () => { + const response = await request(app) + .post('/api/new-endpoint') + .send({ data: 'test' }) + .expect(201); + + expect(response.body.id).toBeDefined(); + + // Verify database state + const saved = await db.findById(response.body.id); + expect(saved.data).toBe('test'); + }); +}); 
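+
+// Hypothetical companion suite sketching the error path, since the testing
+// requirements above also call for testing error conditions. The 400 status
+// code and the shape of the error body are assumptions for illustration,
+// not the project's documented behavior; adjust to the endpoint's actual contract.
+describe('POST /api/new-endpoint with invalid input', () => {
+  beforeAll(async () => {
+    await db.connect();
+  });
+
+  afterAll(async () => {
+    await db.cleanup();
+    await db.disconnect();
+  });
+
+  it('should reject an empty payload', async () => {
+    const response = await request(app)
+      .post('/api/new-endpoint')
+      .send({})
+      .expect(400);
+
+    expect(response.body.error).toBeDefined();
+  });
+});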
+``` + +### Running Tests + +```bash +# All tests (current status: 625 pass, 108 fail) +npm test + +# Unit tests only +npm run test:unit + +# Integration tests only +npm run test:integration + +# Watch mode (auto-rerun on changes) +npm run test:watch + +# Coverage report +npm run test:coverage +``` + +**Expectations:** +- New code: 100% coverage required +- Bug fixes: Add test that would have caught the bug +- Integration tests: Must use test database, not production + +--- + +## πŸ“š Documentation Standards + +### Code Documentation + +**Use JSDoc for all public functions:** + +```javascript +/** + * Validates a proposed action against stored instruction history + * + * This prevents the "27027 failure mode" where LLM training patterns + * override explicit user instructions (e.g., MongoDB port 27017 vs + * user's explicit instruction to use 27027). + * + * @param {Object} action - Proposed action to validate + * @param {string} action.type - Action type (e.g., 'database_config') + * @param {Object} action.parameters - Action-specific parameters + * @param {Array} instructionHistory - Active instructions + * @returns {Promise} Validation outcome + * @throws {ValidationError} If action type is unsupported + * + * @example + * const result = await validator.validate({ + * type: 'database_config', + * parameters: { port: 27017 } + * }, instructionHistory); + * + * if (result.status === 'REJECTED') { + * console.log(result.reason); // "Training pattern override detected" + * } + */ +async validate(action, instructionHistory) { + // Implementation... +} +``` + +**Comment complex logic:** + +```javascript +// Edge case: When context window is 95%+ full, quality degrades rapidly. +// Empirical observation across 50+ sessions suggests threshold should be +// 60% for ELEVATED, 75% for HIGH. These values are NOT proven optimal. +if (tokenUsage > 0.60) { + // ... +} +``` + +### Research Documentation + +For research contributions, include: + +1. 
**Methodology** - How the study was conducted
+2. **Data** - Sample sizes, measurements, statistical methods
+3. **Findings** - What was discovered (with error bars/confidence intervals)
+4. **Limitations** - What the study didn't prove
+5. **Replication** - Enough detail for others to replicate
+
+**Example structure:**
+
+```markdown
+# Empirical Validation of CrossReferenceValidator
+
+## Research Question
+Does the CrossReferenceValidator reduce training pattern override frequency?
+
+## Methodology
+- Controlled experiment: 100 test cases with known override patterns
+- Conditions: (A) No validator, (B) Validator enabled
+- LLM: Claude 3.5 Sonnet
+- Measurement: Override rate per 100 interactions
+- Statistical test: Chi-square test for independence
+
+## Results
+- Condition A (no validator): 23/100 overrides (23%)
+- Condition B (validator enabled): 3/100 overrides (3%)
+- p < 0.001, effect size: medium (CramΓ©r's V β‰ˆ 0.30 for these counts)
+
+## Limitations
+- Single LLM tested (generalization unclear)
+- Synthetic test cases (may not reflect real usage)
+- Short sessions (long-term drift not measured)
+- Observer bias (researcher knew test purpose)
+
+## Conclusion
+Strong evidence that validator reduces training pattern overrides in
+controlled conditions with Claude 3.5. Replication with other LLMs
+and real-world deployments needed.
+ +## Data & Code +- Raw data: [link to CSV] +- Analysis script: [link to R/Python script] +- Test prompts: [link to test suite] +``` + +--- + +## βš–οΈ Research Ethics & Integrity + +### Required Standards + +**Transparency** +- Acknowledge all limitations +- Report negative results (what didn't work) +- Disclose conflicts of interest +- Share data and methodology + +**Accuracy** +- No fabricated statistics or results +- Clearly distinguish observation from proof +- Use appropriate statistical methods +- Acknowledge uncertainty + +**Attribution** +- Cite all sources +- Credit collaborators +- Acknowledge AI assistance in implementation +- Reference prior work + +### What We Reject + +- ❌ Fabricated data or statistics +- ❌ Selective reporting (hiding negative results) +- ❌ Plagiarism or insufficient attribution +- ❌ Overclaiming ("proves", "guarantees" without rigorous evidence) +- ❌ Undisclosed conflicts of interest + +### AI-Assisted Contributions + +**We welcome AI-assisted contributions** with proper disclosure: + +``` +This code was generated with assistance from [Claude/GPT-4/etc] and +subsequently reviewed and tested by [human contributor name]. + +Testing: [description of validation performed] +``` + +Be honest about: +- What the AI generated vs. 
what you wrote +- What testing/validation you performed +- Any limitations you're aware of + +--- + +## 🚫 What We Don't Accept + +### Technical + +- Code without tests +- Breaking changes without migration path +- Commits that reduce test coverage +- Violations of existing architectural patterns +- Features that bypass safety constraints + +### Process + +- PRs without description or context +- Unconstructive criticism without alternatives +- Ignoring review feedback +- Force-pushing over maintainer commits + +### Content + +- Disrespectful or discriminatory language +- Marketing hyperbole or unsubstantiated claims +- Promises of features/capabilities that don't exist +- Plagiarized content + +--- + +## πŸ“ž Getting Help + +**Technical Questions** +- Open a GitHub Discussion (preferred) +- Tag with appropriate label (`question`, `help-wanted`) + +**Research Collaboration** +- Email: research@agenticgovernance.digital +- Include: Research question, proposed methodology, timeline + +**Bug Reports** +- Open GitHub Issue +- Include: Steps to reproduce, expected vs actual behavior, environment + +**Security Issues** +- Email: research@agenticgovernance.digital +- Do NOT open public issue for security vulnerabilities + +--- + +## πŸ† Recognition + +Contributors are acknowledged through: + +**Code Contributors** +- GitHub contributors list (automatic) +- Release notes for significant contributions +- In-code attribution for major features + +**Research Contributors** +- Co-authorship on papers (if applicable) +- Citation in research documentation +- Acknowledgment in published materials + +**All forms of contribution are valued** - code, documentation, research, community support, and critical feedback all advance the project. + +--- + +## πŸ“œ License + +By contributing, you agree that your contributions will be licensed under Apache License 2.0 (see LICENSE file). + +You retain copyright to your contributions. 
The Apache 2.0 license grants the project and users broad permissions while protecting contributors from liability. + +--- + +## πŸŽ“ Learning Resources + +### For New Contributors + +**Start here:** +1. Read [README.md](README.md) - Understand project goals and current state +2. Browse [existing issues](https://github.com/AgenticGovernance/tractatus-framework/issues) - See what needs work +3. Review [test files](tests/) - Understand code patterns +4. Try [local setup](#local-development-setup) - Get environment working + +**Recommended reading:** +- March & Simon - *Organizations* (1958) - Organizational decision theory foundations +- Isaiah Berlin - *Two Concepts of Liberty* (1958) - Value pluralism +- Ruth Chang - *Hard Choices* (2013) - Incommensurability theory + +**Project-specific:** +- [Case Studies](https://agenticgovernance.digital/docs.html) - Real-world examples +- [API Documentation](https://agenticgovernance.digital/docs.html) - Technical reference +- Existing tests - Best way to understand how code works + +### For Researchers + +**Academic context:** +- AI safety through architectural constraints (vs. alignment) +- Value pluralism in AI system design +- Organizational theory applied to AI governance +- Empirical validation of governance frameworks + +**Open research questions:** +- What is the optimal rule count before brittleness? +- Can boundary detection be made more precise? +- Does this generalize beyond software development contexts? +- How to measure framework effectiveness rigorously? + +--- + +**Thank you for contributing to architectural AI safety research.** + +*Last updated: 2025-10-21* diff --git a/pptx-env/bin/Activate.ps1 b/pptx-env/bin/Activate.ps1 deleted file mode 100644 index b49d77ba..00000000 --- a/pptx-env/bin/Activate.ps1 +++ /dev/null @@ -1,247 +0,0 @@ -<# -.Synopsis -Activate a Python virtual environment for the current PowerShell session. 
- -.Description -Pushes the python executable for a virtual environment to the front of the -$Env:PATH environment variable and sets the prompt to signify that you are -in a Python virtual environment. Makes use of the command line switches as -well as the `pyvenv.cfg` file values present in the virtual environment. - -.Parameter VenvDir -Path to the directory that contains the virtual environment to activate. The -default value for this is the parent of the directory that the Activate.ps1 -script is located within. - -.Parameter Prompt -The prompt prefix to display when this virtual environment is activated. By -default, this prompt is the name of the virtual environment folder (VenvDir) -surrounded by parentheses and followed by a single space (ie. '(.venv) '). - -.Example -Activate.ps1 -Activates the Python virtual environment that contains the Activate.ps1 script. - -.Example -Activate.ps1 -Verbose -Activates the Python virtual environment that contains the Activate.ps1 script, -and shows extra information about the activation as it executes. - -.Example -Activate.ps1 -VenvDir C:\Users\MyUser\Common\.venv -Activates the Python virtual environment located in the specified location. - -.Example -Activate.ps1 -Prompt "MyPython" -Activates the Python virtual environment that contains the Activate.ps1 script, -and prefixes the current prompt with the specified string (surrounded in -parentheses) while the virtual environment is active. - -.Notes -On Windows, it may be required to enable this Activate.ps1 script by setting the -execution policy for the user. 
You can do this by issuing the following PowerShell -command: - -PS C:\> Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser - -For more information on Execution Policies: -https://go.microsoft.com/fwlink/?LinkID=135170 - -#> -Param( - [Parameter(Mandatory = $false)] - [String] - $VenvDir, - [Parameter(Mandatory = $false)] - [String] - $Prompt -) - -<# Function declarations --------------------------------------------------- #> - -<# -.Synopsis -Remove all shell session elements added by the Activate script, including the -addition of the virtual environment's Python executable from the beginning of -the PATH variable. - -.Parameter NonDestructive -If present, do not remove this function from the global namespace for the -session. - -#> -function global:deactivate ([switch]$NonDestructive) { - # Revert to original values - - # The prior prompt: - if (Test-Path -Path Function:_OLD_VIRTUAL_PROMPT) { - Copy-Item -Path Function:_OLD_VIRTUAL_PROMPT -Destination Function:prompt - Remove-Item -Path Function:_OLD_VIRTUAL_PROMPT - } - - # The prior PYTHONHOME: - if (Test-Path -Path Env:_OLD_VIRTUAL_PYTHONHOME) { - Copy-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME -Destination Env:PYTHONHOME - Remove-Item -Path Env:_OLD_VIRTUAL_PYTHONHOME - } - - # The prior PATH: - if (Test-Path -Path Env:_OLD_VIRTUAL_PATH) { - Copy-Item -Path Env:_OLD_VIRTUAL_PATH -Destination Env:PATH - Remove-Item -Path Env:_OLD_VIRTUAL_PATH - } - - # Just remove the VIRTUAL_ENV altogether: - if (Test-Path -Path Env:VIRTUAL_ENV) { - Remove-Item -Path env:VIRTUAL_ENV - } - - # Just remove VIRTUAL_ENV_PROMPT altogether. 
- if (Test-Path -Path Env:VIRTUAL_ENV_PROMPT) { - Remove-Item -Path env:VIRTUAL_ENV_PROMPT - } - - # Just remove the _PYTHON_VENV_PROMPT_PREFIX altogether: - if (Get-Variable -Name "_PYTHON_VENV_PROMPT_PREFIX" -ErrorAction SilentlyContinue) { - Remove-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Scope Global -Force - } - - # Leave deactivate function in the global namespace if requested: - if (-not $NonDestructive) { - Remove-Item -Path function:deactivate - } -} - -<# -.Description -Get-PyVenvConfig parses the values from the pyvenv.cfg file located in the -given folder, and returns them in a map. - -For each line in the pyvenv.cfg file, if that line can be parsed into exactly -two strings separated by `=` (with any amount of whitespace surrounding the =) -then it is considered a `key = value` line. The left hand string is the key, -the right hand is the value. - -If the value starts with a `'` or a `"` then the first and last character is -stripped from the value before being captured. - -.Parameter ConfigDir -Path to the directory that contains the `pyvenv.cfg` file. -#> -function Get-PyVenvConfig( - [String] - $ConfigDir -) { - Write-Verbose "Given ConfigDir=$ConfigDir, obtain values in pyvenv.cfg" - - # Ensure the file exists, and issue a warning if it doesn't (but still allow the function to continue). - $pyvenvConfigPath = Join-Path -Resolve -Path $ConfigDir -ChildPath 'pyvenv.cfg' -ErrorAction Continue - - # An empty map will be returned if no config file is found. - $pyvenvConfig = @{ } - - if ($pyvenvConfigPath) { - - Write-Verbose "File exists, parse `key = value` lines" - $pyvenvConfigContent = Get-Content -Path $pyvenvConfigPath - - $pyvenvConfigContent | ForEach-Object { - $keyval = $PSItem -split "\s*=\s*", 2 - if ($keyval[0] -and $keyval[1]) { - $val = $keyval[1] - - # Remove extraneous quotations around a string value. 
- if ("'""".Contains($val.Substring(0, 1))) { - $val = $val.Substring(1, $val.Length - 2) - } - - $pyvenvConfig[$keyval[0]] = $val - Write-Verbose "Adding Key: '$($keyval[0])'='$val'" - } - } - } - return $pyvenvConfig -} - - -<# Begin Activate script --------------------------------------------------- #> - -# Determine the containing directory of this script -$VenvExecPath = Split-Path -Parent $MyInvocation.MyCommand.Definition -$VenvExecDir = Get-Item -Path $VenvExecPath - -Write-Verbose "Activation script is located in path: '$VenvExecPath'" -Write-Verbose "VenvExecDir Fullname: '$($VenvExecDir.FullName)" -Write-Verbose "VenvExecDir Name: '$($VenvExecDir.Name)" - -# Set values required in priority: CmdLine, ConfigFile, Default -# First, get the location of the virtual environment, it might not be -# VenvExecDir if specified on the command line. -if ($VenvDir) { - Write-Verbose "VenvDir given as parameter, using '$VenvDir' to determine values" -} -else { - Write-Verbose "VenvDir not given as a parameter, using parent directory name as VenvDir." - $VenvDir = $VenvExecDir.Parent.FullName.TrimEnd("\\/") - Write-Verbose "VenvDir=$VenvDir" -} - -# Next, read the `pyvenv.cfg` file to determine any required value such -# as `prompt`. -$pyvenvCfg = Get-PyVenvConfig -ConfigDir $VenvDir - -# Next, set the prompt from the command line, or the config file, or -# just use the name of the virtual environment folder. -if ($Prompt) { - Write-Verbose "Prompt specified as argument, using '$Prompt'" -} -else { - Write-Verbose "Prompt not specified as argument to script, checking pyvenv.cfg value" - if ($pyvenvCfg -and $pyvenvCfg['prompt']) { - Write-Verbose " Setting based on value in pyvenv.cfg='$($pyvenvCfg['prompt'])'" - $Prompt = $pyvenvCfg['prompt']; - } - else { - Write-Verbose " Setting prompt based on parent's directory's name. 
(Is the directory name passed to venv module when creating the virtual environment)" - Write-Verbose " Got leaf-name of $VenvDir='$(Split-Path -Path $venvDir -Leaf)'" - $Prompt = Split-Path -Path $venvDir -Leaf - } -} - -Write-Verbose "Prompt = '$Prompt'" -Write-Verbose "VenvDir='$VenvDir'" - -# Deactivate any currently active virtual environment, but leave the -# deactivate function in place. -deactivate -nondestructive - -# Now set the environment variable VIRTUAL_ENV, used by many tools to determine -# that there is an activated venv. -$env:VIRTUAL_ENV = $VenvDir - -if (-not $Env:VIRTUAL_ENV_DISABLE_PROMPT) { - - Write-Verbose "Setting prompt to '$Prompt'" - - # Set the prompt to include the env name - # Make sure _OLD_VIRTUAL_PROMPT is global - function global:_OLD_VIRTUAL_PROMPT { "" } - Copy-Item -Path function:prompt -Destination function:_OLD_VIRTUAL_PROMPT - New-Variable -Name _PYTHON_VENV_PROMPT_PREFIX -Description "Python virtual environment prompt prefix" -Scope Global -Option ReadOnly -Visibility Public -Value $Prompt - - function global:prompt { - Write-Host -NoNewline -ForegroundColor Green "($_PYTHON_VENV_PROMPT_PREFIX) " - _OLD_VIRTUAL_PROMPT - } - $env:VIRTUAL_ENV_PROMPT = $Prompt -} - -# Clear PYTHONHOME -if (Test-Path -Path Env:PYTHONHOME) { - Copy-Item -Path Env:PYTHONHOME -Destination Env:_OLD_VIRTUAL_PYTHONHOME - Remove-Item -Path Env:PYTHONHOME -} - -# Add the venv to the PATH -Copy-Item -Path Env:PATH -Destination Env:_OLD_VIRTUAL_PATH -$Env:PATH = "$VenvExecDir$([System.IO.Path]::PathSeparator)$Env:PATH" diff --git a/pptx-env/bin/__pycache__/vba_extract.cpython-312.pyc b/pptx-env/bin/__pycache__/vba_extract.cpython-312.pyc deleted file mode 100644 index 1e7b2ee8..00000000 Binary files a/pptx-env/bin/__pycache__/vba_extract.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/bin/activate b/pptx-env/bin/activate deleted file mode 100644 index a6ac7737..00000000 --- a/pptx-env/bin/activate +++ /dev/null @@ -1,70 +0,0 @@ -# This file 
must be used with "source bin/activate" *from bash* -# You cannot run it directly - -deactivate () { - # reset old environment variables - if [ -n "${_OLD_VIRTUAL_PATH:-}" ] ; then - PATH="${_OLD_VIRTUAL_PATH:-}" - export PATH - unset _OLD_VIRTUAL_PATH - fi - if [ -n "${_OLD_VIRTUAL_PYTHONHOME:-}" ] ; then - PYTHONHOME="${_OLD_VIRTUAL_PYTHONHOME:-}" - export PYTHONHOME - unset _OLD_VIRTUAL_PYTHONHOME - fi - - # Call hash to forget past commands. Without forgetting - # past commands the $PATH changes we made may not be respected - hash -r 2> /dev/null - - if [ -n "${_OLD_VIRTUAL_PS1:-}" ] ; then - PS1="${_OLD_VIRTUAL_PS1:-}" - export PS1 - unset _OLD_VIRTUAL_PS1 - fi - - unset VIRTUAL_ENV - unset VIRTUAL_ENV_PROMPT - if [ ! "${1:-}" = "nondestructive" ] ; then - # Self destruct! - unset -f deactivate - fi -} - -# unset irrelevant variables -deactivate nondestructive - -# on Windows, a path can contain colons and backslashes and has to be converted: -if [ "${OSTYPE:-}" = "cygwin" ] || [ "${OSTYPE:-}" = "msys" ] ; then - # transform D:\path\to\venv to /d/path/to/venv on MSYS - # and to /cygdrive/d/path/to/venv on Cygwin - export VIRTUAL_ENV=$(cygpath /home/theflow/projects/tractatus/pptx-env) -else - # use the path as-is - export VIRTUAL_ENV=/home/theflow/projects/tractatus/pptx-env -fi - -_OLD_VIRTUAL_PATH="$PATH" -PATH="$VIRTUAL_ENV/"bin":$PATH" -export PATH - -# unset PYTHONHOME if set -# this will fail if PYTHONHOME is set to the empty string (which is bad anyway) -# could use `if (set -u; : $PYTHONHOME) ;` in bash -if [ -n "${PYTHONHOME:-}" ] ; then - _OLD_VIRTUAL_PYTHONHOME="${PYTHONHOME:-}" - unset PYTHONHOME -fi - -if [ -z "${VIRTUAL_ENV_DISABLE_PROMPT:-}" ] ; then - _OLD_VIRTUAL_PS1="${PS1:-}" - PS1='(pptx-env) '"${PS1:-}" - export PS1 - VIRTUAL_ENV_PROMPT='(pptx-env) ' - export VIRTUAL_ENV_PROMPT -fi - -# Call hash to forget past commands. 
Without forgetting -# past commands the $PATH changes we made may not be respected -hash -r 2> /dev/null diff --git a/pptx-env/bin/activate.csh b/pptx-env/bin/activate.csh deleted file mode 100644 index 51000c63..00000000 --- a/pptx-env/bin/activate.csh +++ /dev/null @@ -1,27 +0,0 @@ -# This file must be used with "source bin/activate.csh" *from csh*. -# You cannot run it directly. - -# Created by Davide Di Blasi . -# Ported to Python 3.3 venv by Andrew Svetlov - -alias deactivate 'test $?_OLD_VIRTUAL_PATH != 0 && setenv PATH "$_OLD_VIRTUAL_PATH" && unset _OLD_VIRTUAL_PATH; rehash; test $?_OLD_VIRTUAL_PROMPT != 0 && set prompt="$_OLD_VIRTUAL_PROMPT" && unset _OLD_VIRTUAL_PROMPT; unsetenv VIRTUAL_ENV; unsetenv VIRTUAL_ENV_PROMPT; test "\!:*" != "nondestructive" && unalias deactivate' - -# Unset irrelevant variables. -deactivate nondestructive - -setenv VIRTUAL_ENV /home/theflow/projects/tractatus/pptx-env - -set _OLD_VIRTUAL_PATH="$PATH" -setenv PATH "$VIRTUAL_ENV/"bin":$PATH" - - -set _OLD_VIRTUAL_PROMPT="$prompt" - -if (! "$?VIRTUAL_ENV_DISABLE_PROMPT") then - set prompt = '(pptx-env) '"$prompt" - setenv VIRTUAL_ENV_PROMPT '(pptx-env) ' -endif - -alias pydoc python -m pydoc - -rehash diff --git a/pptx-env/bin/activate.fish b/pptx-env/bin/activate.fish deleted file mode 100644 index 5c710041..00000000 --- a/pptx-env/bin/activate.fish +++ /dev/null @@ -1,69 +0,0 @@ -# This file must be used with "source /bin/activate.fish" *from fish* -# (https://fishshell.com/). You cannot run it directly. 
- -function deactivate -d "Exit virtual environment and return to normal shell environment" - # reset old environment variables - if test -n "$_OLD_VIRTUAL_PATH" - set -gx PATH $_OLD_VIRTUAL_PATH - set -e _OLD_VIRTUAL_PATH - end - if test -n "$_OLD_VIRTUAL_PYTHONHOME" - set -gx PYTHONHOME $_OLD_VIRTUAL_PYTHONHOME - set -e _OLD_VIRTUAL_PYTHONHOME - end - - if test -n "$_OLD_FISH_PROMPT_OVERRIDE" - set -e _OLD_FISH_PROMPT_OVERRIDE - # prevents error when using nested fish instances (Issue #93858) - if functions -q _old_fish_prompt - functions -e fish_prompt - functions -c _old_fish_prompt fish_prompt - functions -e _old_fish_prompt - end - end - - set -e VIRTUAL_ENV - set -e VIRTUAL_ENV_PROMPT - if test "$argv[1]" != "nondestructive" - # Self-destruct! - functions -e deactivate - end -end - -# Unset irrelevant variables. -deactivate nondestructive - -set -gx VIRTUAL_ENV /home/theflow/projects/tractatus/pptx-env - -set -gx _OLD_VIRTUAL_PATH $PATH -set -gx PATH "$VIRTUAL_ENV/"bin $PATH - -# Unset PYTHONHOME if set. -if set -q PYTHONHOME - set -gx _OLD_VIRTUAL_PYTHONHOME $PYTHONHOME - set -e PYTHONHOME -end - -if test -z "$VIRTUAL_ENV_DISABLE_PROMPT" - # fish uses a function instead of an env var to generate the prompt. - - # Save the current fish_prompt function as the function _old_fish_prompt. - functions -c fish_prompt _old_fish_prompt - - # With the original prompt function renamed, we can override with our own. - function fish_prompt - # Save the return status of the last command. - set -l old_status $status - - # Output the venv prompt; color taken from the blue of the Python logo. - printf "%s%s%s" (set_color 4B8BBE) '(pptx-env) ' (set_color normal) - - # Restore the return status of the previous command. - echo "exit $old_status" | . - # Output the original/"old" prompt. 
- _old_fish_prompt - end - - set -gx _OLD_FISH_PROMPT_OVERRIDE "$VIRTUAL_ENV" - set -gx VIRTUAL_ENV_PROMPT '(pptx-env) ' -end diff --git a/pptx-env/bin/fonttools b/pptx-env/bin/fonttools deleted file mode 100755 index c13967fa..00000000 --- a/pptx-env/bin/fonttools +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from fontTools.__main__ import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/markdown_py b/pptx-env/bin/markdown_py deleted file mode 100755 index f3859fd3..00000000 --- a/pptx-env/bin/markdown_py +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from markdown.__main__ import run -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(run()) diff --git a/pptx-env/bin/pip b/pptx-env/bin/pip deleted file mode 100755 index 0d629a5a..00000000 --- a/pptx-env/bin/pip +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from pip._internal.cli.main import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/pip3 b/pptx-env/bin/pip3 deleted file mode 100755 index 0d629a5a..00000000 --- a/pptx-env/bin/pip3 +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from pip._internal.cli.main import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/pip3.12 b/pptx-env/bin/pip3.12 deleted file mode 100755 index 0d629a5a..00000000 --- a/pptx-env/bin/pip3.12 +++ /dev/null @@ -1,8 +0,0 @@ 
-#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from pip._internal.cli.main import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/pyftmerge b/pptx-env/bin/pyftmerge deleted file mode 100755 index 4a8401ba..00000000 --- a/pptx-env/bin/pyftmerge +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from fontTools.merge import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/pyftsubset b/pptx-env/bin/pyftsubset deleted file mode 100755 index 5c3487c9..00000000 --- a/pptx-env/bin/pyftsubset +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from fontTools.subset import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/python b/pptx-env/bin/python deleted file mode 120000 index b8a0adbb..00000000 --- a/pptx-env/bin/python +++ /dev/null @@ -1 +0,0 @@ -python3 \ No newline at end of file diff --git a/pptx-env/bin/python3 b/pptx-env/bin/python3 deleted file mode 120000 index ae65fdaa..00000000 --- a/pptx-env/bin/python3 +++ /dev/null @@ -1 +0,0 @@ -/usr/bin/python3 \ No newline at end of file diff --git a/pptx-env/bin/python3.12 b/pptx-env/bin/python3.12 deleted file mode 120000 index b8a0adbb..00000000 --- a/pptx-env/bin/python3.12 +++ /dev/null @@ -1 +0,0 @@ -python3 \ No newline at end of file diff --git a/pptx-env/bin/ttx b/pptx-env/bin/ttx deleted file mode 100755 index 5004fa4a..00000000 --- a/pptx-env/bin/ttx +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from fontTools.ttx 
import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/bin/vba_extract.py b/pptx-env/bin/vba_extract.py deleted file mode 100755 index 2c928ac8..00000000 --- a/pptx-env/bin/vba_extract.py +++ /dev/null @@ -1,79 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 - -############################################################################## -# -# vba_extract - A simple utility to extract a vbaProject.bin binary from an -# Excel 2007+ xlsm file for insertion into an XlsxWriter file. -# -# SPDX-License-Identifier: BSD-2-Clause -# -# Copyright (c) 2013-2025, John McNamara, jmcnamara@cpan.org -# - -import sys -from zipfile import BadZipFile, ZipFile - - -def extract_file(xlsm_zip, filename): - # Extract a single file from an Excel xlsm macro file. - data = xlsm_zip.read("xl/" + filename) - - # Write the data to a local file. - file = open(filename, "wb") - file.write(data) - file.close() - - -# The VBA project file and project signature file we want to extract. -vba_filename = "vbaProject.bin" -vba_signature_filename = "vbaProjectSignature.bin" - -# Get the xlsm file name from the commandline. -if len(sys.argv) > 1: - xlsm_file = sys.argv[1] -else: - print( - "\nUtility to extract a vbaProject.bin binary from an Excel 2007+ " - "xlsm macro file for insertion into an XlsxWriter file.\n" - "If the macros are digitally signed, extracts also a vbaProjectSignature.bin " - "file.\n" - "\n" - "See: https://xlsxwriter.readthedocs.io/working_with_macros.html\n" - "\n" - "Usage: vba_extract file.xlsm\n" - ) - sys.exit() - -try: - # Open the Excel xlsm file as a zip file. - xlsm_zip = ZipFile(xlsm_file, "r") - - # Read the xl/vbaProject.bin file. 
- extract_file(xlsm_zip, vba_filename) - print(f"Extracted: {vba_filename}") - - if "xl/" + vba_signature_filename in xlsm_zip.namelist(): - extract_file(xlsm_zip, vba_signature_filename) - print(f"Extracted: {vba_signature_filename}") - - -except IOError as e: - print(f"File error: {str(e)}") - sys.exit() - -except KeyError as e: - # Usually when there isn't a xl/vbaProject.bin member in the file. - print(f"File error: {str(e)}") - print(f"File may not be an Excel xlsm macro file: '{xlsm_file}'") - sys.exit() - -except BadZipFile as e: - # Usually if the file is an xls file and not an xlsm file. - print(f"File error: {str(e)}: '{xlsm_file}'") - print("File may not be an Excel xlsm macro file.") - sys.exit() - -except Exception as e: - # Catch any other exceptions. - print(f"File error: {str(e)}") - sys.exit() diff --git a/pptx-env/bin/weasyprint b/pptx-env/bin/weasyprint deleted file mode 100755 index a0c39b96..00000000 --- a/pptx-env/bin/weasyprint +++ /dev/null @@ -1,8 +0,0 @@ -#!/home/theflow/projects/tractatus/pptx-env/bin/python3 -# -*- coding: utf-8 -*- -import re -import sys -from weasyprint.__main__ import main -if __name__ == '__main__': - sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) - sys.exit(main()) diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/INSTALLER b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/INSTALLER deleted file mode 100644 index a1b589e3..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/INSTALLER +++ /dev/null @@ -1 +0,0 @@ -pip diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/LICENSE b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/LICENSE deleted file mode 100644 index 33b7cdd2..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/LICENSE +++ /dev/null @@ -1,19 +0,0 @@ -Copyright (c) 2009, 2010, 2013-2016 by the Brotli Authors. 
- -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in -all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN -THE SOFTWARE. 
diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/METADATA b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/METADATA deleted file mode 100644 index 0f9bd0cc..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/METADATA +++ /dev/null @@ -1,131 +0,0 @@ -Metadata-Version: 2.1 -Name: Brotli -Version: 1.1.0 -Summary: Python bindings for the Brotli compression library -Home-page: https://github.com/google/brotli -Author: The Brotli Authors -License: MIT -Platform: Posix -Platform: MacOS X -Platform: Windows -Classifier: Development Status :: 4 - Beta -Classifier: Environment :: Console -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: MIT License -Classifier: Operating System :: MacOS :: MacOS X -Classifier: Operating System :: Microsoft :: Windows -Classifier: Operating System :: POSIX :: Linux -Classifier: Programming Language :: C -Classifier: Programming Language :: C++ -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 2 -Classifier: Programming Language :: Python :: 2.7 -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3.3 -Classifier: Programming Language :: Python :: 3.4 -Classifier: Programming Language :: Python :: 3.5 -Classifier: Programming Language :: Unix Shell -Classifier: Topic :: Software Development :: Libraries -Classifier: Topic :: Software Development :: Libraries :: Python Modules -Classifier: Topic :: System :: Archiving -Classifier: Topic :: System :: Archiving :: Compression -Classifier: Topic :: Text Processing :: Fonts -Classifier: Topic :: Utilities -Description-Content-Type: text/markdown -License-File: LICENSE - -

- [badge images: GitHub Actions Build Status, Fuzzing Status; Brotli logo]

- -### Introduction - -Brotli is a generic-purpose lossless compression algorithm that compresses data -using a combination of a modern variant of the LZ77 algorithm, Huffman coding -and 2nd order context modeling, with a compression ratio comparable to the best -currently available general-purpose compression methods. It is similar in speed -with deflate but offers more dense compression. - -The specification of the Brotli Compressed Data Format is defined in [RFC 7932](https://tools.ietf.org/html/rfc7932). - -Brotli is open-sourced under the MIT License, see the LICENSE file. - -> **Please note:** brotli is a "stream" format; it does not contain -> meta-information, like checksums or uncompresssed data length. It is possible -> to modify "raw" ranges of the compressed stream and the decoder will not -> notice that. - -### Build instructions - -#### Vcpkg - -You can download and install brotli using the [vcpkg](https://github.com/Microsoft/vcpkg/) dependency manager: - - git clone https://github.com/Microsoft/vcpkg.git - cd vcpkg - ./bootstrap-vcpkg.sh - ./vcpkg integrate install - ./vcpkg install brotli - -The brotli port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please [create an issue or pull request](https://github.com/Microsoft/vcpkg) on the vcpkg repository. - -#### Bazel - -See [Bazel](http://www.bazel.build/) - -#### CMake - -The basic commands to build and install brotli are: - - $ mkdir out && cd out - $ cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=./installed .. - $ cmake --build . --config Release --target install - -You can use other [CMake](https://cmake.org/) configuration. 
- -#### Python - -To install the latest release of the Python module, run the following: - - $ pip install brotli - -To install the tip-of-the-tree version, run: - - $ pip install --upgrade git+https://github.com/google/brotli - -See the [Python readme](python/README.md) for more details on installing -from source, development, and testing. - -### Contributing - -We glad to answer/library related questions in -[brotli mailing list](https://groups.google.com/forum/#!forum/brotli). - -Regular issues / feature requests should be reported in -[issue tracker](https://github.com/google/brotli/issues). - -For reporting vulnerability please read [SECURITY](SECURITY.md). - -For contributing changes please read [CONTRIBUTING](CONTRIBUTING.md). - -### Benchmarks -* [Squash Compression Benchmark](https://quixdb.github.io/squash-benchmark/) / [Unstable Squash Compression Benchmark](https://quixdb.github.io/squash-benchmark/unstable/) -* [Large Text Compression Benchmark](http://mattmahoney.net/dc/text.html) -* [Lzturbo Benchmark](https://sites.google.com/site/powturbo/home/benchmark) - -### Related projects -> **Disclaimer:** Brotli authors take no responsibility for the third party projects mentioned in this section. - -Independent [decoder](https://github.com/madler/brotli) implementation by Mark Adler, based entirely on format specification. - -JavaScript port of brotli [decoder](https://github.com/devongovett/brotli.js). Could be used directly via `npm install brotli` - -Hand ported [decoder / encoder](https://github.com/dominikhlbg/BrotliHaxe) in haxe by Dominik Homberger. 
Output source code: JavaScript, PHP, Python, Java and C# - -7Zip [plugin](https://github.com/mcmilk/7-Zip-Zstd) - -Dart [native bindings](https://github.com/thosakwe/brotli) - -Dart compression framework with [fast FFI-based Brotli implementation](https://pub.dev/documentation/es_compression/latest/brotli/brotli-library.html) with ready-to-use prebuilt binaries for Win/Linux/Mac diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/RECORD b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/RECORD deleted file mode 100644 index d4965507..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/RECORD +++ /dev/null @@ -1,9 +0,0 @@ -Brotli-1.1.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 -Brotli-1.1.0.dist-info/LICENSE,sha256=PRgACONpIqTo2uwRw0x68mT-1ZYtB5JK6pKMOOhmPJQ,1084 -Brotli-1.1.0.dist-info/METADATA,sha256=LJosR9yLMqr7jATPCPS3pSzTdjqqD08KpM8lk9ZuUrs,5496 -Brotli-1.1.0.dist-info/RECORD,, -Brotli-1.1.0.dist-info/WHEEL,sha256=4ZiCdXIWMxJyEClivrQv1QAHZpQh8kVYU92_ZAVwaok,152 -Brotli-1.1.0.dist-info/top_level.txt,sha256=gsS54HrhO3ZveFxeMrKo_7qH4Sm4TbQ7jGLVBEqJ4NI,15 -__pycache__/brotli.cpython-312.pyc,, -_brotli.cpython-312-x86_64-linux-gnu.so,sha256=heG65RZKWcoNLgOOe4_gCQnMVwCaGOo8031-EQJQ-cc,7458584 -brotli.py,sha256=PnGIVmeFGFHSOwermGwohhd2Fyr44FhgejLQkkIIFiA,1866 diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/WHEEL b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/WHEEL deleted file mode 100644 index d1b3f1da..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/WHEEL +++ /dev/null @@ -1,6 +0,0 @@ -Wheel-Version: 1.0 -Generator: bdist_wheel (0.41.2) -Root-Is-Purelib: false -Tag: cp312-cp312-manylinux_2_17_x86_64 -Tag: cp312-cp312-manylinux2014_x86_64 - diff --git a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/top_level.txt b/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/top_level.txt deleted 
file mode 100644 index a111e9cc..00000000 --- a/pptx-env/lib/python3.12/site-packages/Brotli-1.1.0.dist-info/top_level.txt +++ /dev/null @@ -1,2 +0,0 @@ -_brotli -brotli diff --git a/pptx-env/lib/python3.12/site-packages/PIL/AvifImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/AvifImagePlugin.py deleted file mode 100644 index 366e0c86..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/AvifImagePlugin.py +++ /dev/null @@ -1,291 +0,0 @@ -from __future__ import annotations - -import os -from io import BytesIO -from typing import IO - -from . import ExifTags, Image, ImageFile - -try: - from . import _avif - - SUPPORTED = True -except ImportError: - SUPPORTED = False - -# Decoder options as module globals, until there is a way to pass parameters -# to Image.open (see https://github.com/python-pillow/Pillow/issues/569) -DECODE_CODEC_CHOICE = "auto" -DEFAULT_MAX_THREADS = 0 - - -def get_codec_version(codec_name: str) -> str | None: - versions = _avif.codec_versions() - for version in versions.split(", "): - if version.split(" [")[0] == codec_name: - return version.split(":")[-1].split(" ")[0] - return None - - -def _accept(prefix: bytes) -> bool | str: - if prefix[4:8] != b"ftyp": - return False - major_brand = prefix[8:12] - if major_brand in ( - # coding brands - b"avif", - b"avis", - # We accept files with AVIF container brands; we can't yet know if - # the ftyp box has the correct compatible brands, but if it doesn't - # then the plugin will raise a SyntaxError which Pillow will catch - # before moving on to the next plugin that accepts the file. - # - # Also, because this file might not actually be an AVIF file, we - # don't raise an error if AVIF support isn't properly compiled. 
- b"mif1", - b"msf1", - ): - if not SUPPORTED: - return ( - "image file could not be identified because AVIF support not installed" - ) - return True - return False - - -def _get_default_max_threads() -> int: - if DEFAULT_MAX_THREADS: - return DEFAULT_MAX_THREADS - if hasattr(os, "sched_getaffinity"): - return len(os.sched_getaffinity(0)) - else: - return os.cpu_count() or 1 - - -class AvifImageFile(ImageFile.ImageFile): - format = "AVIF" - format_description = "AVIF image" - __frame = -1 - - def _open(self) -> None: - if not SUPPORTED: - msg = "image file could not be opened because AVIF support not installed" - raise SyntaxError(msg) - - if DECODE_CODEC_CHOICE != "auto" and not _avif.decoder_codec_available( - DECODE_CODEC_CHOICE - ): - msg = "Invalid opening codec" - raise ValueError(msg) - self._decoder = _avif.AvifDecoder( - self.fp.read(), - DECODE_CODEC_CHOICE, - _get_default_max_threads(), - ) - - # Get info from decoder - self._size, self.n_frames, self._mode, icc, exif, exif_orientation, xmp = ( - self._decoder.get_info() - ) - self.is_animated = self.n_frames > 1 - - if icc: - self.info["icc_profile"] = icc - if xmp: - self.info["xmp"] = xmp - - if exif_orientation != 1 or exif: - exif_data = Image.Exif() - if exif: - exif_data.load(exif) - original_orientation = exif_data.get(ExifTags.Base.Orientation, 1) - else: - original_orientation = 1 - if exif_orientation != original_orientation: - exif_data[ExifTags.Base.Orientation] = exif_orientation - exif = exif_data.tobytes() - if exif: - self.info["exif"] = exif - self.seek(0) - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - - # Set tile - self.__frame = frame - self.tile = [ImageFile._Tile("raw", (0, 0) + self.size, 0, self.mode)] - - def load(self) -> Image.core.PixelAccess | None: - if self.tile: - # We need to load the image data for this frame - data, timescale, pts_in_timescales, duration_in_timescales = ( - self._decoder.get_frame(self.__frame) - ) - 
self.info["timestamp"] = round(1000 * (pts_in_timescales / timescale)) - self.info["duration"] = round(1000 * (duration_in_timescales / timescale)) - - if self.fp and self._exclusive_fp: - self.fp.close() - self.fp = BytesIO(data) - - return super().load() - - def load_seek(self, pos: int) -> None: - pass - - def tell(self) -> int: - return self.__frame - - -def _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - _save(im, fp, filename, save_all=True) - - -def _save( - im: Image.Image, fp: IO[bytes], filename: str | bytes, save_all: bool = False -) -> None: - info = im.encoderinfo.copy() - if save_all: - append_images = list(info.get("append_images", [])) - else: - append_images = [] - - total = 0 - for ims in [im] + append_images: - total += getattr(ims, "n_frames", 1) - - quality = info.get("quality", 75) - if not isinstance(quality, int) or quality < 0 or quality > 100: - msg = "Invalid quality setting" - raise ValueError(msg) - - duration = info.get("duration", 0) - subsampling = info.get("subsampling", "4:2:0") - speed = info.get("speed", 6) - max_threads = info.get("max_threads", _get_default_max_threads()) - codec = info.get("codec", "auto") - if codec != "auto" and not _avif.encoder_codec_available(codec): - msg = "Invalid saving codec" - raise ValueError(msg) - range_ = info.get("range", "full") - tile_rows_log2 = info.get("tile_rows", 0) - tile_cols_log2 = info.get("tile_cols", 0) - alpha_premultiplied = bool(info.get("alpha_premultiplied", False)) - autotiling = bool(info.get("autotiling", tile_rows_log2 == tile_cols_log2 == 0)) - - icc_profile = info.get("icc_profile", im.info.get("icc_profile")) - exif_orientation = 1 - if exif := info.get("exif"): - if isinstance(exif, Image.Exif): - exif_data = exif - else: - exif_data = Image.Exif() - exif_data.load(exif) - if ExifTags.Base.Orientation in exif_data: - exif_orientation = exif_data.pop(ExifTags.Base.Orientation) - exif = exif_data.tobytes() if exif_data else b"" - elif 
isinstance(exif, Image.Exif): - exif = exif_data.tobytes() - - xmp = info.get("xmp") - - if isinstance(xmp, str): - xmp = xmp.encode("utf-8") - - advanced = info.get("advanced") - if advanced is not None: - if isinstance(advanced, dict): - advanced = advanced.items() - try: - advanced = tuple(advanced) - except TypeError: - invalid = True - else: - invalid = any(not isinstance(v, tuple) or len(v) != 2 for v in advanced) - if invalid: - msg = ( - "advanced codec options must be a dict of key-value string " - "pairs or a series of key-value two-tuples" - ) - raise ValueError(msg) - - # Setup the AVIF encoder - enc = _avif.AvifEncoder( - im.size, - subsampling, - quality, - speed, - max_threads, - codec, - range_, - tile_rows_log2, - tile_cols_log2, - alpha_premultiplied, - autotiling, - icc_profile or b"", - exif or b"", - exif_orientation, - xmp or b"", - advanced, - ) - - # Add each frame - frame_idx = 0 - frame_duration = 0 - cur_idx = im.tell() - is_single_frame = total == 1 - try: - for ims in [im] + append_images: - # Get number of frames in this image - nfr = getattr(ims, "n_frames", 1) - - for idx in range(nfr): - ims.seek(idx) - - # Make sure image mode is supported - frame = ims - rawmode = ims.mode - if ims.mode not in {"RGB", "RGBA"}: - rawmode = "RGBA" if ims.has_transparency_data else "RGB" - frame = ims.convert(rawmode) - - # Update frame duration - if isinstance(duration, (list, tuple)): - frame_duration = duration[frame_idx] - else: - frame_duration = duration - - # Append the frame to the animation encoder - enc.add( - frame.tobytes("raw", rawmode), - frame_duration, - frame.size, - rawmode, - is_single_frame, - ) - - # Update frame index - frame_idx += 1 - - if not save_all: - break - - finally: - im.seek(cur_idx) - - # Get the final output from the encoder - data = enc.finish() - if data is None: - msg = "cannot write file as AVIF (encoder returned None)" - raise OSError(msg) - - fp.write(data) - - -Image.register_open(AvifImageFile.format, 
AvifImageFile, _accept) -if SUPPORTED: - Image.register_save(AvifImageFile.format, _save) - Image.register_save_all(AvifImageFile.format, _save_all) - Image.register_extensions(AvifImageFile.format, [".avif", ".avifs"]) - Image.register_mime(AvifImageFile.format, "image/avif") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/BdfFontFile.py b/pptx-env/lib/python3.12/site-packages/PIL/BdfFontFile.py deleted file mode 100644 index f175e2f4..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/BdfFontFile.py +++ /dev/null @@ -1,122 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# bitmap distribution font (bdf) file parser -# -# history: -# 1996-05-16 fl created (as bdf2pil) -# 1997-08-25 fl converted to FontFile driver -# 2001-05-25 fl removed bogus __init__ call -# 2002-11-20 fl robustification (from Kevin Cazabon, Dmitry Vasiliev) -# 2003-04-22 fl more robustification (from Graham Dumpleton) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -""" -Parse X Bitmap Distribution Format (BDF) -""" -from __future__ import annotations - -from typing import BinaryIO - -from . 
import FontFile, Image - - -def bdf_char( - f: BinaryIO, -) -> ( - tuple[ - str, - int, - tuple[tuple[int, int], tuple[int, int, int, int], tuple[int, int, int, int]], - Image.Image, - ] - | None -): - # skip to STARTCHAR - while True: - s = f.readline() - if not s: - return None - if s.startswith(b"STARTCHAR"): - break - id = s[9:].strip().decode("ascii") - - # load symbol properties - props = {} - while True: - s = f.readline() - if not s or s.startswith(b"BITMAP"): - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - - # load bitmap - bitmap = bytearray() - while True: - s = f.readline() - if not s or s.startswith(b"ENDCHAR"): - break - bitmap += s[:-1] - - # The word BBX - # followed by the width in x (BBw), height in y (BBh), - # and x and y displacement (BBxoff0, BByoff0) - # of the lower left corner from the origin of the character. - width, height, x_disp, y_disp = (int(p) for p in props["BBX"].split()) - - # The word DWIDTH - # followed by the width in x and y of the character in device pixels. 
- dwx, dwy = (int(p) for p in props["DWIDTH"].split()) - - bbox = ( - (dwx, dwy), - (x_disp, -y_disp - height, width + x_disp, -y_disp), - (0, 0, width, height), - ) - - try: - im = Image.frombytes("1", (width, height), bitmap, "hex", "1") - except ValueError: - # deal with zero-width characters - im = Image.new("1", (width, height)) - - return id, int(props["ENCODING"]), bbox, im - - -class BdfFontFile(FontFile.FontFile): - """Font file plugin for the X11 BDF format.""" - - def __init__(self, fp: BinaryIO) -> None: - super().__init__() - - s = fp.readline() - if not s.startswith(b"STARTFONT 2.1"): - msg = "not a valid BDF file" - raise SyntaxError(msg) - - props = {} - comments = [] - - while True: - s = fp.readline() - if not s or s.startswith(b"ENDPROPERTIES"): - break - i = s.find(b" ") - props[s[:i].decode("ascii")] = s[i + 1 : -1].decode("ascii") - if s[:i] in [b"COMMENT", b"COPYRIGHT"]: - if s.find(b"LogicalFontDescription") < 0: - comments.append(s[i + 1 : -1].decode("ascii")) - - while True: - c = bdf_char(fp) - if not c: - break - id, ch, (xy, dst, src), im = c - if 0 <= ch < len(self.glyph): - self.glyph[ch] = xy, dst, src, im diff --git a/pptx-env/lib/python3.12/site-packages/PIL/BlpImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/BlpImagePlugin.py deleted file mode 100644 index f7be7746..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/BlpImagePlugin.py +++ /dev/null @@ -1,497 +0,0 @@ -""" -Blizzard Mipmap Format (.blp) -Jerome Leclanche - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -BLP1 files, used mostly in Warcraft III, are not fully supported. -All types of BLP2 files used in World of Warcraft are supported. - -The BLP file structure consists of a header, up to 16 mipmaps of the -texture - -Texture sizes must be powers of two, though the two dimensions do -not have to be equal; 512x256 is valid, but 512x200 is not. 
-The first mipmap (mipmap #0) is the full size image; each subsequent -mipmap halves both dimensions. The final mipmap should be 1x1. - -BLP files come in many different flavours: -* JPEG-compressed (type == 0) - only supported for BLP1. -* RAW images (type == 1, encoding == 1). Each mipmap is stored as an - array of 8-bit values, one per pixel, left to right, top to bottom. - Each value is an index to the palette. -* DXT-compressed (type == 1, encoding == 2): -- DXT1 compression is used if alpha_encoding == 0. - - An additional alpha bit is used if alpha_depth == 1. - - DXT3 compression is used if alpha_encoding == 1. - - DXT5 compression is used if alpha_encoding == 7. -""" - -from __future__ import annotations - -import abc -import os -import struct -from enum import IntEnum -from io import BytesIO -from typing import IO - -from . import Image, ImageFile - - -class Format(IntEnum): - JPEG = 0 - - -class Encoding(IntEnum): - UNCOMPRESSED = 1 - DXT = 2 - UNCOMPRESSED_RAW_BGRA = 3 - - -class AlphaEncoding(IntEnum): - DXT1 = 0 - DXT3 = 1 - DXT5 = 7 - - -def unpack_565(i: int) -> tuple[int, int, int]: - return ((i >> 11) & 0x1F) << 3, ((i >> 5) & 0x3F) << 2, (i & 0x1F) << 3 - - -def decode_dxt1( - data: bytes, alpha: bool = False -) -> tuple[bytearray, bytearray, bytearray, bytearray]: - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 8 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block_index in range(blocks): - # Decode next 8-byte block. 
- idx = block_index * 8 - color0, color1, bits = struct.unpack_from("> 2 - - a = 0xFF - if control == 0: - r, g, b = r0, g0, b0 - elif control == 1: - r, g, b = r1, g1, b1 - elif control == 2: - if color0 > color1: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - else: - r = (r0 + r1) // 2 - g = (g0 + g1) // 2 - b = (b0 + b1) // 2 - elif control == 3: - if color0 > color1: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - else: - r, g, b, a = 0, 0, 0, 0 - - if alpha: - ret[j].extend([r, g, b, a]) - else: - ret[j].extend([r, g, b]) - - return ret - - -def decode_dxt3(data: bytes) -> tuple[bytearray, bytearray, bytearray, bytearray]: - """ - input: one "row" of data (i.e. will produce 4*width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block_index in range(blocks): - idx = block_index * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. - bits = struct.unpack_from("<8B", block) - color0, color1 = struct.unpack_from(">= 4 - else: - high = True - a &= 0xF - a *= 17 # We get a value between 0 and 15 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -def decode_dxt5(data: bytes) -> tuple[bytearray, bytearray, bytearray, bytearray]: - """ - input: one "row" of data (i.e. will produce 4 * width pixels) - """ - - blocks = len(data) // 16 # number of blocks in row - ret = (bytearray(), bytearray(), bytearray(), bytearray()) - - for block_index in range(blocks): - idx = block_index * 16 - block = data[idx : idx + 16] - # Decode next 16-byte block. 
- a0, a1 = struct.unpack_from("> alphacode_index) & 0x07 - elif alphacode_index == 15: - alphacode = (alphacode2 >> 15) | ((alphacode1 << 1) & 0x06) - else: # alphacode_index >= 18 and alphacode_index <= 45 - alphacode = (alphacode1 >> (alphacode_index - 16)) & 0x07 - - if alphacode == 0: - a = a0 - elif alphacode == 1: - a = a1 - elif a0 > a1: - a = ((8 - alphacode) * a0 + (alphacode - 1) * a1) // 7 - elif alphacode == 6: - a = 0 - elif alphacode == 7: - a = 255 - else: - a = ((6 - alphacode) * a0 + (alphacode - 1) * a1) // 5 - - color_code = (code >> 2 * (4 * j + i)) & 0x03 - - if color_code == 0: - r, g, b = r0, g0, b0 - elif color_code == 1: - r, g, b = r1, g1, b1 - elif color_code == 2: - r = (2 * r0 + r1) // 3 - g = (2 * g0 + g1) // 3 - b = (2 * b0 + b1) // 3 - elif color_code == 3: - r = (2 * r1 + r0) // 3 - g = (2 * g1 + g0) // 3 - b = (2 * b1 + b0) // 3 - - ret[j].extend([r, g, b, a]) - - return ret - - -class BLPFormatError(NotImplementedError): - pass - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith((b"BLP1", b"BLP2")) - - -class BlpImageFile(ImageFile.ImageFile): - """ - Blizzard Mipmap Format - """ - - format = "BLP" - format_description = "Blizzard Mipmap Format" - - def _open(self) -> None: - self.magic = self.fp.read(4) - if not _accept(self.magic): - msg = f"Bad BLP magic {repr(self.magic)}" - raise BLPFormatError(msg) - - compression = struct.unpack(" tuple[int, int]: - try: - self._read_header() - self._load() - except struct.error as e: - msg = "Truncated BLP file" - raise OSError(msg) from e - return -1, 0 - - @abc.abstractmethod - def _load(self) -> None: - pass - - def _read_header(self) -> None: - self._offsets = struct.unpack("<16I", self._safe_read(16 * 4)) - self._lengths = struct.unpack("<16I", self._safe_read(16 * 4)) - - def _safe_read(self, length: int) -> bytes: - assert self.fd is not None - return ImageFile._safe_read(self.fd, length) - - def _read_palette(self) -> list[tuple[int, int, int, int]]: - ret = [] - 
for i in range(256): - try: - b, g, r, a = struct.unpack("<4B", self._safe_read(4)) - except struct.error: - break - ret.append((b, g, r, a)) - return ret - - def _read_bgra( - self, palette: list[tuple[int, int, int, int]], alpha: bool - ) -> bytearray: - data = bytearray() - _data = BytesIO(self._safe_read(self._lengths[0])) - while True: - try: - (offset,) = struct.unpack(" None: - self._compression, self._encoding, alpha = self.args - - if self._compression == Format.JPEG: - self._decode_jpeg_stream() - - elif self._compression == 1: - if self._encoding in (4, 5): - palette = self._read_palette() - data = self._read_bgra(palette, alpha) - self.set_as_raw(data) - else: - msg = f"Unsupported BLP encoding {repr(self._encoding)}" - raise BLPFormatError(msg) - else: - msg = f"Unsupported BLP compression {repr(self._encoding)}" - raise BLPFormatError(msg) - - def _decode_jpeg_stream(self) -> None: - from .JpegImagePlugin import JpegImageFile - - (jpeg_header_size,) = struct.unpack(" None: - self._compression, self._encoding, alpha, self._alpha_encoding = self.args - - palette = self._read_palette() - - assert self.fd is not None - self.fd.seek(self._offsets[0]) - - if self._compression == 1: - # Uncompressed or DirectX compression - - if self._encoding == Encoding.UNCOMPRESSED: - data = self._read_bgra(palette, alpha) - - elif self._encoding == Encoding.DXT: - data = bytearray() - if self._alpha_encoding == AlphaEncoding.DXT1: - linesize = (self.state.xsize + 3) // 4 * 8 - for yb in range((self.state.ysize + 3) // 4): - for d in decode_dxt1(self._safe_read(linesize), alpha): - data += d - - elif self._alpha_encoding == AlphaEncoding.DXT3: - linesize = (self.state.xsize + 3) // 4 * 16 - for yb in range((self.state.ysize + 3) // 4): - for d in decode_dxt3(self._safe_read(linesize)): - data += d - - elif self._alpha_encoding == AlphaEncoding.DXT5: - linesize = (self.state.xsize + 3) // 4 * 16 - for yb in range((self.state.ysize + 3) // 4): - for d in 
decode_dxt5(self._safe_read(linesize)): - data += d - else: - msg = f"Unsupported alpha encoding {repr(self._alpha_encoding)}" - raise BLPFormatError(msg) - else: - msg = f"Unknown BLP encoding {repr(self._encoding)}" - raise BLPFormatError(msg) - - else: - msg = f"Unknown BLP compression {repr(self._compression)}" - raise BLPFormatError(msg) - - self.set_as_raw(data) - - -class BLPEncoder(ImageFile.PyEncoder): - _pushes_fd = True - - def _write_palette(self) -> bytes: - data = b"" - assert self.im is not None - palette = self.im.getpalette("RGBA", "RGBA") - for i in range(len(palette) // 4): - r, g, b, a = palette[i * 4 : (i + 1) * 4] - data += struct.pack("<4B", b, g, r, a) - while len(data) < 256 * 4: - data += b"\x00" * 4 - return data - - def encode(self, bufsize: int) -> tuple[int, int, bytes]: - palette_data = self._write_palette() - - offset = 20 + 16 * 4 * 2 + len(palette_data) - data = struct.pack("<16I", offset, *((0,) * 15)) - - assert self.im is not None - w, h = self.im.size - data += struct.pack("<16I", w * h, *((0,) * 15)) - - data += palette_data - - for y in range(h): - for x in range(w): - data += struct.pack(" None: - if im.mode != "P": - msg = "Unsupported BLP image mode" - raise ValueError(msg) - - magic = b"BLP1" if im.encoderinfo.get("blp_version") == "BLP1" else b"BLP2" - fp.write(magic) - - assert im.palette is not None - fp.write(struct.pack(" mode, rawmode - 1: ("P", "P;1"), - 4: ("P", "P;4"), - 8: ("P", "P"), - 16: ("RGB", "BGR;15"), - 24: ("RGB", "BGR"), - 32: ("RGB", "BGRX"), -} - -USE_RAW_ALPHA = False - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"BM") - - -def _dib_accept(prefix: bytes) -> bool: - return i32(prefix) in [12, 40, 52, 56, 64, 108, 124] - - -# ============================================================================= -# Image plugin for the Windows BMP format. 
-# ============================================================================= -class BmpImageFile(ImageFile.ImageFile): - """Image plugin for the Windows Bitmap format (BMP)""" - - # ------------------------------------------------------------- Description - format_description = "Windows Bitmap" - format = "BMP" - - # -------------------------------------------------- BMP Compression values - COMPRESSIONS = {"RAW": 0, "RLE8": 1, "RLE4": 2, "BITFIELDS": 3, "JPEG": 4, "PNG": 5} - for k, v in COMPRESSIONS.items(): - vars()[k] = v - - def _bitmap(self, header: int = 0, offset: int = 0) -> None: - """Read relevant info about the BMP""" - read, seek = self.fp.read, self.fp.seek - if header: - seek(header) - # read bmp header size @offset 14 (this is part of the header size) - file_info: dict[str, bool | int | tuple[int, ...]] = { - "header_size": i32(read(4)), - "direction": -1, - } - - # -------------------- If requested, read header at a specific position - # read the rest of the bmp header, without its size - assert isinstance(file_info["header_size"], int) - header_data = ImageFile._safe_read(self.fp, file_info["header_size"] - 4) - - # ------------------------------- Windows Bitmap v2, IBM OS/2 Bitmap v1 - # ----- This format has different offsets because of width/height types - # 12: BITMAPCOREHEADER/OS21XBITMAPHEADER - if file_info["header_size"] == 12: - file_info["width"] = i16(header_data, 0) - file_info["height"] = i16(header_data, 2) - file_info["planes"] = i16(header_data, 4) - file_info["bits"] = i16(header_data, 6) - file_info["compression"] = self.COMPRESSIONS["RAW"] - file_info["palette_padding"] = 3 - - # --------------------------------------------- Windows Bitmap v3 to v5 - # 40: BITMAPINFOHEADER - # 52: BITMAPV2HEADER - # 56: BITMAPV3HEADER - # 64: BITMAPCOREHEADER2/OS22XBITMAPHEADER - # 108: BITMAPV4HEADER - # 124: BITMAPV5HEADER - elif file_info["header_size"] in (40, 52, 56, 64, 108, 124): - file_info["y_flip"] = header_data[7] == 0xFF - 
file_info["direction"] = 1 if file_info["y_flip"] else -1 - file_info["width"] = i32(header_data, 0) - file_info["height"] = ( - i32(header_data, 4) - if not file_info["y_flip"] - else 2**32 - i32(header_data, 4) - ) - file_info["planes"] = i16(header_data, 8) - file_info["bits"] = i16(header_data, 10) - file_info["compression"] = i32(header_data, 12) - # byte size of pixel data - file_info["data_size"] = i32(header_data, 16) - file_info["pixels_per_meter"] = ( - i32(header_data, 20), - i32(header_data, 24), - ) - file_info["colors"] = i32(header_data, 28) - file_info["palette_padding"] = 4 - assert isinstance(file_info["pixels_per_meter"], tuple) - self.info["dpi"] = tuple(x / 39.3701 for x in file_info["pixels_per_meter"]) - if file_info["compression"] == self.COMPRESSIONS["BITFIELDS"]: - masks = ["r_mask", "g_mask", "b_mask"] - if len(header_data) >= 48: - if len(header_data) >= 52: - masks.append("a_mask") - else: - file_info["a_mask"] = 0x0 - for idx, mask in enumerate(masks): - file_info[mask] = i32(header_data, 36 + idx * 4) - else: - # 40 byte headers only have the three components in the - # bitfields masks, ref: - # https://msdn.microsoft.com/en-us/library/windows/desktop/dd183376(v=vs.85).aspx - # See also - # https://github.com/python-pillow/Pillow/issues/1293 - # There is a 4th component in the RGBQuad, in the alpha - # location, but it is listed as a reserved component, - # and it is not generally an alpha channel - file_info["a_mask"] = 0x0 - for mask in masks: - file_info[mask] = i32(read(4)) - assert isinstance(file_info["r_mask"], int) - assert isinstance(file_info["g_mask"], int) - assert isinstance(file_info["b_mask"], int) - assert isinstance(file_info["a_mask"], int) - file_info["rgb_mask"] = ( - file_info["r_mask"], - file_info["g_mask"], - file_info["b_mask"], - ) - file_info["rgba_mask"] = ( - file_info["r_mask"], - file_info["g_mask"], - file_info["b_mask"], - file_info["a_mask"], - ) - else: - msg = f"Unsupported BMP header type 
({file_info['header_size']})" - raise OSError(msg) - - # ------------------ Special case : header is reported 40, which - # ---------------------- is shorter than real size for bpp >= 16 - assert isinstance(file_info["width"], int) - assert isinstance(file_info["height"], int) - self._size = file_info["width"], file_info["height"] - - # ------- If color count was not found in the header, compute from bits - assert isinstance(file_info["bits"], int) - file_info["colors"] = ( - file_info["colors"] - if file_info.get("colors", 0) - else (1 << file_info["bits"]) - ) - assert isinstance(file_info["colors"], int) - if offset == 14 + file_info["header_size"] and file_info["bits"] <= 8: - offset += 4 * file_info["colors"] - - # ---------------------- Check bit depth for unusual unsupported values - self._mode, raw_mode = BIT2MODE.get(file_info["bits"], ("", "")) - if not self.mode: - msg = f"Unsupported BMP pixel depth ({file_info['bits']})" - raise OSError(msg) - - # ---------------- Process BMP with Bitfields compression (not palette) - decoder_name = "raw" - if file_info["compression"] == self.COMPRESSIONS["BITFIELDS"]: - SUPPORTED: dict[int, list[tuple[int, ...]]] = { - 32: [ - (0xFF0000, 0xFF00, 0xFF, 0x0), - (0xFF000000, 0xFF0000, 0xFF00, 0x0), - (0xFF000000, 0xFF00, 0xFF, 0x0), - (0xFF000000, 0xFF0000, 0xFF00, 0xFF), - (0xFF, 0xFF00, 0xFF0000, 0xFF000000), - (0xFF0000, 0xFF00, 0xFF, 0xFF000000), - (0xFF000000, 0xFF00, 0xFF, 0xFF0000), - (0x0, 0x0, 0x0, 0x0), - ], - 24: [(0xFF0000, 0xFF00, 0xFF)], - 16: [(0xF800, 0x7E0, 0x1F), (0x7C00, 0x3E0, 0x1F)], - } - MASK_MODES = { - (32, (0xFF0000, 0xFF00, 0xFF, 0x0)): "BGRX", - (32, (0xFF000000, 0xFF0000, 0xFF00, 0x0)): "XBGR", - (32, (0xFF000000, 0xFF00, 0xFF, 0x0)): "BGXR", - (32, (0xFF000000, 0xFF0000, 0xFF00, 0xFF)): "ABGR", - (32, (0xFF, 0xFF00, 0xFF0000, 0xFF000000)): "RGBA", - (32, (0xFF0000, 0xFF00, 0xFF, 0xFF000000)): "BGRA", - (32, (0xFF000000, 0xFF00, 0xFF, 0xFF0000)): "BGAR", - (32, (0x0, 0x0, 0x0, 0x0)): "BGRA", 
- (24, (0xFF0000, 0xFF00, 0xFF)): "BGR", - (16, (0xF800, 0x7E0, 0x1F)): "BGR;16", - (16, (0x7C00, 0x3E0, 0x1F)): "BGR;15", - } - if file_info["bits"] in SUPPORTED: - if ( - file_info["bits"] == 32 - and file_info["rgba_mask"] in SUPPORTED[file_info["bits"]] - ): - assert isinstance(file_info["rgba_mask"], tuple) - raw_mode = MASK_MODES[(file_info["bits"], file_info["rgba_mask"])] - self._mode = "RGBA" if "A" in raw_mode else self.mode - elif ( - file_info["bits"] in (24, 16) - and file_info["rgb_mask"] in SUPPORTED[file_info["bits"]] - ): - assert isinstance(file_info["rgb_mask"], tuple) - raw_mode = MASK_MODES[(file_info["bits"], file_info["rgb_mask"])] - else: - msg = "Unsupported BMP bitfields layout" - raise OSError(msg) - else: - msg = "Unsupported BMP bitfields layout" - raise OSError(msg) - elif file_info["compression"] == self.COMPRESSIONS["RAW"]: - if file_info["bits"] == 32 and ( - header == 22 or USE_RAW_ALPHA # 32-bit .cur offset - ): - raw_mode, self._mode = "BGRA", "RGBA" - elif file_info["compression"] in ( - self.COMPRESSIONS["RLE8"], - self.COMPRESSIONS["RLE4"], - ): - decoder_name = "bmp_rle" - else: - msg = f"Unsupported BMP compression ({file_info['compression']})" - raise OSError(msg) - - # --------------- Once the header is processed, process the palette/LUT - if self.mode == "P": # Paletted for 1, 4 and 8 bit images - # ---------------------------------------------------- 1-bit images - if not (0 < file_info["colors"] <= 65536): - msg = f"Unsupported BMP Palette size ({file_info['colors']})" - raise OSError(msg) - else: - assert isinstance(file_info["palette_padding"], int) - padding = file_info["palette_padding"] - palette = read(padding * file_info["colors"]) - grayscale = True - indices = ( - (0, 255) - if file_info["colors"] == 2 - else list(range(file_info["colors"])) - ) - - # ----------------- Check if grayscale and ignore palette if so - for ind, val in enumerate(indices): - rgb = palette[ind * padding : ind * padding + 3] - if rgb != 
o8(val) * 3: - grayscale = False - - # ------- If all colors are gray, white or black, ditch palette - if grayscale: - self._mode = "1" if file_info["colors"] == 2 else "L" - raw_mode = self.mode - else: - self._mode = "P" - self.palette = ImagePalette.raw( - "BGRX" if padding == 4 else "BGR", palette - ) - - # ---------------------------- Finally set the tile data for the plugin - self.info["compression"] = file_info["compression"] - args: list[Any] = [raw_mode] - if decoder_name == "bmp_rle": - args.append(file_info["compression"] == self.COMPRESSIONS["RLE4"]) - else: - assert isinstance(file_info["width"], int) - args.append(((file_info["width"] * file_info["bits"] + 31) >> 3) & (~3)) - args.append(file_info["direction"]) - self.tile = [ - ImageFile._Tile( - decoder_name, - (0, 0, file_info["width"], file_info["height"]), - offset or self.fp.tell(), - tuple(args), - ) - ] - - def _open(self) -> None: - """Open file, check magic number and read header""" - # read 14 bytes: magic number, filesize, reserved, header final offset - head_data = self.fp.read(14) - # choke if the file does not have the required magic bytes - if not _accept(head_data): - msg = "Not a BMP file" - raise SyntaxError(msg) - # read the start position of the BMP image data (u32) - offset = i32(head_data, 10) - # load bitmap information (offset=raster info) - self._bitmap(offset=offset) - - -class BmpRleDecoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - rle4 = self.args[1] - data = bytearray() - x = 0 - dest_length = self.state.xsize * self.state.ysize - while len(data) < dest_length: - pixels = self.fd.read(1) - byte = self.fd.read(1) - if not pixels or not byte: - break - num_pixels = pixels[0] - if num_pixels: - # encoded mode - if x + num_pixels > self.state.xsize: - # Too much data for row - num_pixels = max(0, self.state.xsize - x) - if rle4: - first_pixel = o8(byte[0] >> 4) 
- second_pixel = o8(byte[0] & 0x0F) - for index in range(num_pixels): - if index % 2 == 0: - data += first_pixel - else: - data += second_pixel - else: - data += byte * num_pixels - x += num_pixels - else: - if byte[0] == 0: - # end of line - while len(data) % self.state.xsize != 0: - data += b"\x00" - x = 0 - elif byte[0] == 1: - # end of bitmap - break - elif byte[0] == 2: - # delta - bytes_read = self.fd.read(2) - if len(bytes_read) < 2: - break - right, up = self.fd.read(2) - data += b"\x00" * (right + up * self.state.xsize) - x = len(data) % self.state.xsize - else: - # absolute mode - if rle4: - # 2 pixels per byte - byte_count = byte[0] // 2 - bytes_read = self.fd.read(byte_count) - for byte_read in bytes_read: - data += o8(byte_read >> 4) - data += o8(byte_read & 0x0F) - else: - byte_count = byte[0] - bytes_read = self.fd.read(byte_count) - data += bytes_read - if len(bytes_read) < byte_count: - break - x += byte[0] - - # align to 16-bit word boundary - if self.fd.tell() % 2 != 0: - self.fd.seek(1, os.SEEK_CUR) - rawmode = "L" if self.mode == "L" else "P" - self.set_as_raw(bytes(data), rawmode, (0, self.args[-1])) - return -1, 0 - - -# ============================================================================= -# Image plugin for the DIB format (BMP alias) -# ============================================================================= -class DibImageFile(BmpImageFile): - format = "DIB" - format_description = "Windows Bitmap" - - def _open(self) -> None: - self._bitmap() - - -# -# -------------------------------------------------------------------- -# Write BMP file - - -SAVE = { - "1": ("1", 1, 2), - "L": ("L", 8, 256), - "P": ("P", 8, 256), - "RGB": ("BGR", 24, 0), - "RGBA": ("BGRA", 32, 0), -} - - -def _dib_save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - _save(im, fp, filename, False) - - -def _save( - im: Image.Image, fp: IO[bytes], filename: str | bytes, bitmap_header: bool = True -) -> None: - try: - rawmode, bits, colors = 
SAVE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as BMP" - raise OSError(msg) from e - - info = im.encoderinfo - - dpi = info.get("dpi", (96, 96)) - - # 1 meter == 39.3701 inches - ppm = tuple(int(x * 39.3701 + 0.5) for x in dpi) - - stride = ((im.size[0] * bits + 7) // 8 + 3) & (~3) - header = 40 # or 64 for OS/2 version 2 - image = stride * im.size[1] - - if im.mode == "1": - palette = b"".join(o8(i) * 3 + b"\x00" for i in (0, 255)) - elif im.mode == "L": - palette = b"".join(o8(i) * 3 + b"\x00" for i in range(256)) - elif im.mode == "P": - palette = im.im.getpalette("RGB", "BGRX") - colors = len(palette) // 4 - else: - palette = None - - # bitmap header - if bitmap_header: - offset = 14 + header + colors * 4 - file_size = offset + image - if file_size > 2**32 - 1: - msg = "File size is too large for the BMP format" - raise ValueError(msg) - fp.write( - b"BM" # file type (magic) - + o32(file_size) # file size - + o32(0) # reserved - + o32(offset) # image data offset - ) - - # bitmap info header - fp.write( - o32(header) # info header size - + o32(im.size[0]) # width - + o32(im.size[1]) # height - + o16(1) # planes - + o16(bits) # depth - + o32(0) # compression (0=uncompressed) - + o32(image) # size of bitmap - + o32(ppm[0]) # resolution - + o32(ppm[1]) # resolution - + o32(colors) # colors used - + o32(colors) # colors important - ) - - fp.write(b"\0" * (header - 40)) # padding (for OS/2 format) - - if palette: - fp.write(palette) - - ImageFile._save( - im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, stride, -1))] - ) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(BmpImageFile.format, BmpImageFile, _accept) -Image.register_save(BmpImageFile.format, _save) - -Image.register_extension(BmpImageFile.format, ".bmp") - -Image.register_mime(BmpImageFile.format, "image/bmp") - -Image.register_decoder("bmp_rle", BmpRleDecoder) - 
-Image.register_open(DibImageFile.format, DibImageFile, _dib_accept) -Image.register_save(DibImageFile.format, _dib_save) - -Image.register_extension(DibImageFile.format, ".dib") - -Image.register_mime(DibImageFile.format, "image/bmp") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/BufrStubImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/BufrStubImagePlugin.py deleted file mode 100644 index 8c5da14f..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/BufrStubImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# BUFR stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -from typing import IO - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler: ImageFile.StubHandler | None) -> None: - """ - Install application-specific BUFR image handler. - - :param handler: Handler object. 
- """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith((b"BUFR", b"ZCZC")) - - -class BufrStubImageFile(ImageFile.StubImageFile): - format = "BUFR" - format_description = "BUFR" - - def _open(self) -> None: - if not _accept(self.fp.read(4)): - msg = "Not a BUFR file" - raise SyntaxError(msg) - - self.fp.seek(-4, os.SEEK_CUR) - - # make something up - self._mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self) -> ImageFile.StubHandler | None: - return _handler - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if _handler is None or not hasattr(_handler, "save"): - msg = "BUFR save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(BufrStubImageFile.format, BufrStubImageFile, _accept) -Image.register_save(BufrStubImageFile.format, _save) - -Image.register_extension(BufrStubImageFile.format, ".bufr") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ContainerIO.py b/pptx-env/lib/python3.12/site-packages/PIL/ContainerIO.py deleted file mode 100644 index ec9e66c7..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ContainerIO.py +++ /dev/null @@ -1,173 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a class to read from a container file -# -# History: -# 1995-06-18 fl Created -# 1995-09-07 fl Added readline(), readlines() -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1995 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import io -from collections.abc import Iterable -from typing import IO, AnyStr, NoReturn - - -class ContainerIO(IO[AnyStr]): - """ - A file object that provides read access to a part of an existing - file (for example a TAR file). - """ - - def __init__(self, file: IO[AnyStr], offset: int, length: int) -> None: - """ - Create file object. - - :param file: Existing file. - :param offset: Start of region, in bytes. - :param length: Size of region, in bytes. - """ - self.fh: IO[AnyStr] = file - self.pos = 0 - self.offset = offset - self.length = length - self.fh.seek(offset) - - ## - # Always false. - - def isatty(self) -> bool: - return False - - def seekable(self) -> bool: - return True - - def seek(self, offset: int, mode: int = io.SEEK_SET) -> int: - """ - Move file pointer. - - :param offset: Offset in bytes. - :param mode: Starting position. Use 0 for beginning of region, 1 - for current offset, and 2 for end of region. You cannot move - the pointer outside the defined region. - :returns: Offset from start of region, in bytes. - """ - if mode == 1: - self.pos = self.pos + offset - elif mode == 2: - self.pos = self.length + offset - else: - self.pos = offset - # clamp - self.pos = max(0, min(self.pos, self.length)) - self.fh.seek(self.offset + self.pos) - return self.pos - - def tell(self) -> int: - """ - Get current file pointer. - - :returns: Offset from start of region, in bytes. - """ - return self.pos - - def readable(self) -> bool: - return True - - def read(self, n: int = -1) -> AnyStr: - """ - Read data. - - :param n: Number of bytes to read. If omitted, zero or negative, - read until end of region. - :returns: An 8-bit string. 
- """ - if n > 0: - n = min(n, self.length - self.pos) - else: - n = self.length - self.pos - if n <= 0: # EOF - return b"" if "b" in self.fh.mode else "" # type: ignore[return-value] - self.pos = self.pos + n - return self.fh.read(n) - - def readline(self, n: int = -1) -> AnyStr: - """ - Read a line of text. - - :param n: Number of bytes to read. If omitted, zero or negative, - read until end of line. - :returns: An 8-bit string. - """ - s: AnyStr = b"" if "b" in self.fh.mode else "" # type: ignore[assignment] - newline_character = b"\n" if "b" in self.fh.mode else "\n" - while True: - c = self.read(1) - if not c: - break - s = s + c - if c == newline_character or len(s) == n: - break - return s - - def readlines(self, n: int | None = -1) -> list[AnyStr]: - """ - Read multiple lines of text. - - :param n: Number of lines to read. If omitted, zero, negative or None, - read until end of region. - :returns: A list of 8-bit strings. - """ - lines = [] - while True: - s = self.readline() - if not s: - break - lines.append(s) - if len(lines) == n: - break - return lines - - def writable(self) -> bool: - return False - - def write(self, b: AnyStr) -> NoReturn: - raise NotImplementedError() - - def writelines(self, lines: Iterable[AnyStr]) -> NoReturn: - raise NotImplementedError() - - def truncate(self, size: int | None = None) -> int: - raise NotImplementedError() - - def __enter__(self) -> ContainerIO[AnyStr]: - return self - - def __exit__(self, *args: object) -> None: - self.close() - - def __iter__(self) -> ContainerIO[AnyStr]: - return self - - def __next__(self) -> AnyStr: - line = self.readline() - if not line: - msg = "end of region" - raise StopIteration(msg) - return line - - def fileno(self) -> int: - return self.fh.fileno() - - def flush(self) -> None: - self.fh.flush() - - def close(self) -> None: - self.fh.close() diff --git a/pptx-env/lib/python3.12/site-packages/PIL/CurImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/CurImagePlugin.py deleted 
file mode 100644 index 9c188e08..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/CurImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Windows Cursor support for PIL -# -# notes: -# uses BmpImagePlugin.py to read the bitmap data. -# -# history: -# 96-05-27 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import BmpImagePlugin, Image -from ._binary import i16le as i16 -from ._binary import i32le as i32 - -# -# -------------------------------------------------------------------- - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"\0\0\2\0") - - -## -# Image plugin for Windows Cursor files. - - -class CurImageFile(BmpImagePlugin.BmpImageFile): - format = "CUR" - format_description = "Windows Cursor" - - def _open(self) -> None: - assert self.fp is not None - offset = self.fp.tell() - - # check magic - s = self.fp.read(6) - if not _accept(s): - msg = "not a CUR file" - raise SyntaxError(msg) - - # pick the largest cursor in the file - m = b"" - for i in range(i16(s, 4)): - s = self.fp.read(16) - if not m: - m = s - elif s[0] > m[0] and s[1] > m[1]: - m = s - if not m: - msg = "No cursors were found" - raise TypeError(msg) - - # load as bitmap - self._bitmap(i32(m, 12) + offset) - - # patch up the bitmap height - self._size = self.size[0], self.size[1] // 2 - self.tile = [self.tile[0]._replace(extents=(0, 0) + self.size)] - - -# -# -------------------------------------------------------------------- - -Image.register_open(CurImageFile.format, CurImageFile, _accept) - -Image.register_extension(CurImageFile.format, ".cur") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/DcxImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/DcxImagePlugin.py deleted file mode 100644 index aea661b9..00000000 --- 
a/pptx-env/lib/python3.12/site-packages/PIL/DcxImagePlugin.py +++ /dev/null @@ -1,83 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# DCX file handling -# -# DCX is a container file format defined by Intel, commonly used -# for fax applications. Each DCX file consists of a directory -# (a list of file offsets) followed by a set of (usually 1-bit) -# PCX files. -# -# History: -# 1995-09-09 fl Created -# 1996-03-20 fl Properly derived from PcxImageFile. -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 2002-07-30 fl Fixed file handling -# -# Copyright (c) 1997-98 by Secret Labs AB. -# Copyright (c) 1995-96 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image -from ._binary import i32le as i32 -from ._util import DeferredError -from .PcxImagePlugin import PcxImageFile - -MAGIC = 0x3ADE68B1 # QUIZ: what's this value, then? - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 4 and i32(prefix) == MAGIC - - -## -# Image plugin for the Intel DCX format. 
- - -class DcxImageFile(PcxImageFile): - format = "DCX" - format_description = "Intel DCX" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - # Header - s = self.fp.read(4) - if not _accept(s): - msg = "not a DCX file" - raise SyntaxError(msg) - - # Component directory - self._offset = [] - for i in range(1024): - offset = i32(self.fp.read(4)) - if not offset: - break - self._offset.append(offset) - - self._fp = self.fp - self.frame = -1 - self.n_frames = len(self._offset) - self.is_animated = self.n_frames > 1 - self.seek(0) - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self.frame = frame - self.fp = self._fp - self.fp.seek(self._offset[frame]) - PcxImageFile._open(self) - - def tell(self) -> int: - return self.frame - - -Image.register_open(DcxImageFile.format, DcxImageFile, _accept) - -Image.register_extension(DcxImageFile.format, ".dcx") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/DdsImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/DdsImagePlugin.py deleted file mode 100644 index f9ade18f..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/DdsImagePlugin.py +++ /dev/null @@ -1,624 +0,0 @@ -""" -A Pillow plugin for .dds files (S3TC-compressed aka DXTC) -Jerome Leclanche - -Documentation: -https://web.archive.org/web/20170802060935/http://oss.sgi.com/projects/ogl-sample/registry/EXT/texture_compression_s3tc.txt - -The contents of this file are hereby released in the public domain (CC0) -Full text of the CC0 license: -https://creativecommons.org/publicdomain/zero/1.0/ -""" - -from __future__ import annotations - -import io -import struct -import sys -from enum import IntEnum, IntFlag -from typing import IO - -from . 
import Image, ImageFile, ImagePalette -from ._binary import i32le as i32 -from ._binary import o8 -from ._binary import o32le as o32 - -# Magic ("DDS ") -DDS_MAGIC = 0x20534444 - - -# DDS flags -class DDSD(IntFlag): - CAPS = 0x1 - HEIGHT = 0x2 - WIDTH = 0x4 - PITCH = 0x8 - PIXELFORMAT = 0x1000 - MIPMAPCOUNT = 0x20000 - LINEARSIZE = 0x80000 - DEPTH = 0x800000 - - -# DDS caps -class DDSCAPS(IntFlag): - COMPLEX = 0x8 - TEXTURE = 0x1000 - MIPMAP = 0x400000 - - -class DDSCAPS2(IntFlag): - CUBEMAP = 0x200 - CUBEMAP_POSITIVEX = 0x400 - CUBEMAP_NEGATIVEX = 0x800 - CUBEMAP_POSITIVEY = 0x1000 - CUBEMAP_NEGATIVEY = 0x2000 - CUBEMAP_POSITIVEZ = 0x4000 - CUBEMAP_NEGATIVEZ = 0x8000 - VOLUME = 0x200000 - - -# Pixel Format -class DDPF(IntFlag): - ALPHAPIXELS = 0x1 - ALPHA = 0x2 - FOURCC = 0x4 - PALETTEINDEXED8 = 0x20 - RGB = 0x40 - LUMINANCE = 0x20000 - - -# dxgiformat.h -class DXGI_FORMAT(IntEnum): - UNKNOWN = 0 - R32G32B32A32_TYPELESS = 1 - R32G32B32A32_FLOAT = 2 - R32G32B32A32_UINT = 3 - R32G32B32A32_SINT = 4 - R32G32B32_TYPELESS = 5 - R32G32B32_FLOAT = 6 - R32G32B32_UINT = 7 - R32G32B32_SINT = 8 - R16G16B16A16_TYPELESS = 9 - R16G16B16A16_FLOAT = 10 - R16G16B16A16_UNORM = 11 - R16G16B16A16_UINT = 12 - R16G16B16A16_SNORM = 13 - R16G16B16A16_SINT = 14 - R32G32_TYPELESS = 15 - R32G32_FLOAT = 16 - R32G32_UINT = 17 - R32G32_SINT = 18 - R32G8X24_TYPELESS = 19 - D32_FLOAT_S8X24_UINT = 20 - R32_FLOAT_X8X24_TYPELESS = 21 - X32_TYPELESS_G8X24_UINT = 22 - R10G10B10A2_TYPELESS = 23 - R10G10B10A2_UNORM = 24 - R10G10B10A2_UINT = 25 - R11G11B10_FLOAT = 26 - R8G8B8A8_TYPELESS = 27 - R8G8B8A8_UNORM = 28 - R8G8B8A8_UNORM_SRGB = 29 - R8G8B8A8_UINT = 30 - R8G8B8A8_SNORM = 31 - R8G8B8A8_SINT = 32 - R16G16_TYPELESS = 33 - R16G16_FLOAT = 34 - R16G16_UNORM = 35 - R16G16_UINT = 36 - R16G16_SNORM = 37 - R16G16_SINT = 38 - R32_TYPELESS = 39 - D32_FLOAT = 40 - R32_FLOAT = 41 - R32_UINT = 42 - R32_SINT = 43 - R24G8_TYPELESS = 44 - D24_UNORM_S8_UINT = 45 - R24_UNORM_X8_TYPELESS = 46 - X24_TYPELESS_G8_UINT = 
47
-    R8G8_TYPELESS = 48
-    R8G8_UNORM = 49
-    R8G8_UINT = 50
-    R8G8_SNORM = 51
-    R8G8_SINT = 52
-    R16_TYPELESS = 53
-    R16_FLOAT = 54
-    D16_UNORM = 55
-    R16_UNORM = 56
-    R16_UINT = 57
-    R16_SNORM = 58
-    R16_SINT = 59
-    R8_TYPELESS = 60
-    R8_UNORM = 61
-    R8_UINT = 62
-    R8_SNORM = 63
-    R8_SINT = 64
-    A8_UNORM = 65
-    R1_UNORM = 66
-    R9G9B9E5_SHAREDEXP = 67
-    R8G8_B8G8_UNORM = 68
-    G8R8_G8B8_UNORM = 69
-    BC1_TYPELESS = 70
-    BC1_UNORM = 71
-    BC1_UNORM_SRGB = 72
-    BC2_TYPELESS = 73
-    BC2_UNORM = 74
-    BC2_UNORM_SRGB = 75
-    BC3_TYPELESS = 76
-    BC3_UNORM = 77
-    BC3_UNORM_SRGB = 78
-    BC4_TYPELESS = 79
-    BC4_UNORM = 80
-    BC4_SNORM = 81
-    BC5_TYPELESS = 82
-    BC5_UNORM = 83
-    BC5_SNORM = 84
-    B5G6R5_UNORM = 85
-    B5G5R5A1_UNORM = 86
-    B8G8R8A8_UNORM = 87
-    B8G8R8X8_UNORM = 88
-    R10G10B10_XR_BIAS_A2_UNORM = 89
-    B8G8R8A8_TYPELESS = 90
-    B8G8R8A8_UNORM_SRGB = 91
-    B8G8R8X8_TYPELESS = 92
-    B8G8R8X8_UNORM_SRGB = 93
-    BC6H_TYPELESS = 94
-    BC6H_UF16 = 95
-    BC6H_SF16 = 96
-    BC7_TYPELESS = 97
-    BC7_UNORM = 98
-    BC7_UNORM_SRGB = 99
-    AYUV = 100
-    Y410 = 101
-    Y416 = 102
-    NV12 = 103
-    P010 = 104
-    P016 = 105
-    OPAQUE_420 = 106
-    YUY2 = 107
-    Y210 = 108
-    Y216 = 109
-    NV11 = 110
-    AI44 = 111
-    IA44 = 112
-    P8 = 113
-    A8P8 = 114
-    B4G4R4A4_UNORM = 115
-    P208 = 130
-    V208 = 131
-    V408 = 132
-    SAMPLER_FEEDBACK_MIN_MIP_OPAQUE = 189
-    SAMPLER_FEEDBACK_MIP_REGION_USED_OPAQUE = 190
-
-
-class D3DFMT(IntEnum):
-    UNKNOWN = 0
-    R8G8B8 = 20
-    A8R8G8B8 = 21
-    X8R8G8B8 = 22
-    R5G6B5 = 23
-    X1R5G5B5 = 24
-    A1R5G5B5 = 25
-    A4R4G4B4 = 26
-    R3G3B2 = 27
-    A8 = 28
-    A8R3G3B2 = 29
-    X4R4G4B4 = 30
-    A2B10G10R10 = 31
-    A8B8G8R8 = 32
-    X8B8G8R8 = 33
-    G16R16 = 34
-    A2R10G10B10 = 35
-    A16B16G16R16 = 36
-    A8P8 = 40
-    P8 = 41
-    L8 = 50
-    A8L8 = 51
-    A4L4 = 52
-    V8U8 = 60
-    L6V5U5 = 61
-    X8L8V8U8 = 62
-    Q8W8V8U8 = 63
-    V16U16 = 64
-    A2W10V10U10 = 67
-    D16_LOCKABLE = 70
-    D32 = 71
-    D15S1 = 73
-    D24S8 = 75
-    D24X8 = 77
-    D24X4S4 = 79
-    D16 = 80
-    D32F_LOCKABLE = 82
-    D24FS8 = 83
-    D32_LOCKABLE = 84
-    S8_LOCKABLE = 85
-    L16 = 81
-    VERTEXDATA = 100
-    INDEX16 = 101
-    INDEX32 = 102
-    Q16W16V16U16 = 110
-    R16F = 111
-    G16R16F = 112
-    A16B16G16R16F = 113
-    R32F = 114
-    G32R32F = 115
-    A32B32G32R32F = 116
-    CxV8U8 = 117
-    A1 = 118
-    A2B10G10R10_XR_BIAS = 119
-    BINARYBUFFER = 199
-
-    UYVY = i32(b"UYVY")
-    R8G8_B8G8 = i32(b"RGBG")
-    YUY2 = i32(b"YUY2")
-    G8R8_G8B8 = i32(b"GRGB")
-    DXT1 = i32(b"DXT1")
-    DXT2 = i32(b"DXT2")
-    DXT3 = i32(b"DXT3")
-    DXT4 = i32(b"DXT4")
-    DXT5 = i32(b"DXT5")
-    DX10 = i32(b"DX10")
-    BC4S = i32(b"BC4S")
-    BC4U = i32(b"BC4U")
-    BC5S = i32(b"BC5S")
-    BC5U = i32(b"BC5U")
-    ATI1 = i32(b"ATI1")
-    ATI2 = i32(b"ATI2")
-    MULTI2_ARGB8 = i32(b"MET1")
-
-
-# Backward compatibility layer
-module = sys.modules[__name__]
-for item in DDSD:
-    assert item.name is not None
-    setattr(module, f"DDSD_{item.name}", item.value)
-for item1 in DDSCAPS:
-    assert item1.name is not None
-    setattr(module, f"DDSCAPS_{item1.name}", item1.value)
-for item2 in DDSCAPS2:
-    assert item2.name is not None
-    setattr(module, f"DDSCAPS2_{item2.name}", item2.value)
-for item3 in DDPF:
-    assert item3.name is not None
-    setattr(module, f"DDPF_{item3.name}", item3.value)
-
-DDS_FOURCC = DDPF.FOURCC
-DDS_RGB = DDPF.RGB
-DDS_RGBA = DDPF.RGB | DDPF.ALPHAPIXELS
-DDS_LUMINANCE = DDPF.LUMINANCE
-DDS_LUMINANCEA = DDPF.LUMINANCE | DDPF.ALPHAPIXELS
-DDS_ALPHA = DDPF.ALPHA
-DDS_PAL8 = DDPF.PALETTEINDEXED8
-
-DDS_HEADER_FLAGS_TEXTURE = DDSD.CAPS | DDSD.HEIGHT | DDSD.WIDTH | DDSD.PIXELFORMAT
-DDS_HEADER_FLAGS_MIPMAP = DDSD.MIPMAPCOUNT
-DDS_HEADER_FLAGS_VOLUME = DDSD.DEPTH
-DDS_HEADER_FLAGS_PITCH = DDSD.PITCH
-DDS_HEADER_FLAGS_LINEARSIZE = DDSD.LINEARSIZE
-
-DDS_HEIGHT = DDSD.HEIGHT
-DDS_WIDTH = DDSD.WIDTH
-
-DDS_SURFACE_FLAGS_TEXTURE = DDSCAPS.TEXTURE
-DDS_SURFACE_FLAGS_MIPMAP = DDSCAPS.COMPLEX | DDSCAPS.MIPMAP
-DDS_SURFACE_FLAGS_CUBEMAP = DDSCAPS.COMPLEX
-
-DDS_CUBEMAP_POSITIVEX = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEX
-DDS_CUBEMAP_NEGATIVEX = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEX
-DDS_CUBEMAP_POSITIVEY = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEY
-DDS_CUBEMAP_NEGATIVEY = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEY
-DDS_CUBEMAP_POSITIVEZ = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_POSITIVEZ
-DDS_CUBEMAP_NEGATIVEZ = DDSCAPS2.CUBEMAP | DDSCAPS2.CUBEMAP_NEGATIVEZ
-
-DXT1_FOURCC = D3DFMT.DXT1
-DXT3_FOURCC = D3DFMT.DXT3
-DXT5_FOURCC = D3DFMT.DXT5
-
-DXGI_FORMAT_R8G8B8A8_TYPELESS = DXGI_FORMAT.R8G8B8A8_TYPELESS
-DXGI_FORMAT_R8G8B8A8_UNORM = DXGI_FORMAT.R8G8B8A8_UNORM
-DXGI_FORMAT_R8G8B8A8_UNORM_SRGB = DXGI_FORMAT.R8G8B8A8_UNORM_SRGB
-DXGI_FORMAT_BC5_TYPELESS = DXGI_FORMAT.BC5_TYPELESS
-DXGI_FORMAT_BC5_UNORM = DXGI_FORMAT.BC5_UNORM
-DXGI_FORMAT_BC5_SNORM = DXGI_FORMAT.BC5_SNORM
-DXGI_FORMAT_BC6H_UF16 = DXGI_FORMAT.BC6H_UF16
-DXGI_FORMAT_BC6H_SF16 = DXGI_FORMAT.BC6H_SF16
-DXGI_FORMAT_BC7_TYPELESS = DXGI_FORMAT.BC7_TYPELESS
-DXGI_FORMAT_BC7_UNORM = DXGI_FORMAT.BC7_UNORM
-DXGI_FORMAT_BC7_UNORM_SRGB = DXGI_FORMAT.BC7_UNORM_SRGB
-
-
-class DdsImageFile(ImageFile.ImageFile):
-    format = "DDS"
-    format_description = "DirectDraw Surface"
-
-    def _open(self) -> None:
-        if not _accept(self.fp.read(4)):
-            msg = "not a DDS file"
-            raise SyntaxError(msg)
-        (header_size,) = struct.unpack(" None:
-        pass
-
-
-class DdsRgbDecoder(ImageFile.PyDecoder):
-    _pulls_fd = True
-
-    def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:
-        assert self.fd is not None
-        bitcount, masks = self.args
-
-        # Some masks will be padded with zeros, e.g. R 0b11 G 0b1100
-        # Calculate how many zeros each mask is padded with
-        mask_offsets = []
-        # And the maximum value of each channel without the padding
-        mask_totals = []
-        for mask in masks:
-            offset = 0
-            if mask != 0:
-                while mask >> (offset + 1) << (offset + 1) == mask:
-                    offset += 1
-            mask_offsets.append(offset)
-            mask_totals.append(mask >> offset)
-
-        data = bytearray()
-        bytecount = bitcount // 8
-        dest_length = self.state.xsize * self.state.ysize * len(masks)
-        while len(data) < dest_length:
-            value = int.from_bytes(self.fd.read(bytecount), "little")
-            for i, mask in enumerate(masks):
-                masked_value = value & mask
-                # Remove the zero padding, and scale it to 8 bits
-                data += o8(
-                    int(((masked_value >> mask_offsets[i]) / mask_totals[i]) * 255)
-                )
-        self.set_as_raw(data)
-        return -1, 0
-
-
-def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None:
-    if im.mode not in ("RGB", "RGBA", "L", "LA"):
-        msg = f"cannot write mode {im.mode} as DDS"
-        raise OSError(msg)
-
-    flags = DDSD.CAPS | DDSD.HEIGHT | DDSD.WIDTH | DDSD.PIXELFORMAT
-    bitcount = len(im.getbands()) * 8
-    pixel_format = im.encoderinfo.get("pixel_format")
-    args: tuple[int] | str
-    if pixel_format:
-        codec_name = "bcn"
-        flags |= DDSD.LINEARSIZE
-        pitch = (im.width + 3) * 4
-        rgba_mask = [0, 0, 0, 0]
-        pixel_flags = DDPF.FOURCC
-        if pixel_format == "DXT1":
-            fourcc = D3DFMT.DXT1
-            args = (1,)
-        elif pixel_format == "DXT3":
-            fourcc = D3DFMT.DXT3
-            args = (2,)
-        elif pixel_format == "DXT5":
-            fourcc = D3DFMT.DXT5
-            args = (3,)
-        else:
-            fourcc = D3DFMT.DX10
-            if pixel_format == "BC2":
-                args = (2,)
-                dxgi_format = DXGI_FORMAT.BC2_TYPELESS
-            elif pixel_format == "BC3":
-                args = (3,)
-                dxgi_format = DXGI_FORMAT.BC3_TYPELESS
-            elif pixel_format == "BC5":
-                args = (5,)
-                dxgi_format = DXGI_FORMAT.BC5_TYPELESS
-                if im.mode != "RGB":
-                    msg = "only RGB mode can be written as BC5"
-                    raise OSError(msg)
-            else:
-                msg = f"cannot write pixel format {pixel_format}"
-                raise OSError(msg)
-    else:
-        codec_name = "raw"
-        flags |= DDSD.PITCH
-        pitch = (im.width * bitcount + 7) // 8
-
-        alpha = im.mode[-1] == "A"
-        if im.mode[0] == "L":
-            pixel_flags = DDPF.LUMINANCE
-            args = im.mode
-            if alpha:
-                rgba_mask = [0x000000FF, 0x000000FF, 0x000000FF]
-            else:
-                rgba_mask = [0xFF000000, 0xFF000000, 0xFF000000]
-        else:
-            pixel_flags = DDPF.RGB
-            args = im.mode[::-1]
-            rgba_mask = [0x00FF0000, 0x0000FF00, 0x000000FF]
-
-            if alpha:
-                r, g, b, a = im.split()
-                im = Image.merge("RGBA", (a, r, g, b))
-        if alpha:
-            pixel_flags |= DDPF.ALPHAPIXELS
-        rgba_mask.append(0xFF000000 if alpha else 0)
-
-        fourcc = D3DFMT.UNKNOWN
-    fp.write(
-        o32(DDS_MAGIC)
-        + struct.pack(
-            "<7I",
-            124,  # header size
-            flags,  # flags
-            im.height,
-            im.width,
-            pitch,
-            0,  # depth
-            0,  # mipmaps
-        )
-        + struct.pack("11I", *((0,) * 11))  # reserved
-        # pfsize, pfflags, fourcc, bitcount
-        + struct.pack("<4I", 32, pixel_flags, fourcc, bitcount)
-        + struct.pack("<4I", *rgba_mask)  # dwRGBABitMask
-        + struct.pack("<5I", DDSCAPS.TEXTURE, 0, 0, 0, 0)
-    )
-    if fourcc == D3DFMT.DX10:
-        fp.write(
-            # dxgi_format, 2D resource, misc, array size, straight alpha
-            struct.pack("<5I", dxgi_format, 3, 0, 0, 1)
-        )
-    ImageFile._save(im, fp, [ImageFile._Tile(codec_name, (0, 0) + im.size, 0, args)])
-
-
-def _accept(prefix: bytes) -> bool:
-    return prefix.startswith(b"DDS ")
-
-
-Image.register_open(DdsImageFile.format, DdsImageFile, _accept)
-Image.register_decoder("dds_rgb", DdsRgbDecoder)
-Image.register_save(DdsImageFile.format, _save)
-Image.register_extension(DdsImageFile.format, ".dds")
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/EpsImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/EpsImagePlugin.py
deleted file mode 100644
index 69f3062b..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/EpsImagePlugin.py
+++ /dev/null
@@ -1,479 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# EPS file handling
-#
-# History:
-# 1995-09-01 fl   Created (0.1)
-# 1996-05-18 fl   Don't choke on "atend" fields, Ghostscript interface (0.2)
-# 1996-08-22 fl   Don't choke on floating point BoundingBox values
-# 1996-08-23 fl   Handle files from Macintosh (0.3)
-# 2001-02-17 fl   Use 're' instead of 'regex' (Python 2.1) (0.4)
-# 2003-09-07 fl   Check gs.close status (from Federico Di Gregorio) (0.5)
-# 2014-05-07 e    Handling of EPS with binary preview and fixed resolution
-#                 resizing
-#
-# Copyright (c) 1997-2003 by Secret Labs AB.
-# Copyright (c) 1995-2003 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-from __future__ import annotations
-
-import io
-import os
-import re
-import subprocess
-import sys
-import tempfile
-from typing import IO
-
-from . import Image, ImageFile
-from ._binary import i32le as i32
-
-# --------------------------------------------------------------------
-
-
-split = re.compile(r"^%%([^:]*):[ \t]*(.*)[ \t]*$")
-field = re.compile(r"^%[%!\w]([^:]*)[ \t]*$")
-
-gs_binary: str | bool | None = None
-gs_windows_binary = None
-
-
-def has_ghostscript() -> bool:
-    global gs_binary, gs_windows_binary
-    if gs_binary is None:
-        if sys.platform.startswith("win"):
-            if gs_windows_binary is None:
-                import shutil
-
-                for binary in ("gswin32c", "gswin64c", "gs"):
-                    if shutil.which(binary) is not None:
-                        gs_windows_binary = binary
-                        break
-                else:
-                    gs_windows_binary = False
-            gs_binary = gs_windows_binary
-        else:
-            try:
-                subprocess.check_call(["gs", "--version"], stdout=subprocess.DEVNULL)
-                gs_binary = "gs"
-            except OSError:
-                gs_binary = False
-    return gs_binary is not False
-
-
-def Ghostscript(
-    tile: list[ImageFile._Tile],
-    size: tuple[int, int],
-    fp: IO[bytes],
-    scale: int = 1,
-    transparency: bool = False,
-) -> Image.core.ImagingCore:
-    """Render an image using Ghostscript"""
-    global gs_binary
-    if not has_ghostscript():
-        msg = "Unable to locate Ghostscript on paths"
-        raise OSError(msg)
-    assert isinstance(gs_binary, str)
-
-    # Unpack decoder tile
-    args = tile[0].args
-    assert isinstance(args, tuple)
-    length, bbox = args
-
-    # Hack to support hi-res rendering
-    scale = int(scale) or 1
-    width = size[0] * scale
-    height = size[1] * scale
-    # resolution is dependent on bbox and size
-    res_x = 72.0 * width / (bbox[2] - bbox[0])
-    res_y = 72.0 * height / (bbox[3] - bbox[1])
-
-    out_fd, outfile = tempfile.mkstemp()
-    os.close(out_fd)
-
-    infile_temp = None
-    if hasattr(fp, "name") and os.path.exists(fp.name):
-        infile = fp.name
-    else:
-        in_fd, infile_temp = tempfile.mkstemp()
-        os.close(in_fd)
-        infile = infile_temp
-
-        # Ignore length and offset!
-        # Ghostscript can read it
-        # Copy whole file to read in Ghostscript
-        with open(infile_temp, "wb") as f:
-            # fetch length of fp
-            fp.seek(0, io.SEEK_END)
-            fsize = fp.tell()
-            # ensure start position
-            # go back
-            fp.seek(0)
-            lengthfile = fsize
-            while lengthfile > 0:
-                s = fp.read(min(lengthfile, 100 * 1024))
-                if not s:
-                    break
-                lengthfile -= len(s)
-                f.write(s)
-
-    if transparency:
-        # "RGBA"
-        device = "pngalpha"
-    else:
-        # "pnmraw" automatically chooses between
-        # PBM ("1"), PGM ("L"), and PPM ("RGB").
-        device = "pnmraw"
-
-    # Build Ghostscript command
-    command = [
-        gs_binary,
-        "-q",  # quiet mode
-        f"-g{width:d}x{height:d}",  # set output geometry (pixels)
-        f"-r{res_x:f}x{res_y:f}",  # set input DPI (dots per inch)
-        "-dBATCH",  # exit after processing
-        "-dNOPAUSE",  # don't pause between pages
-        "-dSAFER",  # safe mode
-        f"-sDEVICE={device}",
-        f"-sOutputFile={outfile}",  # output file
-        # adjust for image origin
-        "-c",
-        f"{-bbox[0]} {-bbox[1]} translate",
-        "-f",
-        infile,  # input file
-        # showpage (see https://bugs.ghostscript.com/show_bug.cgi?id=698272)
-        "-c",
-        "showpage",
-    ]
-
-    # push data through Ghostscript
-    try:
-        startupinfo = None
-        if sys.platform.startswith("win"):
-            startupinfo = subprocess.STARTUPINFO()
-            startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
-        subprocess.check_call(command, startupinfo=startupinfo)
-        with Image.open(outfile) as out_im:
-            out_im.load()
-            return out_im.im.copy()
-    finally:
-        try:
-            os.unlink(outfile)
-            if infile_temp:
-                os.unlink(infile_temp)
-        except OSError:
-            pass
-
-
-def _accept(prefix: bytes) -> bool:
-    return prefix.startswith(b"%!PS") or (
-        len(prefix) >= 4 and i32(prefix) == 0xC6D3D0C5
-    )
-
-
-##
-# Image plugin for Encapsulated PostScript. This plugin supports only
-# a few variants of this format.
-
-
-class EpsImageFile(ImageFile.ImageFile):
-    """EPS File Parser for the Python Imaging Library"""
-
-    format = "EPS"
-    format_description = "Encapsulated Postscript"
-
-    mode_map = {1: "L", 2: "LAB", 3: "RGB", 4: "CMYK"}
-
-    def _open(self) -> None:
-        (length, offset) = self._find_offset(self.fp)
-
-        # go to offset - start of "%!PS"
-        self.fp.seek(offset)
-
-        self._mode = "RGB"
-
-        # When reading header comments, the first comment is used.
-        # When reading trailer comments, the last comment is used.
-        bounding_box: list[int] | None = None
-        imagedata_size: tuple[int, int] | None = None
-
-        byte_arr = bytearray(255)
-        bytes_mv = memoryview(byte_arr)
-        bytes_read = 0
-        reading_header_comments = True
-        reading_trailer_comments = False
-        trailer_reached = False
-
-        def check_required_header_comments() -> None:
-            """
-            The EPS specification requires that some headers exist.
-            This should be checked when the header comments formally end,
-            when image data starts, or when the file ends, whichever comes first.
-            """
-            if "PS-Adobe" not in self.info:
-                msg = 'EPS header missing "%!PS-Adobe" comment'
-                raise SyntaxError(msg)
-            if "BoundingBox" not in self.info:
-                msg = 'EPS header missing "%%BoundingBox" comment'
-                raise SyntaxError(msg)
-
-        def read_comment(s: str) -> bool:
-            nonlocal bounding_box, reading_trailer_comments
-            try:
-                m = split.match(s)
-            except re.error as e:
-                msg = "not an EPS file"
-                raise SyntaxError(msg) from e
-
-            if not m:
-                return False
-
-            k, v = m.group(1, 2)
-            self.info[k] = v
-            if k == "BoundingBox":
-                if v == "(atend)":
-                    reading_trailer_comments = True
-                elif not bounding_box or (trailer_reached and reading_trailer_comments):
-                    try:
-                        # Note: The DSC spec says that BoundingBox
-                        # fields should be integers, but some drivers
-                        # put floating point values there anyway.
-                        bounding_box = [int(float(i)) for i in v.split()]
-                    except Exception:
-                        pass
-            return True
-
-        while True:
-            byte = self.fp.read(1)
-            if byte == b"":
-                # if we didn't read a byte we must be at the end of the file
-                if bytes_read == 0:
-                    if reading_header_comments:
-                        check_required_header_comments()
-                    break
-            elif byte in b"\r\n":
-                # if we read a line ending character, ignore it and parse what
-                # we have already read. If we haven't read any other characters,
-                # continue reading
-                if bytes_read == 0:
-                    continue
-            else:
-                # ASCII/hexadecimal lines in an EPS file must not exceed
-                # 255 characters, not including line ending characters
-                if bytes_read >= 255:
-                    # only enforce this for lines starting with a "%",
-                    # otherwise assume it's binary data
-                    if byte_arr[0] == ord("%"):
-                        msg = "not an EPS file"
-                        raise SyntaxError(msg)
-                    else:
-                        if reading_header_comments:
-                            check_required_header_comments()
-                            reading_header_comments = False
-                        # reset bytes_read so we can keep reading
-                        # data until the end of the line
-                        bytes_read = 0
-                byte_arr[bytes_read] = byte[0]
-                bytes_read += 1
-                continue
-
-            if reading_header_comments:
-                # Load EPS header
-
-                # if this line doesn't start with a "%",
-                # or does start with "%%EndComments",
-                # then we've reached the end of the header/comments
-                if byte_arr[0] != ord("%") or bytes_mv[:13] == b"%%EndComments":
-                    check_required_header_comments()
-                    reading_header_comments = False
-                    continue
-
-                s = str(bytes_mv[:bytes_read], "latin-1")
-                if not read_comment(s):
-                    m = field.match(s)
-                    if m:
-                        k = m.group(1)
-                        if k.startswith("PS-Adobe"):
-                            self.info["PS-Adobe"] = k[9:]
-                        else:
-                            self.info[k] = ""
-                    elif s[0] == "%":
-                        # handle non-DSC PostScript comments that some
-                        # tools mistakenly put in the Comments section
-                        pass
-                    else:
-                        msg = "bad EPS header"
-                        raise OSError(msg)
-            elif bytes_mv[:11] == b"%ImageData:":
-                # Check for an "ImageData" descriptor
-                # https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577413_pgfId-1035096
-
-                # If we've already read an "ImageData" descriptor,
-                # don't read another one.
-                if imagedata_size:
-                    bytes_read = 0
-                    continue
-
-                # Values:
-                # columns
-                # rows
-                # bit depth (1 or 8)
-                # mode (1: L, 2: LAB, 3: RGB, 4: CMYK)
-                # number of padding channels
-                # block size (number of bytes per row per channel)
-                # binary/ascii (1: binary, 2: ascii)
-                # data start identifier (the image data follows after a single line
-                # consisting only of this quoted value)
-                image_data_values = byte_arr[11:bytes_read].split(None, 7)
-                columns, rows, bit_depth, mode_id = (
-                    int(value) for value in image_data_values[:4]
-                )
-
-                if bit_depth == 1:
-                    self._mode = "1"
-                elif bit_depth == 8:
-                    try:
-                        self._mode = self.mode_map[mode_id]
-                    except ValueError:
-                        break
-                else:
-                    break
-
-                # Parse the columns and rows after checking the bit depth and mode
-                # in case the bit depth and/or mode are invalid.
-                imagedata_size = columns, rows
-            elif bytes_mv[:5] == b"%%EOF":
-                break
-            elif trailer_reached and reading_trailer_comments:
-                # Load EPS trailer
-                s = str(bytes_mv[:bytes_read], "latin-1")
-                read_comment(s)
-            elif bytes_mv[:9] == b"%%Trailer":
-                trailer_reached = True
-            elif bytes_mv[:14] == b"%%BeginBinary:":
-                bytecount = int(byte_arr[14:bytes_read])
-                self.fp.seek(bytecount, os.SEEK_CUR)
-            bytes_read = 0
-
-        # A "BoundingBox" is always required,
-        # even if an "ImageData" descriptor size exists.
-        if not bounding_box:
-            msg = "cannot determine EPS bounding box"
-            raise OSError(msg)
-
-        # An "ImageData" size takes precedence over the "BoundingBox".
-        self._size = imagedata_size or (
-            bounding_box[2] - bounding_box[0],
-            bounding_box[3] - bounding_box[1],
-        )
-
-        self.tile = [
-            ImageFile._Tile("eps", (0, 0) + self.size, offset, (length, bounding_box))
-        ]
-
-    def _find_offset(self, fp: IO[bytes]) -> tuple[int, int]:
-        s = fp.read(4)
-
-        if s == b"%!PS":
-            # for HEAD without binary preview
-            fp.seek(0, io.SEEK_END)
-            length = fp.tell()
-            offset = 0
-        elif i32(s) == 0xC6D3D0C5:
-            # FIX for: Some EPS file not handled correctly / issue #302
-            # EPS can contain binary data
-            # or start directly with latin coding
-            # more info see:
-            # https://web.archive.org/web/20160528181353/http://partners.adobe.com/public/developer/en/ps/5002.EPSF_Spec.pdf
-            s = fp.read(8)
-            offset = i32(s)
-            length = i32(s, 4)
-        else:
-            msg = "not an EPS file"
-            raise SyntaxError(msg)
-
-        return length, offset
-
-    def load(
-        self, scale: int = 1, transparency: bool = False
-    ) -> Image.core.PixelAccess | None:
-        # Load EPS via Ghostscript
-        if self.tile:
-            self.im = Ghostscript(self.tile, self.size, self.fp, scale, transparency)
-            self._mode = self.im.mode
-            self._size = self.im.size
-            self.tile = []
-        return Image.Image.load(self)
-
-    def load_seek(self, pos: int) -> None:
-        # we can't incrementally load, so force ImageFile.parser to
-        # use our custom load method by defining this method.
-        pass
-
-
-# --------------------------------------------------------------------
-
-
-def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes, eps: int = 1) -> None:
    """EPS Writer for the Python Imaging Library."""
-
-    # make sure image data is available
-    im.load()
-
-    # determine PostScript image mode
-    if im.mode == "L":
-        operator = (8, 1, b"image")
-    elif im.mode == "RGB":
-        operator = (8, 3, b"false 3 colorimage")
-    elif im.mode == "CMYK":
-        operator = (8, 4, b"false 4 colorimage")
-    else:
-        msg = "image mode is not supported"
-        raise ValueError(msg)
-
-    if eps:
-        # write EPS header
-        fp.write(b"%!PS-Adobe-3.0 EPSF-3.0\n")
-        fp.write(b"%%Creator: PIL 0.1 EpsEncode\n")
-        # fp.write("%%CreationDate: %s"...)
-        fp.write(b"%%%%BoundingBox: 0 0 %d %d\n" % im.size)
-        fp.write(b"%%Pages: 1\n")
-        fp.write(b"%%EndComments\n")
-        fp.write(b"%%Page: 1 1\n")
-        fp.write(b"%%ImageData: %d %d " % im.size)
-        fp.write(b'%d %d 0 1 1 "%s"\n' % operator)
-
-    # image header
-    fp.write(b"gsave\n")
-    fp.write(b"10 dict begin\n")
-    fp.write(b"/buf %d string def\n" % (im.size[0] * operator[1]))
-    fp.write(b"%d %d scale\n" % im.size)
-    fp.write(b"%d %d 8\n" % im.size)  # <= bits
-    fp.write(b"[%d 0 0 -%d 0 %d]\n" % (im.size[0], im.size[1], im.size[1]))
-    fp.write(b"{ currentfile buf readhexstring pop } bind\n")
-    fp.write(operator[2] + b"\n")
-    if hasattr(fp, "flush"):
-        fp.flush()
-
-    ImageFile._save(im, fp, [ImageFile._Tile("eps", (0, 0) + im.size)])
-
-    fp.write(b"\n%%%%EndBinary\n")
-    fp.write(b"grestore end\n")
-    if hasattr(fp, "flush"):
-        fp.flush()
-
-
-# --------------------------------------------------------------------
-
-
-Image.register_open(EpsImageFile.format, EpsImageFile, _accept)
-
-Image.register_save(EpsImageFile.format, _save)
-
-Image.register_extensions(EpsImageFile.format, [".ps", ".eps"])
-
-Image.register_mime(EpsImageFile.format, "application/postscript")
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ExifTags.py b/pptx-env/lib/python3.12/site-packages/PIL/ExifTags.py
deleted file mode 100644
index 2280d5ce..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/ExifTags.py
+++ /dev/null
@@ -1,382 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# EXIF tags
-#
-# Copyright (c) 2003 by Secret Labs AB
-#
-# See the README file for information on usage and redistribution.
-#
-
-"""
-This module provides constants and clear-text names for various
-well-known EXIF tags.
-"""
-from __future__ import annotations
-
-from enum import IntEnum
-
-
-class Base(IntEnum):
-    # possibly incomplete
-    InteropIndex = 0x0001
-    ProcessingSoftware = 0x000B
-    NewSubfileType = 0x00FE
-    SubfileType = 0x00FF
-    ImageWidth = 0x0100
-    ImageLength = 0x0101
-    BitsPerSample = 0x0102
-    Compression = 0x0103
-    PhotometricInterpretation = 0x0106
-    Thresholding = 0x0107
-    CellWidth = 0x0108
-    CellLength = 0x0109
-    FillOrder = 0x010A
-    DocumentName = 0x010D
-    ImageDescription = 0x010E
-    Make = 0x010F
-    Model = 0x0110
-    StripOffsets = 0x0111
-    Orientation = 0x0112
-    SamplesPerPixel = 0x0115
-    RowsPerStrip = 0x0116
-    StripByteCounts = 0x0117
-    MinSampleValue = 0x0118
-    MaxSampleValue = 0x0119
-    XResolution = 0x011A
-    YResolution = 0x011B
-    PlanarConfiguration = 0x011C
-    PageName = 0x011D
-    FreeOffsets = 0x0120
-    FreeByteCounts = 0x0121
-    GrayResponseUnit = 0x0122
-    GrayResponseCurve = 0x0123
-    T4Options = 0x0124
-    T6Options = 0x0125
-    ResolutionUnit = 0x0128
-    PageNumber = 0x0129
-    TransferFunction = 0x012D
-    Software = 0x0131
-    DateTime = 0x0132
-    Artist = 0x013B
-    HostComputer = 0x013C
-    Predictor = 0x013D
-    WhitePoint = 0x013E
-    PrimaryChromaticities = 0x013F
-    ColorMap = 0x0140
-    HalftoneHints = 0x0141
-    TileWidth = 0x0142
-    TileLength = 0x0143
-    TileOffsets = 0x0144
-    TileByteCounts = 0x0145
-    SubIFDs = 0x014A
-    InkSet = 0x014C
-    InkNames = 0x014D
-    NumberOfInks = 0x014E
-    DotRange = 0x0150
-    TargetPrinter = 0x0151
-    ExtraSamples = 0x0152
-    SampleFormat = 0x0153
-    SMinSampleValue = 0x0154
-    SMaxSampleValue = 0x0155
-    TransferRange = 0x0156
-    ClipPath = 0x0157
-    XClipPathUnits = 0x0158
-    YClipPathUnits = 0x0159
-    Indexed = 0x015A
-    JPEGTables = 0x015B
-    OPIProxy = 0x015F
-    JPEGProc = 0x0200
-    JpegIFOffset = 0x0201
-    JpegIFByteCount = 0x0202
-    JpegRestartInterval = 0x0203
-    JpegLosslessPredictors = 0x0205
-    JpegPointTransforms = 0x0206
-    JpegQTables = 0x0207
-    JpegDCTables = 0x0208
-    JpegACTables = 0x0209
-    YCbCrCoefficients = 0x0211
-    YCbCrSubSampling = 0x0212
-    YCbCrPositioning = 0x0213
-    ReferenceBlackWhite = 0x0214
-    XMLPacket = 0x02BC
-    RelatedImageFileFormat = 0x1000
-    RelatedImageWidth = 0x1001
-    RelatedImageLength = 0x1002
-    Rating = 0x4746
-    RatingPercent = 0x4749
-    ImageID = 0x800D
-    CFARepeatPatternDim = 0x828D
-    BatteryLevel = 0x828F
-    Copyright = 0x8298
-    ExposureTime = 0x829A
-    FNumber = 0x829D
-    IPTCNAA = 0x83BB
-    ImageResources = 0x8649
-    ExifOffset = 0x8769
-    InterColorProfile = 0x8773
-    ExposureProgram = 0x8822
-    SpectralSensitivity = 0x8824
-    GPSInfo = 0x8825
-    ISOSpeedRatings = 0x8827
-    OECF = 0x8828
-    Interlace = 0x8829
-    TimeZoneOffset = 0x882A
-    SelfTimerMode = 0x882B
-    SensitivityType = 0x8830
-    StandardOutputSensitivity = 0x8831
-    RecommendedExposureIndex = 0x8832
-    ISOSpeed = 0x8833
-    ISOSpeedLatitudeyyy = 0x8834
-    ISOSpeedLatitudezzz = 0x8835
-    ExifVersion = 0x9000
-    DateTimeOriginal = 0x9003
-    DateTimeDigitized = 0x9004
-    OffsetTime = 0x9010
-    OffsetTimeOriginal = 0x9011
-    OffsetTimeDigitized = 0x9012
-    ComponentsConfiguration = 0x9101
-    CompressedBitsPerPixel = 0x9102
-    ShutterSpeedValue = 0x9201
-    ApertureValue = 0x9202
-    BrightnessValue = 0x9203
-    ExposureBiasValue = 0x9204
-    MaxApertureValue = 0x9205
-    SubjectDistance = 0x9206
-    MeteringMode = 0x9207
-    LightSource = 0x9208
-    Flash = 0x9209
-    FocalLength = 0x920A
-    Noise = 0x920D
-    ImageNumber = 0x9211
-    SecurityClassification = 0x9212
-    ImageHistory = 0x9213
-    TIFFEPStandardID = 0x9216
-    MakerNote = 0x927C
-    UserComment = 0x9286
-    SubsecTime = 0x9290
-    SubsecTimeOriginal = 0x9291
-    SubsecTimeDigitized = 0x9292
-    AmbientTemperature = 0x9400
-    Humidity = 0x9401
-    Pressure = 0x9402
-    WaterDepth = 0x9403
-    Acceleration = 0x9404
-    CameraElevationAngle = 0x9405
-    XPTitle = 0x9C9B
-    XPComment = 0x9C9C
-    XPAuthor = 0x9C9D
-    XPKeywords = 0x9C9E
-    XPSubject = 0x9C9F
-    FlashPixVersion = 0xA000
-    ColorSpace = 0xA001
-    ExifImageWidth = 0xA002
-    ExifImageHeight = 0xA003
-    RelatedSoundFile = 0xA004
-    ExifInteroperabilityOffset = 0xA005
-    FlashEnergy = 0xA20B
-    SpatialFrequencyResponse = 0xA20C
-    FocalPlaneXResolution = 0xA20E
-    FocalPlaneYResolution = 0xA20F
-    FocalPlaneResolutionUnit = 0xA210
-    SubjectLocation = 0xA214
-    ExposureIndex = 0xA215
-    SensingMethod = 0xA217
-    FileSource = 0xA300
-    SceneType = 0xA301
-    CFAPattern = 0xA302
-    CustomRendered = 0xA401
-    ExposureMode = 0xA402
-    WhiteBalance = 0xA403
-    DigitalZoomRatio = 0xA404
-    FocalLengthIn35mmFilm = 0xA405
-    SceneCaptureType = 0xA406
-    GainControl = 0xA407
-    Contrast = 0xA408
-    Saturation = 0xA409
-    Sharpness = 0xA40A
-    DeviceSettingDescription = 0xA40B
-    SubjectDistanceRange = 0xA40C
-    ImageUniqueID = 0xA420
-    CameraOwnerName = 0xA430
-    BodySerialNumber = 0xA431
-    LensSpecification = 0xA432
-    LensMake = 0xA433
-    LensModel = 0xA434
-    LensSerialNumber = 0xA435
-    CompositeImage = 0xA460
-    CompositeImageCount = 0xA461
-    CompositeImageExposureTimes = 0xA462
-    Gamma = 0xA500
-    PrintImageMatching = 0xC4A5
-    DNGVersion = 0xC612
-    DNGBackwardVersion = 0xC613
-    UniqueCameraModel = 0xC614
-    LocalizedCameraModel = 0xC615
-    CFAPlaneColor = 0xC616
-    CFALayout = 0xC617
-    LinearizationTable = 0xC618
-    BlackLevelRepeatDim = 0xC619
-    BlackLevel = 0xC61A
-    BlackLevelDeltaH = 0xC61B
-    BlackLevelDeltaV = 0xC61C
-    WhiteLevel = 0xC61D
-    DefaultScale = 0xC61E
-    DefaultCropOrigin = 0xC61F
-    DefaultCropSize = 0xC620
-    ColorMatrix1 = 0xC621
-    ColorMatrix2 = 0xC622
-    CameraCalibration1 = 0xC623
-    CameraCalibration2 = 0xC624
-    ReductionMatrix1 = 0xC625
-    ReductionMatrix2 = 0xC626
-    AnalogBalance = 0xC627
-    AsShotNeutral = 0xC628
-    AsShotWhiteXY = 0xC629
-    BaselineExposure = 0xC62A
-    BaselineNoise = 0xC62B
-    BaselineSharpness = 0xC62C
-    BayerGreenSplit = 0xC62D
-    LinearResponseLimit = 0xC62E
-    CameraSerialNumber = 0xC62F
-    LensInfo = 0xC630
-    ChromaBlurRadius = 0xC631
-    AntiAliasStrength = 0xC632
-    ShadowScale = 0xC633
-    DNGPrivateData = 0xC634
-    MakerNoteSafety = 0xC635
-    CalibrationIlluminant1 = 0xC65A
-    CalibrationIlluminant2 = 0xC65B
-    BestQualityScale = 0xC65C
-    RawDataUniqueID = 0xC65D
-    OriginalRawFileName = 0xC68B
-    OriginalRawFileData = 0xC68C
-    ActiveArea = 0xC68D
-    MaskedAreas = 0xC68E
-    AsShotICCProfile = 0xC68F
-    AsShotPreProfileMatrix = 0xC690
-    CurrentICCProfile = 0xC691
-    CurrentPreProfileMatrix = 0xC692
-    ColorimetricReference = 0xC6BF
-    CameraCalibrationSignature = 0xC6F3
-    ProfileCalibrationSignature = 0xC6F4
-    AsShotProfileName = 0xC6F6
-    NoiseReductionApplied = 0xC6F7
-    ProfileName = 0xC6F8
-    ProfileHueSatMapDims = 0xC6F9
-    ProfileHueSatMapData1 = 0xC6FA
-    ProfileHueSatMapData2 = 0xC6FB
-    ProfileToneCurve = 0xC6FC
-    ProfileEmbedPolicy = 0xC6FD
-    ProfileCopyright = 0xC6FE
-    ForwardMatrix1 = 0xC714
-    ForwardMatrix2 = 0xC715
-    PreviewApplicationName = 0xC716
-    PreviewApplicationVersion = 0xC717
-    PreviewSettingsName = 0xC718
-    PreviewSettingsDigest = 0xC719
-    PreviewColorSpace = 0xC71A
-    PreviewDateTime = 0xC71B
-    RawImageDigest = 0xC71C
-    OriginalRawFileDigest = 0xC71D
-    SubTileBlockSize = 0xC71E
-    RowInterleaveFactor = 0xC71F
-    ProfileLookTableDims = 0xC725
-    ProfileLookTableData = 0xC726
-    OpcodeList1 = 0xC740
-    OpcodeList2 = 0xC741
-    OpcodeList3 = 0xC74E
-    NoiseProfile = 0xC761
-
-
-"""Maps EXIF tags to tag names."""
-TAGS = {
-    **{i.value: i.name for i in Base},
-    0x920C: "SpatialFrequencyResponse",
-    0x9214: "SubjectLocation",
-    0x9215: "ExposureIndex",
-    0x828E: "CFAPattern",
-    0x920B: "FlashEnergy",
-    0x9216: "TIFF/EPStandardID",
-}
-
-
-class GPS(IntEnum):
-    GPSVersionID = 0x00
-    GPSLatitudeRef = 0x01
-    GPSLatitude = 0x02
-    GPSLongitudeRef = 0x03
-    GPSLongitude = 0x04
-    GPSAltitudeRef = 0x05
-    GPSAltitude = 0x06
-    GPSTimeStamp = 0x07
-    GPSSatellites = 0x08
-    GPSStatus = 0x09
-    GPSMeasureMode = 0x0A
-    GPSDOP = 0x0B
-    GPSSpeedRef = 0x0C
-    GPSSpeed = 0x0D
-    GPSTrackRef = 0x0E
-    GPSTrack = 0x0F
-    GPSImgDirectionRef = 0x10
-    GPSImgDirection = 0x11
-    GPSMapDatum = 0x12
-    GPSDestLatitudeRef = 0x13
-    GPSDestLatitude = 0x14
-    GPSDestLongitudeRef = 0x15
-    GPSDestLongitude = 0x16
-    GPSDestBearingRef = 0x17
-    GPSDestBearing = 0x18
-    GPSDestDistanceRef = 0x19
-    GPSDestDistance = 0x1A
-    GPSProcessingMethod = 0x1B
-    GPSAreaInformation = 0x1C
-    GPSDateStamp = 0x1D
-    GPSDifferential = 0x1E
-    GPSHPositioningError = 0x1F
-
-
-"""Maps EXIF GPS tags to tag names."""
-GPSTAGS = {i.value: i.name for i in GPS}
-
-
-class Interop(IntEnum):
-    InteropIndex = 0x0001
-    InteropVersion = 0x0002
-    RelatedImageFileFormat = 0x1000
-    RelatedImageWidth = 0x1001
-    RelatedImageHeight = 0x1002
-
-
-class IFD(IntEnum):
-    Exif = 0x8769
-    GPSInfo = 0x8825
-    MakerNote = 0x927C
-    Makernote = 0x927C  # Deprecated
-    Interop = 0xA005
-    IFD1 = -1
-
-
-class LightSource(IntEnum):
-    Unknown = 0x00
-    Daylight = 0x01
-    Fluorescent = 0x02
-    Tungsten = 0x03
-    Flash = 0x04
-    Fine = 0x09
-    Cloudy = 0x0A
-    Shade = 0x0B
-    DaylightFluorescent = 0x0C
-    DayWhiteFluorescent = 0x0D
-    CoolWhiteFluorescent = 0x0E
-    WhiteFluorescent = 0x0F
-    StandardLightA = 0x11
-    StandardLightB = 0x12
-    StandardLightC = 0x13
-    D55 = 0x14
-    D65 = 0x15
-    D75 = 0x16
-    D50 = 0x17
-    ISO = 0x18
-    Other = 0xFF
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/FitsImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/FitsImagePlugin.py
deleted file mode 100644
index a3fdc0ef..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/FitsImagePlugin.py
+++ /dev/null
@@ -1,152 +0,0 @@
-#
-# The Python Imaging Library
-# $Id$
-#
-# FITS file handling
-#
-# Copyright (c) 1998-2003 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-from __future__ import annotations
-
-import gzip
-import math
-
-from . import Image, ImageFile
-
-
-def _accept(prefix: bytes) -> bool:
-    return prefix.startswith(b"SIMPLE")
-
-
-class FitsImageFile(ImageFile.ImageFile):
-    format = "FITS"
-    format_description = "FITS"
-
-    def _open(self) -> None:
-        assert self.fp is not None
-
-        headers: dict[bytes, bytes] = {}
-        header_in_progress = False
-        decoder_name = ""
-        while True:
-            header = self.fp.read(80)
-            if not header:
-                msg = "Truncated FITS file"
-                raise OSError(msg)
-            keyword = header[:8].strip()
-            if keyword in (b"SIMPLE", b"XTENSION"):
-                header_in_progress = True
-            elif headers and not header_in_progress:
-                # This is now a data unit
-                break
-            elif keyword == b"END":
-                # Seek to the end of the header unit
-                self.fp.seek(math.ceil(self.fp.tell() / 2880) * 2880)
-                if not decoder_name:
-                    decoder_name, offset, args = self._parse_headers(headers)
-
-                header_in_progress = False
-                continue
-
-            if decoder_name:
-                # Keep going to read past the headers
-                continue
-
-            value = header[8:].split(b"/")[0].strip()
-            if value.startswith(b"="):
-                value = value[1:].strip()
-            if not headers and (not _accept(keyword) or value != b"T"):
-                msg = "Not a FITS file"
-                raise SyntaxError(msg)
-            headers[keyword] = value
-
-        if not decoder_name:
-            msg = "No image data"
-            raise ValueError(msg)
-
-        offset += self.fp.tell() - 80
-        self.tile = [ImageFile._Tile(decoder_name, (0, 0) + self.size, offset, args)]
-
-    def _get_size(
-        self, headers: dict[bytes, bytes], prefix: bytes
-    ) -> tuple[int, int] | None:
-        naxis = int(headers[prefix + b"NAXIS"])
-        if naxis == 0:
-            return None
-
-        if naxis == 1:
-            return 1, int(headers[prefix + b"NAXIS1"])
-        else:
-            return int(headers[prefix + b"NAXIS1"]), int(headers[prefix + b"NAXIS2"])
-
-    def _parse_headers(
-        self, headers: dict[bytes, bytes]
-    ) -> tuple[str, int, tuple[str | int, ...]]:
-        prefix = b""
-        decoder_name = "raw"
-        offset = 0
-        if (
-            headers.get(b"XTENSION") == b"'BINTABLE'"
-            and headers.get(b"ZIMAGE") == b"T"
-            and headers[b"ZCMPTYPE"] == b"'GZIP_1 '"
-        ):
-            no_prefix_size = self._get_size(headers, prefix) or (0, 0)
-            number_of_bits = int(headers[b"BITPIX"])
-            offset = no_prefix_size[0] * no_prefix_size[1] * (number_of_bits // 8)
-
-            prefix = b"Z"
-            decoder_name = "fits_gzip"
-
-        size = self._get_size(headers, prefix)
-        if not size:
-            return "", 0, ()
-
-        self._size = size
-
-        number_of_bits = int(headers[prefix + b"BITPIX"])
-        if number_of_bits == 8:
-            self._mode = "L"
-        elif number_of_bits == 16:
-            self._mode = "I;16"
-        elif number_of_bits == 32:
-            self._mode = "I"
-        elif number_of_bits in (-32, -64):
-            self._mode = "F"
-
-        args: tuple[str | int, ...]
-        if decoder_name == "raw":
-            args = (self.mode, 0, -1)
-        else:
-            args = (number_of_bits,)
-        return decoder_name, offset, args
-
-
-class FitsGzipDecoder(ImageFile.PyDecoder):
-    _pulls_fd = True
-
-    def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]:
-        assert self.fd is not None
-        value = gzip.decompress(self.fd.read())
-
-        rows = []
-        offset = 0
-        number_of_bits = min(self.args[0] // 8, 4)
-        for y in range(self.state.ysize):
-            row = bytearray()
-            for x in range(self.state.xsize):
-                row += value[offset + (4 - number_of_bits) : offset + 4]
-                offset += 4
-            rows.append(row)
-        self.set_as_raw(bytes([pixel for row in rows[::-1] for pixel in row]))
-        return -1, 0
-
-
-# --------------------------------------------------------------------
-# Registry
-
-Image.register_open(FitsImageFile.format, FitsImageFile, _accept)
-Image.register_decoder("fits_gzip", FitsGzipDecoder)
-
-Image.register_extensions(FitsImageFile.format, [".fit", ".fits"])
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/FliImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/FliImagePlugin.py
deleted file mode 100644
index da1e8e95..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/FliImagePlugin.py
+++ /dev/null
@@ -1,184 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$ -# -# FLI/FLC file handling. -# -# History: -# 95-09-01 fl Created -# 97-01-03 fl Fixed parser, setup decoder tile -# 98-07-15 fl Renamed offset attribute to avoid name clash -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1995-97. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 -from ._util import DeferredError - -# -# decoder - - -def _accept(prefix: bytes) -> bool: - return ( - len(prefix) >= 16 - and i16(prefix, 4) in [0xAF11, 0xAF12] - and i16(prefix, 14) in [0, 3] # flags - ) - - -## -# Image plugin for the FLI/FLC animation format. Use the seek -# method to load individual frames. - - -class FliImageFile(ImageFile.ImageFile): - format = "FLI" - format_description = "Autodesk FLI/FLC Animation" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - # HEAD - assert self.fp is not None - s = self.fp.read(128) - if not ( - _accept(s) - and s[20:22] == b"\x00" * 2 - and s[42:80] == b"\x00" * 38 - and s[88:] == b"\x00" * 40 - ): - msg = "not an FLI/FLC file" - raise SyntaxError(msg) - - # frames - self.n_frames = i16(s, 6) - self.is_animated = self.n_frames > 1 - - # image characteristics - self._mode = "P" - self._size = i16(s, 8), i16(s, 10) - - # animation speed - duration = i32(s, 16) - magic = i16(s, 4) - if magic == 0xAF11: - duration = (duration * 1000) // 70 - self.info["duration"] = duration - - # look for palette - palette = [(a, a, a) for a in range(256)] - - s = self.fp.read(16) - - self.__offset = 128 - - if i16(s, 4) == 0xF100: - # prefix chunk; ignore it - self.fp.seek(self.__offset + i32(s)) - s = self.fp.read(16) - - if i16(s, 4) == 0xF1FA: - # look for palette chunk - number_of_subchunks = i16(s, 6) - chunk_size: int | None = None - for _ in range(number_of_subchunks): - if 
chunk_size is not None: - self.fp.seek(chunk_size - 6, os.SEEK_CUR) - s = self.fp.read(6) - chunk_type = i16(s, 4) - if chunk_type in (4, 11): - self._palette(palette, 2 if chunk_type == 11 else 0) - break - chunk_size = i32(s) - if not chunk_size: - break - - self.palette = ImagePalette.raw( - "RGB", b"".join(o8(r) + o8(g) + o8(b) for (r, g, b) in palette) - ) - - # set things up to decode first frame - self.__frame = -1 - self._fp = self.fp - self.__rewind = self.fp.tell() - self.seek(0) - - def _palette(self, palette: list[tuple[int, int, int]], shift: int) -> None: - # load palette - - i = 0 - assert self.fp is not None - for e in range(i16(self.fp.read(2))): - s = self.fp.read(2) - i = i + s[0] - n = s[1] - if n == 0: - n = 256 - s = self.fp.read(n * 3) - for n in range(0, len(s), 3): - r = s[n] << shift - g = s[n + 1] << shift - b = s[n + 2] << shift - palette[i] = (r, g, b) - i += 1 - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - if frame < self.__frame: - self._seek(0) - - for f in range(self.__frame + 1, frame + 1): - self._seek(f) - - def _seek(self, frame: int) -> None: - if isinstance(self._fp, DeferredError): - raise self._fp.ex - if frame == 0: - self.__frame = -1 - self._fp.seek(self.__rewind) - self.__offset = 128 - else: - # ensure that the previous frame was loaded - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - self.__frame = frame - - # move to next frame - self.fp = self._fp - self.fp.seek(self.__offset) - - s = self.fp.read(4) - if not s: - msg = "missing frame size" - raise EOFError(msg) - - framesize = i32(s) - - self.decodermaxblock = framesize - self.tile = [ImageFile._Tile("fli", (0, 0) + self.size, self.__offset)] - - self.__offset += framesize - - def tell(self) -> int: - return self.__frame - - -# -# registry - -Image.register_open(FliImageFile.format, FliImageFile, _accept) - -Image.register_extensions(FliImageFile.format, [".fli", 
".flc"]) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/FontFile.py b/pptx-env/lib/python3.12/site-packages/PIL/FontFile.py deleted file mode 100644 index 1e0c1c16..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/FontFile.py +++ /dev/null @@ -1,134 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# base class for raster font file parsers -# -# history: -# 1997-06-05 fl created -# 1997-08-19 fl restrict image width -# -# Copyright (c) 1997-1998 by Secret Labs AB -# Copyright (c) 1997-1998 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -from typing import BinaryIO - -from . import Image, _binary - -WIDTH = 800 - - -def puti16( - fp: BinaryIO, values: tuple[int, int, int, int, int, int, int, int, int, int] -) -> None: - """Write network order (big-endian) 16-bit sequence""" - for v in values: - if v < 0: - v += 65536 - fp.write(_binary.o16be(v)) - - -class FontFile: - """Base class for raster font file handlers.""" - - bitmap: Image.Image | None = None - - def __init__(self) -> None: - self.info: dict[bytes, bytes | int] = {} - self.glyph: list[ - tuple[ - tuple[int, int], - tuple[int, int, int, int], - tuple[int, int, int, int], - Image.Image, - ] - | None - ] = [None] * 256 - - def __getitem__(self, ix: int) -> ( - tuple[ - tuple[int, int], - tuple[int, int, int, int], - tuple[int, int, int, int], - Image.Image, - ] - | None - ): - return self.glyph[ix] - - def compile(self) -> None: - """Create metrics and bitmap""" - - if self.bitmap: - return - - # create bitmap large enough to hold all data - h = w = maxwidth = 0 - lines = 1 - for glyph in self.glyph: - if glyph: - d, dst, src, im = glyph - h = max(h, src[3] - src[1]) - w = w + (src[2] - src[0]) - if w > WIDTH: - lines += 1 - w = src[2] - src[0] - maxwidth = max(maxwidth, w) - - xsize = maxwidth - ysize = lines * h - - if xsize == 0 and ysize == 0: - return - - self.ysize = h - - # paste glyphs into 
bitmap - self.bitmap = Image.new("1", (xsize, ysize)) - self.metrics: list[ - tuple[tuple[int, int], tuple[int, int, int, int], tuple[int, int, int, int]] - | None - ] = [None] * 256 - x = y = 0 - for i in range(256): - glyph = self[i] - if glyph: - d, dst, src, im = glyph - xx = src[2] - src[0] - x0, y0 = x, y - x = x + xx - if x > WIDTH: - x, y = 0, y + h - x0, y0 = x, y - x = xx - s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0 - self.bitmap.paste(im.crop(src), s) - self.metrics[i] = d, dst, s - - def save(self, filename: str) -> None: - """Save font""" - - self.compile() - - # font data - if not self.bitmap: - msg = "No bitmap created" - raise ValueError(msg) - self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG") - - # font metrics - with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp: - fp.write(b"PILfont\n") - fp.write(f";;;;;;{self.ysize};\n".encode("ascii")) # HACK!!! - fp.write(b"DATA\n") - for id in range(256): - m = self.metrics[id] - if not m: - puti16(fp, (0,) * 10) - else: - puti16(fp, m[0] + m[1] + m[2]) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/FpxImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/FpxImagePlugin.py deleted file mode 100644 index fd992cd9..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/FpxImagePlugin.py +++ /dev/null @@ -1,257 +0,0 @@ -# -# THIS IS WORK IN PROGRESS -# -# The Python Imaging Library. -# $Id$ -# -# FlashPix support for PIL -# -# History: -# 97-01-25 fl Created (reads uncompressed RGB images only) -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import olefile - -from . 
import Image, ImageFile -from ._binary import i32le as i32 - -# we map from colour field tuples to (mode, rawmode) descriptors -MODES = { - # opacity - (0x00007FFE,): ("A", "L"), - # monochrome - (0x00010000,): ("L", "L"), - (0x00018000, 0x00017FFE): ("RGBA", "LA"), - # photo YCC - (0x00020000, 0x00020001, 0x00020002): ("RGB", "YCC;P"), - (0x00028000, 0x00028001, 0x00028002, 0x00027FFE): ("RGBA", "YCCA;P"), - # standard RGB (NIFRGB) - (0x00030000, 0x00030001, 0x00030002): ("RGB", "RGB"), - (0x00038000, 0x00038001, 0x00038002, 0x00037FFE): ("RGBA", "RGBA"), -} - - -# -# -------------------------------------------------------------------- - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(olefile.MAGIC) - - -## -# Image plugin for the FlashPix images. - - -class FpxImageFile(ImageFile.ImageFile): - format = "FPX" - format_description = "FlashPix" - - def _open(self) -> None: - # - # read the OLE directory and see if this is a likely - # to be a FlashPix file - - try: - self.ole = olefile.OleFileIO(self.fp) - except OSError as e: - msg = "not an FPX file; invalid OLE file" - raise SyntaxError(msg) from e - - root = self.ole.root - if not root or root.clsid != "56616700-C154-11CE-8553-00AA00A1F95B": - msg = "not an FPX file; bad root CLSID" - raise SyntaxError(msg) - - self._open_index(1) - - def _open_index(self, index: int = 1) -> None: - # - # get the Image Contents Property Set - - prop = self.ole.getproperties( - [f"Data Object Store {index:06d}", "\005Image Contents"] - ) - - # size (highest resolution) - - assert isinstance(prop[0x1000002], int) - assert isinstance(prop[0x1000003], int) - self._size = prop[0x1000002], prop[0x1000003] - - size = max(self.size) - i = 1 - while size > 64: - size = size // 2 - i += 1 - self.maxid = i - 1 - - # mode. 
instead of using a single field for this, flashpix - # requires you to specify the mode for each channel in each - # resolution subimage, and leaves it to the decoder to make - # sure that they all match. for now, we'll cheat and assume - # that this is always the case. - - id = self.maxid << 16 - - s = prop[0x2000002 | id] - - if not isinstance(s, bytes) or (bands := i32(s, 4)) > 4: - msg = "Invalid number of bands" - raise OSError(msg) - - # note: for now, we ignore the "uncalibrated" flag - colors = tuple(i32(s, 8 + i * 4) & 0x7FFFFFFF for i in range(bands)) - - self._mode, self.rawmode = MODES[colors] - - # load JPEG tables, if any - self.jpeg = {} - for i in range(256): - id = 0x3000001 | (i << 16) - if id in prop: - self.jpeg[i] = prop[id] - - self._open_subimage(1, self.maxid) - - def _open_subimage(self, index: int = 1, subimage: int = 0) -> None: - # - # setup tile descriptors for a given subimage - - stream = [ - f"Data Object Store {index:06d}", - f"Resolution {subimage:04d}", - "Subimage 0000 Header", - ] - - fp = self.ole.openstream(stream) - - # skip prefix - fp.read(28) - - # header stream - s = fp.read(36) - - size = i32(s, 4), i32(s, 8) - # tilecount = i32(s, 12) - tilesize = i32(s, 16), i32(s, 20) - # channels = i32(s, 24) - offset = i32(s, 28) - length = i32(s, 32) - - if size != self.size: - msg = "subimage mismatch" - raise OSError(msg) - - # get tile descriptors - fp.seek(28 + offset) - s = fp.read(i32(s, 12) * length) - - x = y = 0 - xsize, ysize = size - xtile, ytile = tilesize - self.tile = [] - - for i in range(0, len(s), length): - x1 = min(xsize, x + xtile) - y1 = min(ysize, y + ytile) - - compression = i32(s, i + 8) - - if compression == 0: - self.tile.append( - ImageFile._Tile( - "raw", - (x, y, x1, y1), - i32(s, i) + 28, - self.rawmode, - ) - ) - - elif compression == 1: - # FIXME: the fill decoder is not implemented - self.tile.append( - ImageFile._Tile( - "fill", - (x, y, x1, y1), - i32(s, i) + 28, - (self.rawmode, s[12:16]), - ) - 
) - - elif compression == 2: - internal_color_conversion = s[14] - jpeg_tables = s[15] - rawmode = self.rawmode - - if internal_color_conversion: - # The image is stored as usual (usually YCbCr). - if rawmode == "RGBA": - # For "RGBA", data is stored as YCbCrA based on - # negative RGB. The following trick works around - # this problem : - jpegmode, rawmode = "YCbCrK", "CMYK" - else: - jpegmode = None # let the decoder decide - - else: - # The image is stored as defined by rawmode - jpegmode = rawmode - - self.tile.append( - ImageFile._Tile( - "jpeg", - (x, y, x1, y1), - i32(s, i) + 28, - (rawmode, jpegmode), - ) - ) - - # FIXME: jpeg tables are tile dependent; the prefix - # data must be placed in the tile descriptor itself! - - if jpeg_tables: - self.tile_prefix = self.jpeg[jpeg_tables] - - else: - msg = "unknown/invalid compression" - raise OSError(msg) - - x = x + xtile - if x >= xsize: - x, y = 0, y + ytile - if y >= ysize: - break # isn't really required - - self.stream = stream - self._fp = self.fp - self.fp = None - - def load(self) -> Image.core.PixelAccess | None: - if not self.fp: - self.fp = self.ole.openstream(self.stream[:2] + ["Subimage 0000 Data"]) - - return ImageFile.ImageFile.load(self) - - def close(self) -> None: - self.ole.close() - super().close() - - def __exit__(self, *args: object) -> None: - self.ole.close() - super().__exit__() - - -# -# -------------------------------------------------------------------- - - -Image.register_open(FpxImageFile.format, FpxImageFile, _accept) - -Image.register_extension(FpxImageFile.format, ".fpx") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/FtexImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/FtexImagePlugin.py deleted file mode 100644 index d60e75bb..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/FtexImagePlugin.py +++ /dev/null @@ -1,114 +0,0 @@ -""" -A Pillow loader for .ftc and .ftu files (FTEX) -Jerome Leclanche - -The contents of this file are hereby released in the 
public domain (CC0) -Full text of the CC0 license: - https://creativecommons.org/publicdomain/zero/1.0/ - -Independence War 2: Edge Of Chaos - Texture File Format - 16 October 2001 - -The textures used for 3D objects in Independence War 2: Edge Of Chaos are in a -packed custom format called FTEX. This file format uses file extensions FTC -and FTU. -* FTC files are compressed textures (using standard texture compression). -* FTU files are not compressed. -Texture File Format -The FTC and FTU texture files both use the same format. This -has the following structure: -{header} -{format_directory} -{data} -Where: -{header} = { - u32:magic, - u32:version, - u32:width, - u32:height, - u32:mipmap_count, - u32:format_count -} - -* The "magic" number is "FTEX". -* "width" and "height" are the dimensions of the texture. -* "mipmap_count" is the number of mipmaps in the texture. -* "format_count" is the number of texture formats (different versions of the -same texture) in this file. - -{format_directory} = format_count * { u32:format, u32:where } - -The format value is 0 for DXT1 compressed textures and 1 for 24-bit RGB -uncompressed textures. -The texture data for a format starts at the position "where" in the file. - -Each set of texture data in the file has the following structure: -{data} = format_count * { u32:mipmap_size, mipmap_size * { u8 } } -* "mipmap_size" is the number of bytes in that mip level. For compressed -textures this is the size of the texture data compressed with DXT1. For 24 bit -uncompressed textures, this is 3 * width * height. Following this are the image -bytes for that mipmap level. - -Note: All data is stored in little-Endian (Intel) byte order. -""" - -from __future__ import annotations - -import struct -from enum import IntEnum -from io import BytesIO - -from . 
import Image, ImageFile - -MAGIC = b"FTEX" - - -class Format(IntEnum): - DXT1 = 0 - UNCOMPRESSED = 1 - - -class FtexImageFile(ImageFile.ImageFile): - format = "FTEX" - format_description = "Texture File Format (IW2:EOC)" - - def _open(self) -> None: - if not _accept(self.fp.read(4)): - msg = "not an FTEX file" - raise SyntaxError(msg) - struct.unpack(" None: - pass - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(MAGIC) - - -Image.register_open(FtexImageFile.format, FtexImageFile, _accept) -Image.register_extensions(FtexImageFile.format, [".ftc", ".ftu"]) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GbrImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/GbrImagePlugin.py deleted file mode 100644 index d6929536..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GbrImagePlugin.py +++ /dev/null @@ -1,101 +0,0 @@ -# -# The Python Imaging Library -# -# load a GIMP brush file -# -# History: -# 96-03-14 fl Created -# 16-01-08 es Version 2 -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# Copyright (c) Eric Soroos 2016. -# -# See the README file for information on usage and redistribution. -# -# -# See https://github.com/GNOME/gimp/blob/mainline/devel-docs/gbr.txt for -# format documentation. -# -# This code Interprets version 1 and 2 .gbr files. -# Version 1 files are obsolete, and should not be used for new -# brushes. -# Version 2 files are saved by GIMP v2.8 (at least) -# Version 3 files have a format specifier of 18 for 16bit floats in -# the color depth field. This is currently unsupported by Pillow. -from __future__ import annotations - -from . import Image, ImageFile -from ._binary import i32be as i32 - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 8 and i32(prefix, 0) >= 20 and i32(prefix, 4) in (1, 2) - - -## -# Image plugin for the GIMP brush format. 
- - -class GbrImageFile(ImageFile.ImageFile): - format = "GBR" - format_description = "GIMP brush file" - - def _open(self) -> None: - header_size = i32(self.fp.read(4)) - if header_size < 20: - msg = "not a GIMP brush" - raise SyntaxError(msg) - version = i32(self.fp.read(4)) - if version not in (1, 2): - msg = f"Unsupported GIMP brush version: {version}" - raise SyntaxError(msg) - - width = i32(self.fp.read(4)) - height = i32(self.fp.read(4)) - color_depth = i32(self.fp.read(4)) - if width == 0 or height == 0: - msg = "not a GIMP brush" - raise SyntaxError(msg) - if color_depth not in (1, 4): - msg = f"Unsupported GIMP brush color depth: {color_depth}" - raise SyntaxError(msg) - - if version == 1: - comment_length = header_size - 20 - else: - comment_length = header_size - 28 - magic_number = self.fp.read(4) - if magic_number != b"GIMP": - msg = "not a GIMP brush, bad magic number" - raise SyntaxError(msg) - self.info["spacing"] = i32(self.fp.read(4)) - - self.info["comment"] = self.fp.read(comment_length)[:-1] - - if color_depth == 1: - self._mode = "L" - else: - self._mode = "RGBA" - - self._size = width, height - - # Image might not be small - Image._decompression_bomb_check(self.size) - - # Data is an uncompressed block of w * h * bytes/pixel - self._data_size = width * height * color_depth - - def load(self) -> Image.core.PixelAccess | None: - if self._im is None: - self.im = Image.core.new(self.mode, self.size) - self.frombytes(self.fp.read(self._data_size)) - return Image.Image.load(self) - - -# -# registry - - -Image.register_open(GbrImageFile.format, GbrImageFile, _accept) -Image.register_extension(GbrImageFile.format, ".gbr") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GdImageFile.py b/pptx-env/lib/python3.12/site-packages/PIL/GdImageFile.py deleted file mode 100644 index 891225ce..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GdImageFile.py +++ /dev/null @@ -1,102 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# GD file handling -# -# History: -# 1996-04-12 fl Created -# -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -""" -.. note:: - This format cannot be automatically recognized, so the - class is not registered for use with :py:func:`PIL.Image.open()`. To open a - gd file, use the :py:func:`PIL.GdImageFile.open()` function instead. - -.. warning:: - THE GD FORMAT IS NOT DESIGNED FOR DATA INTERCHANGE. This - implementation is provided for convenience and demonstrational - purposes only. -""" -from __future__ import annotations - -from typing import IO - -from . import ImageFile, ImagePalette, UnidentifiedImageError -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._typing import StrOrBytesPath - - -class GdImageFile(ImageFile.ImageFile): - """ - Image plugin for the GD uncompressed format. Note that this format - is not supported by the standard :py:func:`PIL.Image.open()` function. To use - this plugin, you have to import the :py:mod:`PIL.GdImageFile` module and - use the :py:func:`PIL.GdImageFile.open()` function. 
- """ - - format = "GD" - format_description = "GD uncompressed images" - - def _open(self) -> None: - # Header - assert self.fp is not None - - s = self.fp.read(1037) - - if i16(s) not in [65534, 65535]: - msg = "Not a valid GD 2.x .gd file" - raise SyntaxError(msg) - - self._mode = "P" - self._size = i16(s, 2), i16(s, 4) - - true_color = s[6] - true_color_offset = 2 if true_color else 0 - - # transparency index - tindex = i32(s, 7 + true_color_offset) - if tindex < 256: - self.info["transparency"] = tindex - - self.palette = ImagePalette.raw( - "RGBX", s[7 + true_color_offset + 6 : 7 + true_color_offset + 6 + 256 * 4] - ) - - self.tile = [ - ImageFile._Tile( - "raw", - (0, 0) + self.size, - 7 + true_color_offset + 6 + 256 * 4, - "L", - ) - ] - - -def open(fp: StrOrBytesPath | IO[bytes], mode: str = "r") -> GdImageFile: - """ - Load texture from a GD image file. - - :param fp: GD file name, or an opened file handle. - :param mode: Optional mode. In this version, if the mode argument - is given, it must be "r". - :returns: An image instance. - :raises OSError: If the image could not be read. - """ - if mode != "r": - msg = "bad mode" - raise ValueError(msg) - - try: - return GdImageFile(fp) - except SyntaxError as e: - msg = "cannot identify this image file" - raise UnidentifiedImageError(msg) from e diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GifImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/GifImagePlugin.py deleted file mode 100644 index 58c460ef..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GifImagePlugin.py +++ /dev/null @@ -1,1215 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# GIF file handling -# -# History: -# 1995-09-01 fl Created -# 1996-12-14 fl Added interlace support -# 1996-12-30 fl Added animation support -# 1997-01-05 fl Added write support, fixed local colour map bug -# 1997-02-23 fl Make sure to load raster data in getdata() -# 1997-07-05 fl Support external decoder (0.4) -# 1998-07-09 fl Handle all modes when saving (0.5) -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 2001-04-16 fl Added rewind support (seek to frame 0) (0.6) -# 2001-04-17 fl Added palette optimization (0.7) -# 2002-06-06 fl Added transparency support for save (0.8) -# 2004-02-24 fl Disable interlacing for small images -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import itertools -import math -import os -import subprocess -from enum import IntEnum -from functools import cached_property -from typing import Any, NamedTuple, cast - -from . import ( - Image, - ImageChops, - ImageFile, - ImageMath, - ImageOps, - ImagePalette, - ImageSequence, -) -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 -from ._util import DeferredError - -TYPE_CHECKING = False -if TYPE_CHECKING: - from typing import IO, Literal - - from . import _imaging - from ._typing import Buffer - - -class LoadingStrategy(IntEnum): - """.. versionadded:: 9.1.0""" - - RGB_AFTER_FIRST = 0 - RGB_AFTER_DIFFERENT_PALETTE_ONLY = 1 - RGB_ALWAYS = 2 - - -#: .. versionadded:: 9.1.0 -LOADING_STRATEGY = LoadingStrategy.RGB_AFTER_FIRST - -# -------------------------------------------------------------------- -# Identify/read GIF files - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith((b"GIF87a", b"GIF89a")) - - -## -# Image plugin for GIF images. This plugin supports both GIF87 and -# GIF89 images. 
- - -class GifImageFile(ImageFile.ImageFile): - format = "GIF" - format_description = "Compuserve GIF" - _close_exclusive_fp_after_loading = False - - global_palette = None - - def data(self) -> bytes | None: - s = self.fp.read(1) - if s and s[0]: - return self.fp.read(s[0]) - return None - - def _is_palette_needed(self, p: bytes) -> bool: - for i in range(0, len(p), 3): - if not (i // 3 == p[i] == p[i + 1] == p[i + 2]): - return True - return False - - def _open(self) -> None: - # Screen - s = self.fp.read(13) - if not _accept(s): - msg = "not a GIF file" - raise SyntaxError(msg) - - self.info["version"] = s[:6] - self._size = i16(s, 6), i16(s, 8) - flags = s[10] - bits = (flags & 7) + 1 - - if flags & 128: - # get global palette - self.info["background"] = s[11] - # check if palette contains colour indices - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - p = ImagePalette.raw("RGB", p) - self.global_palette = self.palette = p - - self._fp = self.fp # FIXME: hack - self.__rewind = self.fp.tell() - self._n_frames: int | None = None - self._seek(0) # get ready to read first frame - - @property - def n_frames(self) -> int: - if self._n_frames is None: - current = self.tell() - try: - while True: - self._seek(self.tell() + 1, False) - except EOFError: - self._n_frames = self.tell() + 1 - self.seek(current) - return self._n_frames - - @cached_property - def is_animated(self) -> bool: - if self._n_frames is not None: - return self._n_frames != 1 - - current = self.tell() - if current: - return True - - try: - self._seek(1, False) - is_animated = True - except EOFError: - is_animated = False - - self.seek(current) - return is_animated - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - if frame < self.__frame: - self._im = None - self._seek(0) - - last_frame = self.__frame - for f in range(self.__frame + 1, frame + 1): - try: - self._seek(f) - except EOFError as e: - self.seek(last_frame) - msg = "no more images in GIF 
file" - raise EOFError(msg) from e - - def _seek(self, frame: int, update_image: bool = True) -> None: - if isinstance(self._fp, DeferredError): - raise self._fp.ex - if frame == 0: - # rewind - self.__offset = 0 - self.dispose: _imaging.ImagingCore | None = None - self.__frame = -1 - self._fp.seek(self.__rewind) - self.disposal_method = 0 - if "comment" in self.info: - del self.info["comment"] - else: - # ensure that the previous frame was loaded - if self.tile and update_image: - self.load() - - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - - self.fp = self._fp - if self.__offset: - # backup to last frame - self.fp.seek(self.__offset) - while self.data(): - pass - self.__offset = 0 - - s = self.fp.read(1) - if not s or s == b";": - msg = "no more images in GIF file" - raise EOFError(msg) - - palette: ImagePalette.ImagePalette | Literal[False] | None = None - - info: dict[str, Any] = {} - frame_transparency = None - interlace = None - frame_dispose_extent = None - while True: - if not s: - s = self.fp.read(1) - if not s or s == b";": - break - - elif s == b"!": - # - # extensions - # - s = self.fp.read(1) - block = self.data() - if s[0] == 249 and block is not None: - # - # graphic control extension - # - flags = block[0] - if flags & 1: - frame_transparency = block[3] - info["duration"] = i16(block, 1) * 10 - - # disposal method - find the value of bits 4 - 6 - dispose_bits = 0b00011100 & flags - dispose_bits = dispose_bits >> 2 - if dispose_bits: - # only set the dispose if it is not - # unspecified. 
I'm not sure if this is - # correct, but it seems to prevent the last - # frame from looking odd for some animations - self.disposal_method = dispose_bits - elif s[0] == 254: - # - # comment extension - # - comment = b"" - - # Read this comment block - while block: - comment += block - block = self.data() - - if "comment" in info: - # If multiple comment blocks in frame, separate with \n - info["comment"] += b"\n" + comment - else: - info["comment"] = comment - s = None - continue - elif s[0] == 255 and frame == 0 and block is not None: - # - # application extension - # - info["extension"] = block, self.fp.tell() - if block.startswith(b"NETSCAPE2.0"): - block = self.data() - if block and len(block) >= 3 and block[0] == 1: - self.info["loop"] = i16(block, 1) - while self.data(): - pass - - elif s == b",": - # - # local image - # - s = self.fp.read(9) - - # extent - x0, y0 = i16(s, 0), i16(s, 2) - x1, y1 = x0 + i16(s, 4), y0 + i16(s, 6) - if (x1 > self.size[0] or y1 > self.size[1]) and update_image: - self._size = max(x1, self.size[0]), max(y1, self.size[1]) - Image._decompression_bomb_check(self._size) - frame_dispose_extent = x0, y0, x1, y1 - flags = s[8] - - interlace = (flags & 64) != 0 - - if flags & 128: - bits = (flags & 7) + 1 - p = self.fp.read(3 << bits) - if self._is_palette_needed(p): - palette = ImagePalette.raw("RGB", p) - else: - palette = False - - # image data - bits = self.fp.read(1)[0] - self.__offset = self.fp.tell() - break - s = None - - if interlace is None: - msg = "image not found in GIF frame" - raise EOFError(msg) - - self.__frame = frame - if not update_image: - return - - self.tile = [] - - if self.dispose: - self.im.paste(self.dispose, self.dispose_extent) - - self._frame_palette = palette if palette is not None else self.global_palette - self._frame_transparency = frame_transparency - if frame == 0: - if self._frame_palette: - if LOADING_STRATEGY == LoadingStrategy.RGB_ALWAYS: - self._mode = "RGBA" if frame_transparency is not None else 
"RGB" - else: - self._mode = "P" - else: - self._mode = "L" - - if palette: - self.palette = palette - elif self.global_palette: - from copy import copy - - self.palette = copy(self.global_palette) - else: - self.palette = None - else: - if self.mode == "P": - if ( - LOADING_STRATEGY != LoadingStrategy.RGB_AFTER_DIFFERENT_PALETTE_ONLY - or palette - ): - if "transparency" in self.info: - self.im.putpalettealpha(self.info["transparency"], 0) - self.im = self.im.convert("RGBA", Image.Dither.FLOYDSTEINBERG) - self._mode = "RGBA" - del self.info["transparency"] - else: - self._mode = "RGB" - self.im = self.im.convert("RGB", Image.Dither.FLOYDSTEINBERG) - - def _rgb(color: int) -> tuple[int, int, int]: - if self._frame_palette: - if color * 3 + 3 > len(self._frame_palette.palette): - color = 0 - return cast( - tuple[int, int, int], - tuple(self._frame_palette.palette[color * 3 : color * 3 + 3]), - ) - else: - return (color, color, color) - - self.dispose = None - self.dispose_extent: tuple[int, int, int, int] | None = frame_dispose_extent - if self.dispose_extent and self.disposal_method >= 2: - try: - if self.disposal_method == 2: - # replace with background colour - - # only dispose the extent in this frame - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - - # by convention, attempt to use transparency first - dispose_mode = "P" - color = self.info.get("transparency", frame_transparency) - if color is not None: - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(color) + (0,) - else: - color = self.info.get("background", 0) - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGB" - color = _rgb(color) - self.dispose = Image.core.fill(dispose_mode, dispose_size, color) - else: - # replace with previous contents - if self._im is not None: - # only dispose the extent in this frame - self.dispose = self._crop(self.im, self.dispose_extent) - elif frame_transparency is not 
None: - x0, y0, x1, y1 = self.dispose_extent - dispose_size = (x1 - x0, y1 - y0) - - Image._decompression_bomb_check(dispose_size) - dispose_mode = "P" - color = frame_transparency - if self.mode in ("RGB", "RGBA"): - dispose_mode = "RGBA" - color = _rgb(frame_transparency) + (0,) - self.dispose = Image.core.fill( - dispose_mode, dispose_size, color - ) - except AttributeError: - pass - - if interlace is not None: - transparency = -1 - if frame_transparency is not None: - if frame == 0: - if LOADING_STRATEGY != LoadingStrategy.RGB_ALWAYS: - self.info["transparency"] = frame_transparency - elif self.mode not in ("RGB", "RGBA"): - transparency = frame_transparency - self.tile = [ - ImageFile._Tile( - "gif", - (x0, y0, x1, y1), - self.__offset, - (bits, interlace, transparency), - ) - ] - - if info.get("comment"): - self.info["comment"] = info["comment"] - for k in ["duration", "extension"]: - if k in info: - self.info[k] = info[k] - elif k in self.info: - del self.info[k] - - def load_prepare(self) -> None: - temp_mode = "P" if self._frame_palette else "L" - self._prev_im = None - if self.__frame == 0: - if self._frame_transparency is not None: - self.im = Image.core.fill( - temp_mode, self.size, self._frame_transparency - ) - elif self.mode in ("RGB", "RGBA"): - self._prev_im = self.im - if self._frame_palette: - self.im = Image.core.fill("P", self.size, self._frame_transparency or 0) - self.im.putpalette("RGB", *self._frame_palette.getdata()) - else: - self._im = None - if not self._prev_im and self._im is not None and self.size != self.im.size: - expanded_im = Image.core.fill(self.im.mode, self.size) - if self._frame_palette: - expanded_im.putpalette("RGB", *self._frame_palette.getdata()) - expanded_im.paste(self.im, (0, 0) + self.im.size) - - self.im = expanded_im - self._mode = temp_mode - self._frame_palette = None - - super().load_prepare() - - def load_end(self) -> None: - if self.__frame == 0: - if self.mode == "P" and LOADING_STRATEGY == 
LoadingStrategy.RGB_ALWAYS: - if self._frame_transparency is not None: - self.im.putpalettealpha(self._frame_transparency, 0) - self._mode = "RGBA" - else: - self._mode = "RGB" - self.im = self.im.convert(self.mode, Image.Dither.FLOYDSTEINBERG) - return - if not self._prev_im: - return - if self.size != self._prev_im.size: - if self._frame_transparency is not None: - expanded_im = Image.core.fill("RGBA", self.size) - else: - expanded_im = Image.core.fill("P", self.size) - expanded_im.putpalette("RGB", "RGB", self.im.getpalette()) - expanded_im = expanded_im.convert("RGB") - expanded_im.paste(self._prev_im, (0, 0) + self._prev_im.size) - - self._prev_im = expanded_im - assert self._prev_im is not None - if self._frame_transparency is not None: - if self.mode == "L": - frame_im = self.im.convert_transparent("LA", self._frame_transparency) - else: - self.im.putpalettealpha(self._frame_transparency, 0) - frame_im = self.im.convert("RGBA") - else: - frame_im = self.im.convert("RGB") - - assert self.dispose_extent is not None - frame_im = self._crop(frame_im, self.dispose_extent) - - self.im = self._prev_im - self._mode = self.im.mode - if frame_im.mode in ("LA", "RGBA"): - self.im.paste(frame_im, self.dispose_extent, frame_im) - else: - self.im.paste(frame_im, self.dispose_extent) - - def tell(self) -> int: - return self.__frame - - -# -------------------------------------------------------------------- -# Write GIF files - - -RAWMODE = {"1": "L", "L": "L", "P": "P"} - - -def _normalize_mode(im: Image.Image) -> Image.Image: - """ - Takes an image (or frame), returns an image in a mode that is appropriate - for saving in a Gif. - - It may return the original image, or it may return an image converted to - palette or 'L' mode. 
- - :param im: Image object - :returns: Image object - """ - if im.mode in RAWMODE: - im.load() - return im - if Image.getmodebase(im.mode) == "RGB": - im = im.convert("P", palette=Image.Palette.ADAPTIVE) - assert im.palette is not None - if im.palette.mode == "RGBA": - for rgba in im.palette.colors: - if rgba[3] == 0: - im.info["transparency"] = im.palette.colors[rgba] - break - return im - return im.convert("L") - - -_Palette = bytes | bytearray | list[int] | ImagePalette.ImagePalette - - -def _normalize_palette( - im: Image.Image, palette: _Palette | None, info: dict[str, Any] -) -> Image.Image: - """ - Normalizes the palette for image. - - Sets the palette to the incoming palette, if provided. - - Ensures that there's a palette for L mode images - - Optimizes the palette if necessary/desired. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... - :param info: encoderinfo - :returns: Image object - """ - source_palette = None - if palette: - # a bytes palette - if isinstance(palette, (bytes, bytearray, list)): - source_palette = bytearray(palette[:768]) - if isinstance(palette, ImagePalette.ImagePalette): - source_palette = bytearray(palette.palette) - - if im.mode == "P": - if not source_palette: - im_palette = im.getpalette(None) - assert im_palette is not None - source_palette = bytearray(im_palette) - else: # L-mode - if not source_palette: - source_palette = bytearray(i // 3 for i in range(768)) - im.palette = ImagePalette.ImagePalette("RGB", palette=source_palette) - assert source_palette is not None - - if palette: - used_palette_colors: list[int | None] = [] - assert im.palette is not None - for i in range(0, len(source_palette), 3): - source_color = tuple(source_palette[i : i + 3]) - index = im.palette.colors.get(source_color) - if index in used_palette_colors: - index = None - used_palette_colors.append(index) - for i, index in enumerate(used_palette_colors): - if index is None: - for j in 
range(len(used_palette_colors)): - if j not in used_palette_colors: - used_palette_colors[i] = j - break - dest_map: list[int] = [] - for index in used_palette_colors: - assert index is not None - dest_map.append(index) - im = im.remap_palette(dest_map) - else: - optimized_palette_colors = _get_optimize(im, info) - if optimized_palette_colors is not None: - im = im.remap_palette(optimized_palette_colors, source_palette) - if "transparency" in info: - try: - info["transparency"] = optimized_palette_colors.index( - info["transparency"] - ) - except ValueError: - del info["transparency"] - return im - - assert im.palette is not None - im.palette.palette = source_palette - return im - - -def _write_single_frame( - im: Image.Image, - fp: IO[bytes], - palette: _Palette | None, -) -> None: - im_out = _normalize_mode(im) - for k, v in im_out.info.items(): - if isinstance(k, str): - im.encoderinfo.setdefault(k, v) - im_out = _normalize_palette(im_out, palette, im.encoderinfo) - - for s in _get_global_header(im_out, im.encoderinfo): - fp.write(s) - - # local image header - flags = 0 - if get_interlace(im): - flags = flags | 64 - _write_local_header(fp, im, (0, 0), flags) - - im_out.encoderconfig = (8, get_interlace(im)) - ImageFile._save( - im_out, fp, [ImageFile._Tile("gif", (0, 0) + im.size, 0, RAWMODE[im_out.mode])] - ) - - fp.write(b"\0") # end of image data - - -def _getbbox( - base_im: Image.Image, im_frame: Image.Image -) -> tuple[Image.Image, tuple[int, int, int, int] | None]: - palette_bytes = [ - bytes(im.palette.palette) if im.palette else b"" for im in (base_im, im_frame) - ] - if palette_bytes[0] != palette_bytes[1]: - im_frame = im_frame.convert("RGBA") - base_im = base_im.convert("RGBA") - delta = ImageChops.subtract_modulo(im_frame, base_im) - return delta, delta.getbbox(alpha_only=False) - - -class _Frame(NamedTuple): - im: Image.Image - bbox: tuple[int, int, int, int] | None - encoderinfo: dict[str, Any] - - -def _write_multiple_frames( - im: Image.Image, 
fp: IO[bytes], palette: _Palette | None -) -> bool: - duration = im.encoderinfo.get("duration") - disposal = im.encoderinfo.get("disposal", im.info.get("disposal")) - - im_frames: list[_Frame] = [] - previous_im: Image.Image | None = None - frame_count = 0 - background_im = None - for imSequence in itertools.chain([im], im.encoderinfo.get("append_images", [])): - for im_frame in ImageSequence.Iterator(imSequence): - # a copy is required here since seek can still mutate the image - im_frame = _normalize_mode(im_frame.copy()) - if frame_count == 0: - for k, v in im_frame.info.items(): - if k == "transparency": - continue - if isinstance(k, str): - im.encoderinfo.setdefault(k, v) - - encoderinfo = im.encoderinfo.copy() - if "transparency" in im_frame.info: - encoderinfo.setdefault("transparency", im_frame.info["transparency"]) - im_frame = _normalize_palette(im_frame, palette, encoderinfo) - if isinstance(duration, (list, tuple)): - encoderinfo["duration"] = duration[frame_count] - elif duration is None and "duration" in im_frame.info: - encoderinfo["duration"] = im_frame.info["duration"] - if isinstance(disposal, (list, tuple)): - encoderinfo["disposal"] = disposal[frame_count] - frame_count += 1 - - diff_frame = None - if im_frames and previous_im: - # delta frame - delta, bbox = _getbbox(previous_im, im_frame) - if not bbox: - # This frame is identical to the previous frame - if encoderinfo.get("duration"): - im_frames[-1].encoderinfo["duration"] += encoderinfo["duration"] - continue - if im_frames[-1].encoderinfo.get("disposal") == 2: - # To appear correctly in viewers using a convention, - # only consider transparency, and not background color - color = im.encoderinfo.get( - "transparency", im.info.get("transparency") - ) - if color is not None: - if background_im is None: - background = _get_background(im_frame, color) - background_im = Image.new("P", im_frame.size, background) - first_palette = im_frames[0].im.palette - assert first_palette is not None - 
background_im.putpalette(first_palette, first_palette.mode) - bbox = _getbbox(background_im, im_frame)[1] - else: - bbox = (0, 0) + im_frame.size - elif encoderinfo.get("optimize") and im_frame.mode != "1": - if "transparency" not in encoderinfo: - assert im_frame.palette is not None - try: - encoderinfo["transparency"] = ( - im_frame.palette._new_color_index(im_frame) - ) - except ValueError: - pass - if "transparency" in encoderinfo: - # When the delta is zero, fill the image with transparency - diff_frame = im_frame.copy() - fill = Image.new("P", delta.size, encoderinfo["transparency"]) - if delta.mode == "RGBA": - r, g, b, a = delta.split() - mask = ImageMath.lambda_eval( - lambda args: args["convert"]( - args["max"]( - args["max"]( - args["max"](args["r"], args["g"]), args["b"] - ), - args["a"], - ) - * 255, - "1", - ), - r=r, - g=g, - b=b, - a=a, - ) - else: - if delta.mode == "P": - # Convert to L without considering palette - delta_l = Image.new("L", delta.size) - delta_l.putdata(delta.getdata()) - delta = delta_l - mask = ImageMath.lambda_eval( - lambda args: args["convert"](args["im"] * 255, "1"), - im=delta, - ) - diff_frame.paste(fill, mask=ImageOps.invert(mask)) - else: - bbox = None - previous_im = im_frame - im_frames.append(_Frame(diff_frame or im_frame, bbox, encoderinfo)) - - if len(im_frames) == 1: - if "duration" in im.encoderinfo: - # Since multiple frames will not be written, use the combined duration - im.encoderinfo["duration"] = im_frames[0].encoderinfo["duration"] - return False - - for frame_data in im_frames: - im_frame = frame_data.im - if not frame_data.bbox: - # global header - for s in _get_global_header(im_frame, frame_data.encoderinfo): - fp.write(s) - offset = (0, 0) - else: - # compress difference - if not palette: - frame_data.encoderinfo["include_color_table"] = True - - if frame_data.bbox != (0, 0) + im_frame.size: - im_frame = im_frame.crop(frame_data.bbox) - offset = frame_data.bbox[:2] - _write_frame_data(fp, im_frame, 
offset, frame_data.encoderinfo) - return True - - -def _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - _save(im, fp, filename, save_all=True) - - -def _save( - im: Image.Image, fp: IO[bytes], filename: str | bytes, save_all: bool = False -) -> None: - # header - if "palette" in im.encoderinfo or "palette" in im.info: - palette = im.encoderinfo.get("palette", im.info.get("palette")) - else: - palette = None - im.encoderinfo.setdefault("optimize", True) - - if not save_all or not _write_multiple_frames(im, fp, palette): - _write_single_frame(im, fp, palette) - - fp.write(b";") # end of file - - if hasattr(fp, "flush"): - fp.flush() - - -def get_interlace(im: Image.Image) -> int: - interlace = im.encoderinfo.get("interlace", 1) - - # workaround for @PIL153 - if min(im.size) < 16: - interlace = 0 - - return interlace - - -def _write_local_header( - fp: IO[bytes], im: Image.Image, offset: tuple[int, int], flags: int -) -> None: - try: - transparency = im.encoderinfo["transparency"] - except KeyError: - transparency = None - - if "duration" in im.encoderinfo: - duration = int(im.encoderinfo["duration"] / 10) - else: - duration = 0 - - disposal = int(im.encoderinfo.get("disposal", 0)) - - if transparency is not None or duration != 0 or disposal: - packed_flag = 1 if transparency is not None else 0 - packed_flag |= disposal << 2 - - fp.write( - b"!" 
- + o8(249) # extension intro - + o8(4) # length - + o8(packed_flag) # packed fields - + o16(duration) # duration - + o8(transparency or 0) # transparency index - + o8(0) - ) - - include_color_table = im.encoderinfo.get("include_color_table") - if include_color_table: - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - if color_table_size: - flags = flags | 128 # local color table flag - flags = flags | color_table_size - - fp.write( - b"," - + o16(offset[0]) # offset - + o16(offset[1]) - + o16(im.size[0]) # size - + o16(im.size[1]) - + o8(flags) # flags - ) - if include_color_table and color_table_size: - fp.write(_get_header_palette(palette_bytes)) - fp.write(o8(8)) # bits - - -def _save_netpbm(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - # Unused by default. - # To use, uncomment the register_save call at the end of the file. - # - # If you need real GIF compression and/or RGB quantization, you - # can use the external NETPBM/PBMPLUS utilities. See comments - # below for information on how to enable this. 
- tempfile = im._dump() - - try: - with open(filename, "wb") as f: - if im.mode != "RGB": - subprocess.check_call( - ["ppmtogif", tempfile], stdout=f, stderr=subprocess.DEVNULL - ) - else: - # Pipe ppmquant output into ppmtogif - # "ppmquant 256 %s | ppmtogif > %s" % (tempfile, filename) - quant_cmd = ["ppmquant", "256", tempfile] - togif_cmd = ["ppmtogif"] - quant_proc = subprocess.Popen( - quant_cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL - ) - togif_proc = subprocess.Popen( - togif_cmd, - stdin=quant_proc.stdout, - stdout=f, - stderr=subprocess.DEVNULL, - ) - - # Allow ppmquant to receive SIGPIPE if ppmtogif exits - assert quant_proc.stdout is not None - quant_proc.stdout.close() - - retcode = quant_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, quant_cmd) - - retcode = togif_proc.wait() - if retcode: - raise subprocess.CalledProcessError(retcode, togif_cmd) - finally: - try: - os.unlink(tempfile) - except OSError: - pass - - -# Force optimization so that we can test performance against -# cases where it took lots of memory and time previously. -_FORCE_OPTIMIZE = False - - -def _get_optimize(im: Image.Image, info: dict[str, Any]) -> list[int] | None: - """ - Palette optimization is a potentially expensive operation. - - This function determines if the palette should be optimized using - some heuristics, then returns the list of palette entries in use. - - :param im: Image object - :param info: encoderinfo - :returns: list of indexes of palette entries in use, or None - """ - if im.mode in ("P", "L") and info and info.get("optimize"): - # Potentially expensive operation. - - # The palette saves 3 bytes per color not used, but palette - # lengths are restricted to 3*(2**N) bytes. Max saving would - # be 768 -> 6 bytes if we went all the way down to 2 colors. - # * If we're over 128 colors, we can't save any space. - # * If there aren't any holes, it's not worth collapsing. 
- #   * If we have a 'large' image, the palette is in the noise. - -         # create the new palette if not every color is used -         optimise = _FORCE_OPTIMIZE or im.mode == "L" -         if optimise or im.width * im.height < 512 * 512: -             # check which colors are used -             used_palette_colors = [] -             for i, count in enumerate(im.histogram()): -                 if count: -                     used_palette_colors.append(i) - -             if optimise or max(used_palette_colors) >= len(used_palette_colors): -                 return used_palette_colors - -             assert im.palette is not None -             num_palette_colors = len(im.palette.palette) // Image.getmodebands( -                 im.palette.mode -             ) -             current_palette_size = 1 << (num_palette_colors - 1).bit_length() -             if ( -                 # check that the palette would become smaller when saved -                 len(used_palette_colors) <= current_palette_size // 2 -                 # check that the palette is not already the smallest possible size -                 and current_palette_size > 2 -             ): -                 return used_palette_colors -     return None - - -def _get_color_table_size(palette_bytes: bytes) -> int: -     # calculate the palette size for the header -     if not palette_bytes: -         return 0 -     elif len(palette_bytes) < 9: -         return 1 -     else: -         return math.ceil(math.log(len(palette_bytes) // 3, 2)) - 1 - - -def _get_header_palette(palette_bytes: bytes) -> bytes: -     """ -     Returns the palette, null padded to the next power of 2 (*3) bytes -     suitable for direct inclusion in the GIF header - -     :param palette_bytes: Unpadded palette bytes, in RGBRGB form -     :returns: Null padded palette -     """ -     color_table_size = _get_color_table_size(palette_bytes) - -     # add the missing amount of bytes -     # the palette has to be 2<<n bytes long, where n is the number of colors -     actual_target_size_diff = (2 << color_table_size) - len(palette_bytes) // 3 -     if actual_target_size_diff > 0: -         palette_bytes += o8(0) * 3 * actual_target_size_diff -     return palette_bytes - - -def _get_palette_bytes(im: Image.Image) -> bytes: -     """ -     Gets the palette for inclusion in the gif header - -     :param im: Image object -     :returns: Bytes, len<=768 suitable for inclusion in gif header -     """ -     if not im.palette: -         return b"" - -     palette = bytes(im.palette.palette) -     if im.palette.mode == "RGBA": - 
palette = b"".join(palette[i * 4 : i * 4 + 3] for i in range(len(palette) // 3)) - return palette - - -def _get_background( - im: Image.Image, - info_background: int | tuple[int, int, int] | tuple[int, int, int, int] | None, -) -> int: - background = 0 - if info_background: - if isinstance(info_background, tuple): - # WebPImagePlugin stores an RGBA value in info["background"] - # So it must be converted to the same format as GifImagePlugin's - # info["background"] - a global color table index - assert im.palette is not None - try: - background = im.palette.getcolor(info_background, im) - except ValueError as e: - if str(e) not in ( - # If all 256 colors are in use, - # then there is no need for the background color - "cannot allocate more than 256 colors", - # Ignore non-opaque WebP background - "cannot add non-opaque RGBA color to RGB palette", - ): - raise - else: - background = info_background - return background - - -def _get_global_header(im: Image.Image, info: dict[str, Any]) -> list[bytes]: - """Return a list of strings representing a GIF header""" - - # Header Block - # https://www.matthewflickinger.com/lab/whatsinagif/bits_and_bytes.asp - - version = b"87a" - if im.info.get("version") == b"89a" or ( - info - and ( - "transparency" in info - or info.get("loop") is not None - or info.get("duration") - or info.get("comment") - ) - ): - version = b"89a" - - background = _get_background(im, info.get("background")) - - palette_bytes = _get_palette_bytes(im) - color_table_size = _get_color_table_size(palette_bytes) - - header = [ - b"GIF" # signature - + version # version - + o16(im.size[0]) # canvas width - + o16(im.size[1]), # canvas height - # Logical Screen Descriptor - # size of global color table + global color table flag - o8(color_table_size + 128), # packed fields - # background + reserved/aspect - o8(background) + o8(0), - # Global Color Table - _get_header_palette(palette_bytes), - ] - if info.get("loop") is not None: - header.append( - b"!" 
- + o8(255) # extension intro - + o8(11) - + b"NETSCAPE2.0" - + o8(3) - + o8(1) - + o16(info["loop"]) # number of loops - + o8(0) - ) - if info.get("comment"): - comment_block = b"!" + o8(254) # extension intro - - comment = info["comment"] - if isinstance(comment, str): - comment = comment.encode() - for i in range(0, len(comment), 255): - subblock = comment[i : i + 255] - comment_block += o8(len(subblock)) + subblock - - comment_block += o8(0) - header.append(comment_block) - return header - - -def _write_frame_data( - fp: IO[bytes], - im_frame: Image.Image, - offset: tuple[int, int], - params: dict[str, Any], -) -> None: - try: - im_frame.encoderinfo = params - - # local image header - _write_local_header(fp, im_frame, offset, 0) - - ImageFile._save( - im_frame, - fp, - [ImageFile._Tile("gif", (0, 0) + im_frame.size, 0, RAWMODE[im_frame.mode])], - ) - - fp.write(b"\0") # end of image data - finally: - del im_frame.encoderinfo - - -# -------------------------------------------------------------------- -# Legacy GIF utilities - - -def getheader( - im: Image.Image, palette: _Palette | None = None, info: dict[str, Any] | None = None -) -> tuple[list[bytes], list[int] | None]: - """ - Legacy Method to get Gif data from image. - - Warning:: May modify image data. - - :param im: Image object - :param palette: bytes object containing the source palette, or .... 
- :param info: encoderinfo - :returns: tuple of(list of header items, optimized palette) - - """ - if info is None: - info = {} - - used_palette_colors = _get_optimize(im, info) - - if "background" not in info and "background" in im.info: - info["background"] = im.info["background"] - - im_mod = _normalize_palette(im, palette, info) - im.palette = im_mod.palette - im.im = im_mod.im - header = _get_global_header(im, info) - - return header, used_palette_colors - - -def getdata( - im: Image.Image, offset: tuple[int, int] = (0, 0), **params: Any -) -> list[bytes]: - """ - Legacy Method - - Return a list of strings representing this image. - The first string is a local image header, the rest contains - encoded image data. - - To specify duration, add the time in milliseconds, - e.g. ``getdata(im_frame, duration=1000)`` - - :param im: Image object - :param offset: Tuple of (x, y) pixels. Defaults to (0, 0) - :param \\**params: e.g. duration or other encoder info parameters - :returns: List of bytes containing GIF encoded frame data - - """ - from io import BytesIO - - class Collector(BytesIO): - data = [] - - def write(self, data: Buffer) -> int: - self.data.append(data) - return len(data) - - im.load() # make sure raster data is available - - fp = Collector() - - _write_frame_data(fp, im, offset, params) - - return fp.data - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GifImageFile.format, GifImageFile, _accept) -Image.register_save(GifImageFile.format, _save) -Image.register_save_all(GifImageFile.format, _save_all) -Image.register_extension(GifImageFile.format, ".gif") -Image.register_mime(GifImageFile.format, "image/gif") - -# -# Uncomment the following line if you wish to use NETPBM/PBMPLUS -# instead of the built-in "uncompressed" GIF encoder - -# Image.register_save(GifImageFile.format, _save_netpbm) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GimpGradientFile.py 
b/pptx-env/lib/python3.12/site-packages/PIL/GimpGradientFile.py deleted file mode 100644 index 5f269188..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GimpGradientFile.py +++ /dev/null @@ -1,153 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read (and render) GIMP gradient files -# -# History: -# 97-08-23 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# - -""" -Stuff to translate curve segments to palette values (derived from -the corresponding code in GIMP, written by Federico Mena Quintero. -See the GIMP distribution for more information.) -""" -from __future__ import annotations - -from math import log, pi, sin, sqrt - -from ._binary import o8 - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from typing import IO - -EPSILON = 1e-10 -"""""" # Enable auto-doc for data member - - -def linear(middle: float, pos: float) -> float: - if pos <= middle: - if middle < EPSILON: - return 0.0 - else: - return 0.5 * pos / middle - else: - pos = pos - middle - middle = 1.0 - middle - if middle < EPSILON: - return 1.0 - else: - return 0.5 + 0.5 * pos / middle - - -def curved(middle: float, pos: float) -> float: - return pos ** (log(0.5) / log(max(middle, EPSILON))) - - -def sine(middle: float, pos: float) -> float: - return (sin((-pi / 2.0) + pi * linear(middle, pos)) + 1.0) / 2.0 - - -def sphere_increasing(middle: float, pos: float) -> float: - return sqrt(1.0 - (linear(middle, pos) - 1.0) ** 2) - - -def sphere_decreasing(middle: float, pos: float) -> float: - return 1.0 - sqrt(1.0 - linear(middle, pos) ** 2) - - -SEGMENTS = [linear, curved, sine, sphere_increasing, sphere_decreasing] -"""""" # Enable auto-doc for data member - - -class GradientFile: - gradient: ( - list[ - tuple[ - float, - float, - float, - list[float], - list[float], - Callable[[float, float], float], - ] - ] - | None - ) = None - - def 
getpalette(self, entries: int = 256) -> tuple[bytes, str]: - assert self.gradient is not None - palette = [] - - ix = 0 - x0, x1, xm, rgb0, rgb1, segment = self.gradient[ix] - - for i in range(entries): - x = i / (entries - 1) - - while x1 < x: - ix += 1 - x0, x1, xm, rgb0, rgb1, segment = self.gradient[ix] - - w = x1 - x0 - - if w < EPSILON: - scale = segment(0.5, 0.5) - else: - scale = segment((xm - x0) / w, (x - x0) / w) - - # expand to RGBA - r = o8(int(255 * ((rgb1[0] - rgb0[0]) * scale + rgb0[0]) + 0.5)) - g = o8(int(255 * ((rgb1[1] - rgb0[1]) * scale + rgb0[1]) + 0.5)) - b = o8(int(255 * ((rgb1[2] - rgb0[2]) * scale + rgb0[2]) + 0.5)) - a = o8(int(255 * ((rgb1[3] - rgb0[3]) * scale + rgb0[3]) + 0.5)) - - # add to palette - palette.append(r + g + b + a) - - return b"".join(palette), "RGBA" - - -class GimpGradientFile(GradientFile): - """File handler for GIMP's gradient format.""" - - def __init__(self, fp: IO[bytes]) -> None: - if not fp.readline().startswith(b"GIMP Gradient"): - msg = "not a GIMP gradient file" - raise SyntaxError(msg) - - line = fp.readline() - - # GIMP 1.2 gradient files don't contain a name, but GIMP 1.3 files do - if line.startswith(b"Name: "): - line = fp.readline().strip() - - count = int(line) - - self.gradient = [] - - for i in range(count): - s = fp.readline().split() - w = [float(x) for x in s[:11]] - - x0, x1 = w[0], w[2] - xm = w[1] - rgb0 = w[3:7] - rgb1 = w[7:11] - - segment = SEGMENTS[int(s[11])] - cspace = int(s[12]) - - if cspace != 0: - msg = "cannot handle HSV colour space" - raise OSError(msg) - - self.gradient.append((x0, x1, xm, rgb0, rgb1, segment)) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GimpPaletteFile.py b/pptx-env/lib/python3.12/site-packages/PIL/GimpPaletteFile.py deleted file mode 100644 index 016257d3..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GimpPaletteFile.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read GIMP palette files -# -# History: 
-# 1997-08-23 fl Created -# 2004-09-07 fl Support GIMP 2.0 palette files. -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1997-2004. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import re -from io import BytesIO - -TYPE_CHECKING = False -if TYPE_CHECKING: - from typing import IO - - -class GimpPaletteFile: - """File handler for GIMP's palette format.""" - - rawmode = "RGB" - - def _read(self, fp: IO[bytes], limit: bool = True) -> None: - if not fp.readline().startswith(b"GIMP Palette"): - msg = "not a GIMP palette file" - raise SyntaxError(msg) - - palette: list[int] = [] - i = 0 - while True: - if limit and i == 256 + 3: - break - - i += 1 - s = fp.readline() - if not s: - break - - # skip fields and comment lines - if re.match(rb"\w+:|#", s): - continue - if limit and len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = s.split(maxsplit=3) - if len(v) < 3: - msg = "bad palette entry" - raise ValueError(msg) - - palette += (int(v[i]) for i in range(3)) - if limit and len(palette) == 768: - break - - self.palette = bytes(palette) - - def __init__(self, fp: IO[bytes]) -> None: - self._read(fp) - - @classmethod - def frombytes(cls, data: bytes) -> GimpPaletteFile: - self = cls.__new__(cls) - self._read(BytesIO(data), False) - return self - - def getpalette(self) -> tuple[bytes, str]: - return self.palette, self.rawmode diff --git a/pptx-env/lib/python3.12/site-packages/PIL/GribStubImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/GribStubImagePlugin.py deleted file mode 100644 index dfa79889..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/GribStubImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# GRIB stub adapter -# -# Copyright (c) 1996-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import os -from typing import IO - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler: ImageFile.StubHandler | None) -> None: - """ - Install application-specific GRIB image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 8 and prefix.startswith(b"GRIB") and prefix[7] == 1 - - -class GribStubImageFile(ImageFile.StubImageFile): - format = "GRIB" - format_description = "GRIB" - - def _open(self) -> None: - if not _accept(self.fp.read(8)): - msg = "Not a GRIB file" - raise SyntaxError(msg) - - self.fp.seek(-8, os.SEEK_CUR) - - # make something up - self._mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self) -> ImageFile.StubHandler | None: - return _handler - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if _handler is None or not hasattr(_handler, "save"): - msg = "GRIB save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(GribStubImageFile.format, GribStubImageFile, _accept) -Image.register_save(GribStubImageFile.format, _save) - -Image.register_extension(GribStubImageFile.format, ".grib") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/Hdf5StubImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/Hdf5StubImagePlugin.py deleted file mode 100644 index 76e640f1..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/Hdf5StubImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# HDF5 stub adapter -# -# Copyright (c) 2000-2003 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import os -from typing import IO - -from . import Image, ImageFile - -_handler = None - - -def register_handler(handler: ImageFile.StubHandler | None) -> None: - """ - Install application-specific HDF5 image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -# -------------------------------------------------------------------- -# Image adapter - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"\x89HDF\r\n\x1a\n") - - -class HDF5StubImageFile(ImageFile.StubImageFile): - format = "HDF5" - format_description = "HDF5" - - def _open(self) -> None: - if not _accept(self.fp.read(8)): - msg = "Not an HDF file" - raise SyntaxError(msg) - - self.fp.seek(-8, os.SEEK_CUR) - - # make something up - self._mode = "F" - self._size = 1, 1 - - loader = self._load() - if loader: - loader.open(self) - - def _load(self) -> ImageFile.StubHandler | None: - return _handler - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if _handler is None or not hasattr(_handler, "save"): - msg = "HDF5 save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(HDF5StubImageFile.format, HDF5StubImageFile, _accept) -Image.register_save(HDF5StubImageFile.format, _save) - -Image.register_extensions(HDF5StubImageFile.format, [".h5", ".hdf"]) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/IcnsImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/IcnsImagePlugin.py deleted file mode 100644 index 197ea7a2..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/IcnsImagePlugin.py +++ /dev/null @@ -1,401 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# macOS icns file decoder, based on icns.py by Bob Ippolito. -# -# history: -# 2004-10-09 fl Turned into a PIL plugin; removed 2.3 dependencies. 
-# 2020-04-04 Allow saving on all operating systems. -# -# Copyright (c) 2004 by Bob Ippolito. -# Copyright (c) 2004 by Secret Labs. -# Copyright (c) 2004 by Fredrik Lundh. -# Copyright (c) 2014 by Alastair Houghton. -# Copyright (c) 2020 by Pan Jing. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io -import os -import struct -import sys -from typing import IO - -from . import Image, ImageFile, PngImagePlugin, features - -enable_jpeg2k = features.check_codec("jpg_2000") -if enable_jpeg2k: - from . import Jpeg2KImagePlugin - -MAGIC = b"icns" -HEADERSIZE = 8 - - -def nextheader(fobj: IO[bytes]) -> tuple[bytes, int]: - return struct.unpack(">4sI", fobj.read(HEADERSIZE)) - - -def read_32t( - fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int] -) -> dict[str, Image.Image]: - # The 128x128 icon seems to have an extra header for some reason. - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(4) - if sig != b"\x00\x00\x00\x00": - msg = "Unknown signature, expecting 0x00000000" - raise SyntaxError(msg) - return read_32(fobj, (start + 4, length - 4), size) - - -def read_32( - fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int] -) -> dict[str, Image.Image]: - """ - Read a 32bit RGB icon resource. Seems to be either uncompressed or - an RLE packbits-like scheme. 
- """ - (start, length) = start_length - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - if length == sizesq * 3: - # uncompressed ("RGBRGBGB") - indata = fobj.read(length) - im = Image.frombuffer("RGB", pixel_size, indata, "raw", "RGB", 0, 1) - else: - # decode image - im = Image.new("RGB", pixel_size, None) - for band_ix in range(3): - data = [] - bytesleft = sizesq - while bytesleft > 0: - byte = fobj.read(1) - if not byte: - break - byte_int = byte[0] - if byte_int & 0x80: - blocksize = byte_int - 125 - byte = fobj.read(1) - for i in range(blocksize): - data.append(byte) - else: - blocksize = byte_int + 1 - data.append(fobj.read(blocksize)) - bytesleft -= blocksize - if bytesleft <= 0: - break - if bytesleft != 0: - msg = f"Error reading channel [{repr(bytesleft)} left]" - raise SyntaxError(msg) - band = Image.frombuffer("L", pixel_size, b"".join(data), "raw", "L", 0, 1) - im.im.putband(band.im, band_ix) - return {"RGB": im} - - -def read_mk( - fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int] -) -> dict[str, Image.Image]: - # Alpha masks seem to be uncompressed - start = start_length[0] - fobj.seek(start) - pixel_size = (size[0] * size[2], size[1] * size[2]) - sizesq = pixel_size[0] * pixel_size[1] - band = Image.frombuffer("L", pixel_size, fobj.read(sizesq), "raw", "L", 0, 1) - return {"A": band} - - -def read_png_or_jpeg2000( - fobj: IO[bytes], start_length: tuple[int, int], size: tuple[int, int, int] -) -> dict[str, Image.Image]: - (start, length) = start_length - fobj.seek(start) - sig = fobj.read(12) - - im: Image.Image - if sig.startswith(b"\x89PNG\x0d\x0a\x1a\x0a"): - fobj.seek(start) - im = PngImagePlugin.PngImageFile(fobj) - Image._decompression_bomb_check(im.size) - return {"RGBA": im} - elif ( - sig.startswith((b"\xff\x4f\xff\x51", b"\x0d\x0a\x87\x0a")) - or sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a" - ): - if not enable_jpeg2k: - msg = ( - "Unsupported 
icon subimage format (rebuild PIL " - "with JPEG 2000 support to fix this)" - ) - raise ValueError(msg) - # j2k, jpc or j2c - fobj.seek(start) - jp2kstream = fobj.read(length) - f = io.BytesIO(jp2kstream) - im = Jpeg2KImagePlugin.Jpeg2KImageFile(f) - Image._decompression_bomb_check(im.size) - if im.mode != "RGBA": - im = im.convert("RGBA") - return {"RGBA": im} - else: - msg = "Unsupported icon subimage format" - raise ValueError(msg) - - -class IcnsFile: - SIZES = { - (512, 512, 2): [(b"ic10", read_png_or_jpeg2000)], - (512, 512, 1): [(b"ic09", read_png_or_jpeg2000)], - (256, 256, 2): [(b"ic14", read_png_or_jpeg2000)], - (256, 256, 1): [(b"ic08", read_png_or_jpeg2000)], - (128, 128, 2): [(b"ic13", read_png_or_jpeg2000)], - (128, 128, 1): [ - (b"ic07", read_png_or_jpeg2000), - (b"it32", read_32t), - (b"t8mk", read_mk), - ], - (64, 64, 1): [(b"icp6", read_png_or_jpeg2000)], - (32, 32, 2): [(b"ic12", read_png_or_jpeg2000)], - (48, 48, 1): [(b"ih32", read_32), (b"h8mk", read_mk)], - (32, 32, 1): [ - (b"icp5", read_png_or_jpeg2000), - (b"il32", read_32), - (b"l8mk", read_mk), - ], - (16, 16, 2): [(b"ic11", read_png_or_jpeg2000)], - (16, 16, 1): [ - (b"icp4", read_png_or_jpeg2000), - (b"is32", read_32), - (b"s8mk", read_mk), - ], - } - - def __init__(self, fobj: IO[bytes]) -> None: - """ - fobj is a file-like object as an icns resource - """ - # signature : (start, length) - self.dct = {} - self.fobj = fobj - sig, filesize = nextheader(fobj) - if not _accept(sig): - msg = "not an icns file" - raise SyntaxError(msg) - i = HEADERSIZE - while i < filesize: - sig, blocksize = nextheader(fobj) - if blocksize <= 0: - msg = "invalid block header" - raise SyntaxError(msg) - i += HEADERSIZE - blocksize -= HEADERSIZE - self.dct[sig] = (i, blocksize) - fobj.seek(blocksize, io.SEEK_CUR) - i += blocksize - - def itersizes(self) -> list[tuple[int, int, int]]: - sizes = [] - for size, fmts in self.SIZES.items(): - for fmt, reader in fmts: - if fmt in self.dct: - sizes.append(size) - 
break - return sizes - - def bestsize(self) -> tuple[int, int, int]: - sizes = self.itersizes() - if not sizes: - msg = "No 32bit icon resources found" - raise SyntaxError(msg) - return max(sizes) - - def dataforsize(self, size: tuple[int, int, int]) -> dict[str, Image.Image]: - """ - Get an icon resource as {channel: array}. Note that - the arrays are bottom-up like windows bitmaps and will likely - need to be flipped or transposed in some way. - """ - dct = {} - for code, reader in self.SIZES[size]: - desc = self.dct.get(code) - if desc is not None: - dct.update(reader(self.fobj, desc, size)) - return dct - - def getimage( - self, size: tuple[int, int] | tuple[int, int, int] | None = None - ) -> Image.Image: - if size is None: - size = self.bestsize() - elif len(size) == 2: - size = (size[0], size[1], 1) - channels = self.dataforsize(size) - - im = channels.get("RGBA") - if im: - return im - - im = channels["RGB"].copy() - try: - im.putalpha(channels["A"]) - except KeyError: - pass - return im - - -## -# Image plugin for Mac OS icons. - - -class IcnsImageFile(ImageFile.ImageFile): - """ - PIL image support for Mac OS .icns files. - Chooses the best resolution, but will possibly load - a different size image if you mutate the size attribute - before calling 'load'. - - The info dictionary has a key 'sizes' that is a list - of sizes that the icns file has. 
- """ - - format = "ICNS" - format_description = "Mac OS icns resource" - - def _open(self) -> None: - self.icns = IcnsFile(self.fp) - self._mode = "RGBA" - self.info["sizes"] = self.icns.itersizes() - self.best_size = self.icns.bestsize() - self.size = ( - self.best_size[0] * self.best_size[2], - self.best_size[1] * self.best_size[2], - ) - - @property - def size(self) -> tuple[int, int]: - return self._size - - @size.setter - def size(self, value: tuple[int, int]) -> None: - # Check that a matching size exists, - # or that there is a scale that would create a size that matches - for size in self.info["sizes"]: - simple_size = size[0] * size[2], size[1] * size[2] - scale = simple_size[0] // value[0] - if simple_size[1] / value[1] == scale: - self._size = value - return - msg = "This is not one of the allowed sizes of this image" - raise ValueError(msg) - - def load(self, scale: int | None = None) -> Image.core.PixelAccess | None: - if scale is not None: - width, height = self.size[:2] - self.size = width * scale, height * scale - self.best_size = width, height, scale - - px = Image.Image.load(self) - if self._im is not None and self.im.size == self.size: - # Already loaded - return px - self.load_prepare() - # This is likely NOT the best way to do it, but whatever. - im = self.icns.getimage(self.best_size) - - # If this is a PNG or JPEG 2000, it won't be loaded yet - px = im.load() - - self.im = im.im - self._mode = im.mode - self.size = im.size - - return px - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - """ - Saves the image as a series of PNG files, - that are then combined into a .icns file. 
- """ - if hasattr(fp, "flush"): - fp.flush() - - sizes = { - b"ic07": 128, - b"ic08": 256, - b"ic09": 512, - b"ic10": 1024, - b"ic11": 32, - b"ic12": 64, - b"ic13": 256, - b"ic14": 512, - } - provided_images = {im.width: im for im in im.encoderinfo.get("append_images", [])} - size_streams = {} - for size in set(sizes.values()): - image = ( - provided_images[size] - if size in provided_images - else im.resize((size, size)) - ) - - temp = io.BytesIO() - image.save(temp, "png") - size_streams[size] = temp.getvalue() - - entries = [] - for type, size in sizes.items(): - stream = size_streams[size] - entries.append((type, HEADERSIZE + len(stream), stream)) - - # Header - fp.write(MAGIC) - file_length = HEADERSIZE # Header - file_length += HEADERSIZE + 8 * len(entries) # TOC - file_length += sum(entry[1] for entry in entries) - fp.write(struct.pack(">i", file_length)) - - # TOC - fp.write(b"TOC ") - fp.write(struct.pack(">i", HEADERSIZE + len(entries) * HEADERSIZE)) - for entry in entries: - fp.write(entry[0]) - fp.write(struct.pack(">i", entry[1])) - - # Data - for entry in entries: - fp.write(entry[0]) - fp.write(struct.pack(">i", entry[1])) - fp.write(entry[2]) - - if hasattr(fp, "flush"): - fp.flush() - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(MAGIC) - - -Image.register_open(IcnsImageFile.format, IcnsImageFile, _accept) -Image.register_extension(IcnsImageFile.format, ".icns") - -Image.register_save(IcnsImageFile.format, _save) -Image.register_mime(IcnsImageFile.format, "image/icns") - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 IcnsImagePlugin.py [file]") - sys.exit() - - with open(sys.argv[1], "rb") as fp: - imf = IcnsImageFile(fp) - for size in imf.info["sizes"]: - width, height, scale = imf.size = size - imf.save(f"out-{width}-{height}-{scale}.png") - with Image.open(sys.argv[1]) as im: - im.save("out.png") - if sys.platform == "windows": - os.startfile("out.png") diff --git 
a/pptx-env/lib/python3.12/site-packages/PIL/IcoImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/IcoImagePlugin.py deleted file mode 100644 index bd35ac89..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/IcoImagePlugin.py +++ /dev/null @@ -1,381 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Windows Icon support for PIL -# -# History: -# 96-05-27 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# - -# This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis -# . -# https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki -# -# Icon format references: -# * https://en.wikipedia.org/wiki/ICO_(file_format) -# * https://msdn.microsoft.com/en-us/library/ms997538.aspx -from __future__ import annotations - -import warnings -from io import BytesIO -from math import ceil, log -from typing import IO, NamedTuple - -from . import BmpImagePlugin, Image, ImageFile, PngImagePlugin -from ._binary import i16le as i16 -from ._binary import i32le as i32 -from ._binary import o8 -from ._binary import o16le as o16 -from ._binary import o32le as o32 - -# -# -------------------------------------------------------------------- - -_MAGIC = b"\0\0\1\0" - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - fp.write(_MAGIC) # (2+2) - bmp = im.encoderinfo.get("bitmap_format") == "bmp" - sizes = im.encoderinfo.get( - "sizes", - [(16, 16), (24, 24), (32, 32), (48, 48), (64, 64), (128, 128), (256, 256)], - ) - frames = [] - provided_ims = [im] + im.encoderinfo.get("append_images", []) - width, height = im.size - for size in sorted(set(sizes)): - if size[0] > width or size[1] > height or size[0] > 256 or size[1] > 256: - continue - - for provided_im in provided_ims: - if provided_im.size != size: - continue - frames.append(provided_im) - if bmp: - bits = BmpImagePlugin.SAVE[provided_im.mode][1] - 
bits_used = [bits] - for other_im in provided_ims: - if other_im.size != size: - continue - bits = BmpImagePlugin.SAVE[other_im.mode][1] - if bits not in bits_used: - # Another image has been supplied for this size - # with a different bit depth - frames.append(other_im) - bits_used.append(bits) - break - else: - # TODO: invent a more convenient method for proportional scalings - frame = provided_im.copy() - frame.thumbnail(size, Image.Resampling.LANCZOS, reducing_gap=None) - frames.append(frame) - fp.write(o16(len(frames))) # idCount(2) - offset = fp.tell() + len(frames) * 16 - for frame in frames: - width, height = frame.size - # 0 means 256 - fp.write(o8(width if width < 256 else 0)) # bWidth(1) - fp.write(o8(height if height < 256 else 0)) # bHeight(1) - - bits, colors = BmpImagePlugin.SAVE[frame.mode][1:] if bmp else (32, 0) - fp.write(o8(colors)) # bColorCount(1) - fp.write(b"\0") # bReserved(1) - fp.write(b"\0\0") # wPlanes(2) - fp.write(o16(bits)) # wBitCount(2) - - image_io = BytesIO() - if bmp: - frame.save(image_io, "dib") - - if bits != 32: - and_mask = Image.new("1", size) - ImageFile._save( - and_mask, - image_io, - [ImageFile._Tile("raw", (0, 0) + size, 0, ("1", 0, -1))], - ) - else: - frame.save(image_io, "png") - image_io.seek(0) - image_bytes = image_io.read() - if bmp: - image_bytes = image_bytes[:8] + o32(height * 2) + image_bytes[12:] - bytes_len = len(image_bytes) - fp.write(o32(bytes_len)) # dwBytesInRes(4) - fp.write(o32(offset)) # dwImageOffset(4) - current = fp.tell() - fp.seek(offset) - fp.write(image_bytes) - offset = offset + bytes_len - fp.seek(current) - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(_MAGIC) - - -class IconHeader(NamedTuple): - width: int - height: int - nb_color: int - reserved: int - planes: int - bpp: int - size: int - offset: int - dim: tuple[int, int] - square: int - color_depth: int - - -class IcoFile: - def __init__(self, buf: IO[bytes]) -> None: - """ - Parse image from file-like object 
containing ico file data - """ - - # check magic - s = buf.read(6) - if not _accept(s): - msg = "not an ICO file" - raise SyntaxError(msg) - - self.buf = buf - self.entry = [] - - # Number of items in file - self.nb_items = i16(s, 4) - - # Get headers for each item - for i in range(self.nb_items): - s = buf.read(16) - - # See Wikipedia - width = s[0] or 256 - height = s[1] or 256 - - # No. of colors in image (0 if >=8bpp) - nb_color = s[2] - bpp = i16(s, 6) - icon_header = IconHeader( - width=width, - height=height, - nb_color=nb_color, - reserved=s[3], - planes=i16(s, 4), - bpp=i16(s, 6), - size=i32(s, 8), - offset=i32(s, 12), - dim=(width, height), - square=width * height, - # See Wikipedia notes about color depth. - # We need this just to differ images with equal sizes - color_depth=bpp or (nb_color != 0 and ceil(log(nb_color, 2))) or 256, - ) - - self.entry.append(icon_header) - - self.entry = sorted(self.entry, key=lambda x: x.color_depth) - # ICO images are usually squares - self.entry = sorted(self.entry, key=lambda x: x.square, reverse=True) - - def sizes(self) -> set[tuple[int, int]]: - """ - Get a set of all available icon sizes and color depths. 
- """ - return {(h.width, h.height) for h in self.entry} - - def getentryindex(self, size: tuple[int, int], bpp: int | bool = False) -> int: - for i, h in enumerate(self.entry): - if size == h.dim and (bpp is False or bpp == h.color_depth): - return i - return 0 - - def getimage(self, size: tuple[int, int], bpp: int | bool = False) -> Image.Image: - """ - Get an image from the icon - """ - return self.frame(self.getentryindex(size, bpp)) - - def frame(self, idx: int) -> Image.Image: - """ - Get an image from frame idx - """ - - header = self.entry[idx] - - self.buf.seek(header.offset) - data = self.buf.read(8) - self.buf.seek(header.offset) - - im: Image.Image - if data[:8] == PngImagePlugin._MAGIC: - # png frame - im = PngImagePlugin.PngImageFile(self.buf) - Image._decompression_bomb_check(im.size) - else: - # XOR + AND mask bmp frame - im = BmpImagePlugin.DibImageFile(self.buf) - Image._decompression_bomb_check(im.size) - - # change tile dimension to only encompass XOR image - im._size = (im.size[0], int(im.size[1] / 2)) - d, e, o, a = im.tile[0] - im.tile[0] = ImageFile._Tile(d, (0, 0) + im.size, o, a) - - # figure out where AND mask image starts - if header.bpp == 32: - # 32-bit color depth icon image allows semitransparent areas - # PIL's DIB format ignores transparency bits, recover them. - # The DIB is packed in BGRX byte order where X is the alpha - # channel. - - # Back up to start of bmp data - self.buf.seek(o) - # extract every 4th byte (eg. 3,7,11,15,...) 
- alpha_bytes = self.buf.read(im.size[0] * im.size[1] * 4)[3::4] - - # convert to an 8bpp grayscale image - try: - mask = Image.frombuffer( - "L", # 8bpp - im.size, # (w, h) - alpha_bytes, # source chars - "raw", # raw decoder - ("L", 0, -1), # 8bpp inverted, unpadded, reversed - ) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - mask = None - else: - raise - else: - # get AND image from end of bitmap - w = im.size[0] - if (w % 32) > 0: - # bitmap row data is aligned to word boundaries - w += 32 - (im.size[0] % 32) - - # the total mask data is - # padded row size * height / bits per char - - total_bytes = int((w * im.size[1]) / 8) - and_mask_offset = header.offset + header.size - total_bytes - - self.buf.seek(and_mask_offset) - mask_data = self.buf.read(total_bytes) - - # convert raw data to image - try: - mask = Image.frombuffer( - "1", # 1 bpp - im.size, # (w, h) - mask_data, # source chars - "raw", # raw decoder - ("1;I", int(w / 8), -1), # 1bpp inverted, padded, reversed - ) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - mask = None - else: - raise - - # now we have two images, im is XOR image and mask is AND image - - # apply mask image as alpha channel - if mask: - im = im.convert("RGBA") - im.putalpha(mask) - - return im - - -## -# Image plugin for Windows Icon files. - - -class IcoImageFile(ImageFile.ImageFile): - """ - PIL read-only image support for Microsoft Windows .ico files. - - By default the largest resolution image in the file will be loaded. This - can be changed by altering the 'size' attribute before calling 'load'. - - The info dictionary has a key 'sizes' that is a list of the sizes available - in the icon file. - - Handles classic, XP and Vista icon formats. - - When saving, PNG compression is used. Support for this was only added in - Windows Vista. If you are unable to view the icon in Windows, convert the - image to "RGBA" mode before saving. 
- - This plugin is a refactored version of Win32IconImagePlugin by Bryan Davis - . - https://code.google.com/archive/p/casadebender/wikis/Win32IconImagePlugin.wiki - """ - - format = "ICO" - format_description = "Windows Icon" - - def _open(self) -> None: - self.ico = IcoFile(self.fp) - self.info["sizes"] = self.ico.sizes() - self.size = self.ico.entry[0].dim - self.load() - - @property - def size(self) -> tuple[int, int]: - return self._size - - @size.setter - def size(self, value: tuple[int, int]) -> None: - if value not in self.info["sizes"]: - msg = "This is not one of the allowed sizes of this image" - raise ValueError(msg) - self._size = value - - def load(self) -> Image.core.PixelAccess | None: - if self._im is not None and self.im.size == self.size: - # Already loaded - return Image.Image.load(self) - im = self.ico.getimage(self.size) - # if tile is PNG, it won't really be loaded yet - im.load() - self.im = im.im - self._mode = im.mode - if im.palette: - self.palette = im.palette - if im.size != self.size: - warnings.warn("Image was not the expected size") - - index = self.ico.getentryindex(self.size) - sizes = list(self.info["sizes"]) - sizes[index] = im.size - self.info["sizes"] = set(sizes) - - self.size = im.size - return Image.Image.load(self) - - def load_seek(self, pos: int) -> None: - # Flag the ImageFile.Parser so that it - # just does all the decode at the end. 
- pass - - -# -# -------------------------------------------------------------------- - - -Image.register_open(IcoImageFile.format, IcoImageFile, _accept) -Image.register_save(IcoImageFile.format, _save) -Image.register_extension(IcoImageFile.format, ".ico") - -Image.register_mime(IcoImageFile.format, "image/x-icon") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/ImImagePlugin.py deleted file mode 100644 index 71b99967..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImImagePlugin.py +++ /dev/null @@ -1,389 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IFUNC IM file handling for PIL -# -# history: -# 1995-09-01 fl Created. -# 1997-01-03 fl Save palette images -# 1997-01-08 fl Added sequence support -# 1997-01-23 fl Added P and RGB save support -# 1997-05-31 fl Read floating point images -# 1997-06-22 fl Save floating point images -# 1997-08-27 fl Read and save 1-bit images -# 1998-06-25 fl Added support for RGB+LUT images -# 1998-07-02 fl Added support for YCC images -# 1998-07-15 fl Renamed offset attribute to avoid name clash -# 1998-12-29 fl Added I;16 support -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7) -# 2003-09-26 fl Added LA/PA support -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -import re -from typing import IO, Any - -from . 
import Image, ImageFile, ImagePalette -from ._util import DeferredError - -# -------------------------------------------------------------------- -# Standard tags - -COMMENT = "Comment" -DATE = "Date" -EQUIPMENT = "Digitalization equipment" -FRAMES = "File size (no of images)" -LUT = "Lut" -NAME = "Name" -SCALE = "Scale (x,y)" -SIZE = "Image size (x*y)" -MODE = "Image type" - -TAGS = { - COMMENT: 0, - DATE: 0, - EQUIPMENT: 0, - FRAMES: 0, - LUT: 0, - NAME: 0, - SCALE: 0, - SIZE: 0, - MODE: 0, -} - -OPEN = { - # ifunc93/p3cfunc formats - "0 1 image": ("1", "1"), - "L 1 image": ("1", "1"), - "Greyscale image": ("L", "L"), - "Grayscale image": ("L", "L"), - "RGB image": ("RGB", "RGB;L"), - "RLB image": ("RGB", "RLB"), - "RYB image": ("RGB", "RLB"), - "B1 image": ("1", "1"), - "B2 image": ("P", "P;2"), - "B4 image": ("P", "P;4"), - "X 24 image": ("RGB", "RGB"), - "L 32 S image": ("I", "I;32"), - "L 32 F image": ("F", "F;32"), - # old p3cfunc formats - "RGB3 image": ("RGB", "RGB;T"), - "RYB3 image": ("RGB", "RYB;T"), - # extensions - "LA image": ("LA", "LA;L"), - "PA image": ("LA", "PA;L"), - "RGBA image": ("RGBA", "RGBA;L"), - "RGBX image": ("RGB", "RGBX;L"), - "CMYK image": ("CMYK", "CMYK;L"), - "YCC image": ("YCbCr", "YCbCr;L"), -} - -# ifunc95 extensions -for i in ["8", "8S", "16", "16S", "32", "32F"]: - OPEN[f"L {i} image"] = ("F", f"F;{i}") - OPEN[f"L*{i} image"] = ("F", f"F;{i}") -for i in ["16", "16L", "16B"]: - OPEN[f"L {i} image"] = (f"I;{i}", f"I;{i}") - OPEN[f"L*{i} image"] = (f"I;{i}", f"I;{i}") -for i in ["32S"]: - OPEN[f"L {i} image"] = ("I", f"I;{i}") - OPEN[f"L*{i} image"] = ("I", f"I;{i}") -for j in range(2, 33): - OPEN[f"L*{j} image"] = ("F", f"F;{j}") - - -# -------------------------------------------------------------------- -# Read IM directory - -split = re.compile(rb"^([A-Za-z][^:]*):[ \t]*(.*)[ \t]*$") - - -def number(s: Any) -> float: - try: - return int(s) - except ValueError: - return float(s) - - -## -# Image plugin for the IFUNC IM file 
format. - - -class ImImageFile(ImageFile.ImageFile): - format = "IM" - format_description = "IFUNC Image Memory" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - # Quick rejection: if there's not an LF among the first - # 100 bytes, this is (probably) not a text header. - - if b"\n" not in self.fp.read(100): - msg = "not an IM file" - raise SyntaxError(msg) - self.fp.seek(0) - - n = 0 - - # Default values - self.info[MODE] = "L" - self.info[SIZE] = (512, 512) - self.info[FRAMES] = 1 - - self.rawmode = "L" - - while True: - s = self.fp.read(1) - - # Some versions of IFUNC uses \n\r instead of \r\n... - if s == b"\r": - continue - - if not s or s == b"\0" or s == b"\x1a": - break - - # FIXME: this may read whole file if not a text file - s = s + self.fp.readline() - - if len(s) > 100: - msg = "not an IM file" - raise SyntaxError(msg) - - if s.endswith(b"\r\n"): - s = s[:-2] - elif s.endswith(b"\n"): - s = s[:-1] - - try: - m = split.match(s) - except re.error as e: - msg = "not an IM file" - raise SyntaxError(msg) from e - - if m: - k, v = m.group(1, 2) - - # Don't know if this is the correct encoding, - # but a decent guess (I guess) - k = k.decode("latin-1", "replace") - v = v.decode("latin-1", "replace") - - # Convert value as appropriate - if k in [FRAMES, SCALE, SIZE]: - v = v.replace("*", ",") - v = tuple(map(number, v.split(","))) - if len(v) == 1: - v = v[0] - elif k == MODE and v in OPEN: - v, self.rawmode = OPEN[v] - - # Add to dictionary. Note that COMMENT tags are - # combined into a list of strings. 
- if k == COMMENT: - if k in self.info: - self.info[k].append(v) - else: - self.info[k] = [v] - else: - self.info[k] = v - - if k in TAGS: - n += 1 - - else: - msg = f"Syntax error in IM header: {s.decode('ascii', 'replace')}" - raise SyntaxError(msg) - - if not n: - msg = "Not an IM file" - raise SyntaxError(msg) - - # Basic attributes - self._size = self.info[SIZE] - self._mode = self.info[MODE] - - # Skip forward to start of image data - while s and not s.startswith(b"\x1a"): - s = self.fp.read(1) - if not s: - msg = "File truncated" - raise SyntaxError(msg) - - if LUT in self.info: - # convert lookup table to palette or lut attribute - palette = self.fp.read(768) - greyscale = 1 # greyscale palette - linear = 1 # linear greyscale palette - for i in range(256): - if palette[i] == palette[i + 256] == palette[i + 512]: - if palette[i] != i: - linear = 0 - else: - greyscale = 0 - if self.mode in ["L", "LA", "P", "PA"]: - if greyscale: - if not linear: - self.lut = list(palette[:256]) - else: - if self.mode in ["L", "P"]: - self._mode = self.rawmode = "P" - elif self.mode in ["LA", "PA"]: - self._mode = "PA" - self.rawmode = "PA;L" - self.palette = ImagePalette.raw("RGB;L", palette) - elif self.mode == "RGB": - if not greyscale or not linear: - self.lut = list(palette) - - self.frame = 0 - - self.__offset = offs = self.fp.tell() - - self._fp = self.fp # FIXME: hack - - if self.rawmode.startswith("F;"): - # ifunc95 formats - try: - # use bit decoder (if necessary) - bits = int(self.rawmode[2:]) - if bits not in [8, 16, 32]: - self.tile = [ - ImageFile._Tile( - "bit", (0, 0) + self.size, offs, (bits, 8, 3, 0, -1) - ) - ] - return - except ValueError: - pass - - if self.rawmode in ["RGB;T", "RYB;T"]: - # Old LabEye/3PC files. 
Would be very surprised if anyone - # ever stumbled upon such a file ;-) - size = self.size[0] * self.size[1] - self.tile = [ - ImageFile._Tile("raw", (0, 0) + self.size, offs, ("G", 0, -1)), - ImageFile._Tile("raw", (0, 0) + self.size, offs + size, ("R", 0, -1)), - ImageFile._Tile( - "raw", (0, 0) + self.size, offs + 2 * size, ("B", 0, -1) - ), - ] - else: - # LabEye/IFUNC files - self.tile = [ - ImageFile._Tile("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1)) - ] - - @property - def n_frames(self) -> int: - return self.info[FRAMES] - - @property - def is_animated(self) -> bool: - return self.info[FRAMES] > 1 - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - if isinstance(self._fp, DeferredError): - raise self._fp.ex - - self.frame = frame - - if self.mode == "1": - bits = 1 - else: - bits = 8 * len(self.mode) - - size = ((self.size[0] * bits + 7) // 8) * self.size[1] - offs = self.__offset + frame * size - - self.fp = self._fp - - self.tile = [ - ImageFile._Tile("raw", (0, 0) + self.size, offs, (self.rawmode, 0, -1)) - ] - - def tell(self) -> int: - return self.frame - - -# -# -------------------------------------------------------------------- -# Save IM files - - -SAVE = { - # mode: (im type, raw mode) - "1": ("0 1", "1"), - "L": ("Greyscale", "L"), - "LA": ("LA", "LA;L"), - "P": ("Greyscale", "P"), - "PA": ("LA", "PA;L"), - "I": ("L 32S", "I;32S"), - "I;16": ("L 16", "I;16"), - "I;16L": ("L 16L", "I;16L"), - "I;16B": ("L 16B", "I;16B"), - "F": ("L 32F", "F;32F"), - "RGB": ("RGB", "RGB;L"), - "RGBA": ("RGBA", "RGBA;L"), - "RGBX": ("RGBX", "RGBX;L"), - "CMYK": ("CMYK", "CMYK;L"), - "YCbCr": ("YCC", "YCbCr;L"), -} - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - try: - image_type, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as IM" - raise ValueError(msg) from e - - frames = im.encoderinfo.get("frames", 1) - - fp.write(f"Image type: {image_type} 
image\r\n".encode("ascii")) - if filename: - # Each line must be 100 characters or less, - # or: SyntaxError("not an IM file") - # 8 characters are used for "Name: " and "\r\n" - # Keep just the filename, ditch the potentially overlong path - if isinstance(filename, bytes): - filename = filename.decode("ascii") - name, ext = os.path.splitext(os.path.basename(filename)) - name = "".join([name[: 92 - len(ext)], ext]) - - fp.write(f"Name: {name}\r\n".encode("ascii")) - fp.write(f"Image size (x*y): {im.size[0]}*{im.size[1]}\r\n".encode("ascii")) - fp.write(f"File size (no of images): {frames}\r\n".encode("ascii")) - if im.mode in ["P", "PA"]: - fp.write(b"Lut: 1\r\n") - fp.write(b"\000" * (511 - fp.tell()) + b"\032") - if im.mode in ["P", "PA"]: - im_palette = im.im.getpalette("RGB", "RGB;L") - colors = len(im_palette) // 3 - palette = b"" - for i in range(3): - palette += im_palette[colors * i : colors * (i + 1)] - palette += b"\x00" * (256 - colors) - fp.write(palette) # 768 bytes - ImageFile._save( - im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, 0, -1))] - ) - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(ImImageFile.format, ImImageFile) -Image.register_save(ImImageFile.format, _save) - -Image.register_extension(ImImageFile.format, ".im") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/Image.py b/pptx-env/lib/python3.12/site-packages/PIL/Image.py deleted file mode 100644 index 9d50812e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/Image.py +++ /dev/null @@ -1,4227 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# the Image class wrapper -# -# partial release history: -# 1995-09-09 fl Created -# 1996-03-11 fl PIL release 0.0 (proof of concept) -# 1996-04-30 fl PIL release 0.1b1 -# 1999-07-28 fl PIL release 1.0 final -# 2000-06-07 fl PIL release 1.1 -# 2000-10-20 fl PIL release 1.1.1 -# 2001-05-07 fl PIL release 1.1.2 -# 2002-03-15 fl PIL release 1.1.3 -# 2003-05-10 fl PIL release 1.1.4 -# 2005-03-28 fl PIL release 1.1.5 -# 2006-12-02 fl PIL release 1.1.6 -# 2009-11-15 fl PIL release 1.1.7 -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-2009 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -from __future__ import annotations - -import abc -import atexit -import builtins -import io -import logging -import math -import os -import re -import struct -import sys -import tempfile -import warnings -from collections.abc import MutableMapping -from enum import IntEnum -from typing import IO, Protocol, cast - -# VERSION was removed in Pillow 6.0.0. -# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. -from . 
import ( - ExifTags, - ImageMode, - TiffTags, - UnidentifiedImageError, - __version__, - _plugins, -) -from ._binary import i32le, o32be, o32le -from ._deprecate import deprecate -from ._util import DeferredError, is_path - -ElementTree: ModuleType | None -try: - from defusedxml import ElementTree -except ImportError: - ElementTree = None - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable, Iterator, Sequence - from types import ModuleType - from typing import Any, Literal - -logger = logging.getLogger(__name__) - - -class DecompressionBombWarning(RuntimeWarning): - pass - - -class DecompressionBombError(Exception): - pass - - -WARN_POSSIBLE_FORMATS: bool = False - -# Limit to around a quarter gigabyte for a 24-bit (3 bpp) image -MAX_IMAGE_PIXELS: int | None = int(1024 * 1024 * 1024 // 4 // 3) - - -try: - # If the _imaging C module is not present, Pillow will not load. - # Note that other modules should not refer to _imaging directly; - # import Image and use the Image.core variable instead. - # Also note that Image.core is not a publicly documented interface, - # and should be considered private and subject to change. - from . import _imaging as core - - if __version__ != getattr(core, "PILLOW_VERSION", None): - msg = ( - "The _imaging extension was built for another version of Pillow or PIL:\n" - f"Core version: {getattr(core, 'PILLOW_VERSION', None)}\n" - f"Pillow version: {__version__}" - ) - raise ImportError(msg) - -except ImportError as v: - # Explanations for ways that we know we might have an import error - if str(v).startswith("Module use of python"): - # The _imaging C module is present, but not compiled for - # the right version (windows only). Print a warning, if - # possible. - warnings.warn( - "The _imaging extension was built for another version of Python.", - RuntimeWarning, - ) - elif str(v).startswith("The _imaging extension"): - warnings.warn(str(v), RuntimeWarning) - # Fail here anyway. 
Don't let people run with a mostly broken Pillow. - # see docs/porting.rst - raise - - -# -# Constants - - -# transpose -class Transpose(IntEnum): - FLIP_LEFT_RIGHT = 0 - FLIP_TOP_BOTTOM = 1 - ROTATE_90 = 2 - ROTATE_180 = 3 - ROTATE_270 = 4 - TRANSPOSE = 5 - TRANSVERSE = 6 - - -# transforms (also defined in Imaging.h) -class Transform(IntEnum): - AFFINE = 0 - EXTENT = 1 - PERSPECTIVE = 2 - QUAD = 3 - MESH = 4 - - -# resampling filters (also defined in Imaging.h) -class Resampling(IntEnum): - NEAREST = 0 - BOX = 4 - BILINEAR = 2 - HAMMING = 5 - BICUBIC = 3 - LANCZOS = 1 - - -_filters_support = { - Resampling.BOX: 0.5, - Resampling.BILINEAR: 1.0, - Resampling.HAMMING: 1.0, - Resampling.BICUBIC: 2.0, - Resampling.LANCZOS: 3.0, -} - - -# dithers -class Dither(IntEnum): - NONE = 0 - ORDERED = 1 # Not yet implemented - RASTERIZE = 2 # Not yet implemented - FLOYDSTEINBERG = 3 # default - - -# palettes/quantizers -class Palette(IntEnum): - WEB = 0 - ADAPTIVE = 1 - - -class Quantize(IntEnum): - MEDIANCUT = 0 - MAXCOVERAGE = 1 - FASTOCTREE = 2 - LIBIMAGEQUANT = 3 - - -module = sys.modules[__name__] -for enum in (Transpose, Transform, Resampling, Dither, Palette, Quantize): - for item in enum: - setattr(module, item.name, item.value) - - -if hasattr(core, "DEFAULT_STRATEGY"): - DEFAULT_STRATEGY = core.DEFAULT_STRATEGY - FILTERED = core.FILTERED - HUFFMAN_ONLY = core.HUFFMAN_ONLY - RLE = core.RLE - FIXED = core.FIXED - - -# -------------------------------------------------------------------- -# Registries - -TYPE_CHECKING = False -if TYPE_CHECKING: - import mmap - from xml.etree.ElementTree import Element - - from IPython.lib.pretty import PrettyPrinter - - from . 
import ImageFile, ImageFilter, ImagePalette, ImageQt, TiffImagePlugin - from ._typing import CapsuleType, NumpyArray, StrOrBytesPath -ID: list[str] = [] -OPEN: dict[ - str, - tuple[ - Callable[[IO[bytes], str | bytes], ImageFile.ImageFile], - Callable[[bytes], bool | str] | None, - ], -] = {} -MIME: dict[str, str] = {} -SAVE: dict[str, Callable[[Image, IO[bytes], str | bytes], None]] = {} -SAVE_ALL: dict[str, Callable[[Image, IO[bytes], str | bytes], None]] = {} -EXTENSION: dict[str, str] = {} -DECODERS: dict[str, type[ImageFile.PyDecoder]] = {} -ENCODERS: dict[str, type[ImageFile.PyEncoder]] = {} - -# -------------------------------------------------------------------- -# Modes - -_ENDIAN = "<" if sys.byteorder == "little" else ">" - - -def _conv_type_shape(im: Image) -> tuple[tuple[int, ...], str]: - m = ImageMode.getmode(im.mode) - shape: tuple[int, ...] = (im.height, im.width) - extra = len(m.bands) - if extra != 1: - shape += (extra,) - return shape, m.typestr - - -MODES = [ - "1", - "CMYK", - "F", - "HSV", - "I", - "I;16", - "I;16B", - "I;16L", - "I;16N", - "L", - "LA", - "La", - "LAB", - "P", - "PA", - "RGB", - "RGBA", - "RGBa", - "RGBX", - "YCbCr", -] - -# raw modes that may be memory mapped. NOTE: if you change this, you -# may have to modify the stride calculation in map.c too! -_MAPMODES = ("L", "P", "RGBX", "RGBA", "CMYK", "I;16", "I;16L", "I;16B") - - -def getmodebase(mode: str) -> str: - """ - Gets the "base" mode for given mode. This function returns "L" for - images that contain grayscale data, and "RGB" for images that - contain color data. - - :param mode: Input mode. - :returns: "L" or "RGB". - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basemode - - -def getmodetype(mode: str) -> str: - """ - Gets the storage type mode. Given a mode, this function returns a - single-layer mode suitable for storing individual bands. - - :param mode: Input mode. - :returns: "L", "I", or "F". 
- :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).basetype - - -def getmodebandnames(mode: str) -> tuple[str, ...]: - """ - Gets a list of individual band names. Given a mode, this function returns - a tuple containing the names of individual bands (use - :py:method:`~PIL.Image.getmodetype` to get the mode used to store each - individual band. - - :param mode: Input mode. - :returns: A tuple containing band names. The length of the tuple - gives the number of bands in an image of the given mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return ImageMode.getmode(mode).bands - - -def getmodebands(mode: str) -> int: - """ - Gets the number of individual bands for this mode. - - :param mode: Input mode. - :returns: The number of bands in this mode. - :exception KeyError: If the input mode was not a standard mode. - """ - return len(ImageMode.getmode(mode).bands) - - -# -------------------------------------------------------------------- -# Helpers - -_initialized = 0 - - -def preinit() -> None: - """ - Explicitly loads BMP, GIF, JPEG, PPM and PPM file format drivers. - - It is called when opening or saving images. - """ - - global _initialized - if _initialized >= 1: - return - - try: - from . import BmpImagePlugin - - assert BmpImagePlugin - except ImportError: - pass - try: - from . import GifImagePlugin - - assert GifImagePlugin - except ImportError: - pass - try: - from . import JpegImagePlugin - - assert JpegImagePlugin - except ImportError: - pass - try: - from . import PpmImagePlugin - - assert PpmImagePlugin - except ImportError: - pass - try: - from . import PngImagePlugin - - assert PngImagePlugin - except ImportError: - pass - - _initialized = 1 - - -def init() -> bool: - """ - Explicitly initializes the Python Imaging Library. This function - loads all available file format drivers. 
- - It is called when opening or saving images if :py:meth:`~preinit()` is - insufficient, and by :py:meth:`~PIL.features.pilinfo`. - """ - - global _initialized - if _initialized >= 2: - return False - - parent_name = __name__.rpartition(".")[0] - for plugin in _plugins: - try: - logger.debug("Importing %s", plugin) - __import__(f"{parent_name}.{plugin}", globals(), locals(), []) - except ImportError as e: - logger.debug("Image: failed to import %s: %s", plugin, e) - - if OPEN or SAVE: - _initialized = 2 - return True - return False - - -# -------------------------------------------------------------------- -# Codec factories (used by tobytes/frombytes and ImageFile.load) - - -def _getdecoder( - mode: str, decoder_name: str, args: Any, extra: tuple[Any, ...] = () -) -> core.ImagingDecoder | ImageFile.PyDecoder: - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - decoder = DECODERS[decoder_name] - except KeyError: - pass - else: - return decoder(mode, *args + extra) - - try: - # get decoder - decoder = getattr(core, f"{decoder_name}_decoder") - except AttributeError as e: - msg = f"decoder {decoder_name} not available" - raise OSError(msg) from e - return decoder(mode, *args + extra) - - -def _getencoder( - mode: str, encoder_name: str, args: Any, extra: tuple[Any, ...] 
= () -) -> core.ImagingEncoder | ImageFile.PyEncoder: - # tweak arguments - if args is None: - args = () - elif not isinstance(args, tuple): - args = (args,) - - try: - encoder = ENCODERS[encoder_name] - except KeyError: - pass - else: - return encoder(mode, *args + extra) - - try: - # get encoder - encoder = getattr(core, f"{encoder_name}_encoder") - except AttributeError as e: - msg = f"encoder {encoder_name} not available" - raise OSError(msg) from e - return encoder(mode, *args + extra) - - -# -------------------------------------------------------------------- -# Simple expression analyzer - - -class ImagePointTransform: - """ - Used with :py:meth:`~PIL.Image.Image.point` for single band images with more than - 8 bits, this represents an affine transformation, where the value is multiplied by - ``scale`` and ``offset`` is added. - """ - - def __init__(self, scale: float, offset: float) -> None: - self.scale = scale - self.offset = offset - - def __neg__(self) -> ImagePointTransform: - return ImagePointTransform(-self.scale, -self.offset) - - def __add__(self, other: ImagePointTransform | float) -> ImagePointTransform: - if isinstance(other, ImagePointTransform): - return ImagePointTransform( - self.scale + other.scale, self.offset + other.offset - ) - return ImagePointTransform(self.scale, self.offset + other) - - __radd__ = __add__ - - def __sub__(self, other: ImagePointTransform | float) -> ImagePointTransform: - return self + -other - - def __rsub__(self, other: ImagePointTransform | float) -> ImagePointTransform: - return other + -self - - def __mul__(self, other: ImagePointTransform | float) -> ImagePointTransform: - if isinstance(other, ImagePointTransform): - return NotImplemented - return ImagePointTransform(self.scale * other, self.offset * other) - - __rmul__ = __mul__ - - def __truediv__(self, other: ImagePointTransform | float) -> ImagePointTransform: - if isinstance(other, ImagePointTransform): - return NotImplemented - return 
ImagePointTransform(self.scale / other, self.offset / other) - - -def _getscaleoffset( - expr: Callable[[ImagePointTransform], ImagePointTransform | float], -) -> tuple[float, float]: - a = expr(ImagePointTransform(1, 0)) - return (a.scale, a.offset) if isinstance(a, ImagePointTransform) else (0, a) - - -# -------------------------------------------------------------------- -# Implementation wrapper - - -class SupportsGetData(Protocol): - def getdata( - self, - ) -> tuple[Transform, Sequence[int]]: ... - - -class Image: - """ - This class represents an image object. To create - :py:class:`~PIL.Image.Image` objects, use the appropriate factory - functions. There's hardly ever any reason to call the Image constructor - directly. - - * :py:func:`~PIL.Image.open` - * :py:func:`~PIL.Image.new` - * :py:func:`~PIL.Image.frombytes` - """ - - format: str | None = None - format_description: str | None = None - _close_exclusive_fp_after_loading = True - - def __init__(self) -> None: - # FIXME: take "new" parameters / other image? 
- self._im: core.ImagingCore | DeferredError | None = None - self._mode = "" - self._size = (0, 0) - self.palette: ImagePalette.ImagePalette | None = None - self.info: dict[str | tuple[int, int], Any] = {} - self.readonly = 0 - self._exif: Exif | None = None - - @property - def im(self) -> core.ImagingCore: - if isinstance(self._im, DeferredError): - raise self._im.ex - assert self._im is not None - return self._im - - @im.setter - def im(self, im: core.ImagingCore) -> None: - self._im = im - - @property - def width(self) -> int: - return self.size[0] - - @property - def height(self) -> int: - return self.size[1] - - @property - def size(self) -> tuple[int, int]: - return self._size - - @property - def mode(self) -> str: - return self._mode - - @property - def readonly(self) -> int: - return (self._im and self._im.readonly) or self._readonly - - @readonly.setter - def readonly(self, readonly: int) -> None: - self._readonly = readonly - - def _new(self, im: core.ImagingCore) -> Image: - new = Image() - new.im = im - new._mode = im.mode - new._size = im.size - if im.mode in ("P", "PA"): - if self.palette: - new.palette = self.palette.copy() - else: - from . import ImagePalette - - new.palette = ImagePalette.ImagePalette() - new.info = self.info.copy() - return new - - # Context manager support - def __enter__(self): - return self - - def __exit__(self, *args): - from . import ImageFile - - if isinstance(self, ImageFile.ImageFile): - if getattr(self, "_exclusive_fp", False): - self._close_fp() - self.fp = None - - def close(self) -> None: - """ - This operation will destroy the image core and release its memory. - The image data will be unusable afterward. - - This function is required to close images that have multiple frames or - have not had their file read and closed by the - :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for - more information. 
- """ - if getattr(self, "map", None): - if sys.platform == "win32" and hasattr(sys, "pypy_version_info"): - self.map.close() - self.map: mmap.mmap | None = None - - # Instead of simply setting to None, we're setting up a - # deferred error that will better explain that the core image - # object is gone. - self._im = DeferredError(ValueError("Operation on closed image")) - - def _copy(self) -> None: - self.load() - self.im = self.im.copy() - self.readonly = 0 - - def _ensure_mutable(self) -> None: - if self.readonly: - self._copy() - else: - self.load() - - def _dump( - self, file: str | None = None, format: str | None = None, **options: Any - ) -> str: - suffix = "" - if format: - suffix = f".{format}" - - if not file: - f, filename = tempfile.mkstemp(suffix) - os.close(f) - else: - filename = file - if not filename.endswith(suffix): - filename = filename + suffix - - self.load() - - if not format or format == "PPM": - self.im.save_ppm(filename) - else: - self.save(filename, format, **options) - - return filename - - def __eq__(self, other: object) -> bool: - if self.__class__ is not other.__class__: - return False - assert isinstance(other, Image) - return ( - self.mode == other.mode - and self.size == other.size - and self.info == other.info - and self.getpalette() == other.getpalette() - and self.tobytes() == other.tobytes() - ) - - def __repr__(self) -> str: - return ( - f"<{self.__class__.__module__}.{self.__class__.__name__} " - f"image mode={self.mode} size={self.size[0]}x{self.size[1]} " - f"at 0x{id(self):X}>" - ) - - def _repr_pretty_(self, p: PrettyPrinter, cycle: bool) -> None: - """IPython plain text display support""" - - # Same as __repr__ but without unpredictable id(self), - # to keep Jupyter notebook `text/plain` output stable. 
- p.text( - f"<{self.__class__.__module__}.{self.__class__.__name__} " - f"image mode={self.mode} size={self.size[0]}x{self.size[1]}>" - ) - - def _repr_image(self, image_format: str, **kwargs: Any) -> bytes | None: - """Helper function for iPython display hook. - - :param image_format: Image format. - :returns: image as bytes, saved into the given format. - """ - b = io.BytesIO() - try: - self.save(b, image_format, **kwargs) - except Exception: - return None - return b.getvalue() - - def _repr_png_(self) -> bytes | None: - """iPython display hook support for PNG format. - - :returns: PNG version of the image as bytes - """ - return self._repr_image("PNG", compress_level=1) - - def _repr_jpeg_(self) -> bytes | None: - """iPython display hook support for JPEG format. - - :returns: JPEG version of the image as bytes - """ - return self._repr_image("JPEG") - - @property - def __array_interface__(self) -> dict[str, str | bytes | int | tuple[int, ...]]: - # numpy array interface support - new: dict[str, str | bytes | int | tuple[int, ...]] = {"version": 3} - if self.mode == "1": - # Binary images need to be extended from bits to bytes - # See: https://github.com/python-pillow/Pillow/issues/350 - new["data"] = self.tobytes("raw", "L") - else: - new["data"] = self.tobytes() - new["shape"], new["typestr"] = _conv_type_shape(self) - return new - - def __arrow_c_schema__(self) -> object: - self.load() - return self.im.__arrow_c_schema__() - - def __arrow_c_array__( - self, requested_schema: object | None = None - ) -> tuple[object, object]: - self.load() - return (self.im.__arrow_c_schema__(), self.im.__arrow_c_array__()) - - def __getstate__(self) -> list[Any]: - im_data = self.tobytes() # load image first - return [self.info, self.mode, self.size, self.getpalette(), im_data] - - def __setstate__(self, state: list[Any]) -> None: - Image.__init__(self) - info, mode, size, palette, data = state[:5] - self.info = info - self._mode = mode - self._size = size - self.im = 
core.new(mode, size) - if mode in ("L", "LA", "P", "PA") and palette: - self.putpalette(palette) - self.frombytes(data) - - def tobytes(self, encoder_name: str = "raw", *args: Any) -> bytes: - """ - Return image as a bytes object. - - .. warning:: - - This method returns raw image data derived from Pillow's internal - storage. For compressed image data (e.g. PNG, JPEG) use - :meth:`~.save`, with a BytesIO parameter for in-memory data. - - :param encoder_name: What encoder to use. - - The default is to use the standard "raw" encoder. - To see how this packs pixel data into the returned - bytes, see :file:`libImaging/Pack.c`. - - A list of C encoders can be seen under codecs - section of the function array in - :file:`_imaging.c`. Python encoders are registered - within the relevant plugins. - :param args: Extra arguments to the encoder. - :returns: A :py:class:`bytes` object. - """ - - encoder_args: Any = args - if len(encoder_args) == 1 and isinstance(encoder_args[0], tuple): - # may pass tuple instead of argument list - encoder_args = encoder_args[0] - - if encoder_name == "raw" and encoder_args == (): - encoder_args = self.mode - - self.load() - - if self.width == 0 or self.height == 0: - return b"" - - # unpack data - e = _getencoder(self.mode, encoder_name, encoder_args) - e.setimage(self.im) - - from . import ImageFile - - bufsize = max(ImageFile.MAXBLOCK, self.size[0] * 4) # see RawEncode.c - - output = [] - while True: - bytes_consumed, errcode, data = e.encode(bufsize) - output.append(data) - if errcode: - break - if errcode < 0: - msg = f"encoder error {errcode} in tobytes" - raise RuntimeError(msg) - - return b"".join(output) - - def tobitmap(self, name: str = "image") -> bytes: - """ - Returns the image converted to an X11 bitmap. - - .. note:: This method only works for mode "1" images. - - :param name: The name prefix to use for the bitmap variables. - :returns: A string containing an X11 bitmap. 
- :raises ValueError: If the mode is not "1" - """ - - self.load() - if self.mode != "1": - msg = "not a bitmap" - raise ValueError(msg) - data = self.tobytes("xbm") - return b"".join( - [ - f"#define {name}_width {self.size[0]}\n".encode("ascii"), - f"#define {name}_height {self.size[1]}\n".encode("ascii"), - f"static char {name}_bits[] = {{\n".encode("ascii"), - data, - b"};", - ] - ) - - def frombytes( - self, - data: bytes | bytearray | SupportsArrayInterface, - decoder_name: str = "raw", - *args: Any, - ) -> None: - """ - Loads this image with pixel data from a bytes object. - - This method is similar to the :py:func:`~PIL.Image.frombytes` function, - but loads data into this image instead of creating a new image object. - """ - - if self.width == 0 or self.height == 0: - return - - decoder_args: Any = args - if len(decoder_args) == 1 and isinstance(decoder_args[0], tuple): - # may pass tuple instead of argument list - decoder_args = decoder_args[0] - - # default format - if decoder_name == "raw" and decoder_args == (): - decoder_args = self.mode - - # unpack data - d = _getdecoder(self.mode, decoder_name, decoder_args) - d.setimage(self.im) - s = d.decode(data) - - if s[0] >= 0: - msg = "not enough image data" - raise ValueError(msg) - if s[1] != 0: - msg = "cannot decode image data" - raise ValueError(msg) - - def load(self) -> core.PixelAccess | None: - """ - Allocates storage for the image and loads the pixel data. In - normal cases, you don't need to call this method, since the - Image class automatically loads an opened image when it is - accessed for the first time. - - If the file associated with the image was opened by Pillow, then this - method will close it. The exception to this is if the image has - multiple frames, in which case the file will be left open for seek - operations. See :ref:`file-handling` for more information. - - :returns: An image access object. 
- :rtype: :py:class:`.PixelAccess` - """ - if self._im is not None and self.palette and self.palette.dirty: - # realize palette - mode, arr = self.palette.getdata() - self.im.putpalette(self.palette.mode, mode, arr) - self.palette.dirty = 0 - self.palette.rawmode = None - if "transparency" in self.info and mode in ("LA", "PA"): - if isinstance(self.info["transparency"], int): - self.im.putpalettealpha(self.info["transparency"], 0) - else: - self.im.putpalettealphas(self.info["transparency"]) - self.palette.mode = "RGBA" - else: - self.palette.palette = self.im.getpalette( - self.palette.mode, self.palette.mode - ) - - if self._im is not None: - return self.im.pixel_access(self.readonly) - return None - - def verify(self) -> None: - """ - Verifies the contents of a file. For data read from a file, this - method attempts to determine if the file is broken, without - actually decoding the image data. If this method finds any - problems, it raises suitable exceptions. If you need to load - the image after using this method, you must reopen the image - file. - """ - pass - - def convert( - self, - mode: str | None = None, - matrix: tuple[float, ...] | None = None, - dither: Dither | None = None, - palette: Palette = Palette.WEB, - colors: int = 256, - ) -> Image: - """ - Returns a converted copy of this image. For the "P" mode, this - method translates pixels through the palette. If mode is - omitted, a mode is chosen so that all information in the image - and the palette can be represented without a palette. - - This supports all possible conversions between "L", "RGB" and "CMYK". The - ``matrix`` argument only supports "L" and "RGB". 
- - When translating a color image to grayscale (mode "L"), - the library uses the ITU-R 601-2 luma transform:: - - L = R * 299/1000 + G * 587/1000 + B * 114/1000 - - The default method of converting a grayscale ("L") or "RGB" - image into a bilevel (mode "1") image uses Floyd-Steinberg - dither to approximate the original image luminosity levels. If - dither is ``None``, all values larger than 127 are set to 255 (white), - all other values to 0 (black). To use other thresholds, use the - :py:meth:`~PIL.Image.Image.point` method. - - When converting from "RGBA" to "P" without a ``matrix`` argument, - this passes the operation to :py:meth:`~PIL.Image.Image.quantize`, - and ``dither`` and ``palette`` are ignored. - - When converting from "PA", if an "RGBA" palette is present, the alpha - channel from the image will be used instead of the values from the palette. - - :param mode: The requested mode. See: :ref:`concept-modes`. - :param matrix: An optional conversion matrix. If given, this - should be 4- or 12-tuple containing floating point values. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). Note that this is not used when ``matrix`` is supplied. - :param palette: Palette to use when converting from mode "RGB" - to "P". Available palettes are :data:`Palette.WEB` or - :data:`Palette.ADAPTIVE`. - :param colors: Number of colors to use for the :data:`Palette.ADAPTIVE` - palette. Defaults to 256. - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. 
- """ - - self.load() - - has_transparency = "transparency" in self.info - if not mode and self.mode == "P": - # determine default mode - if self.palette: - mode = self.palette.mode - else: - mode = "RGB" - if mode == "RGB" and has_transparency: - mode = "RGBA" - if not mode or (mode == self.mode and not matrix): - return self.copy() - - if matrix: - # matrix conversion - if mode not in ("L", "RGB"): - msg = "illegal conversion" - raise ValueError(msg) - im = self.im.convert_matrix(mode, matrix) - new_im = self._new(im) - if has_transparency and self.im.bands == 3: - transparency = new_im.info["transparency"] - - def convert_transparency( - m: tuple[float, ...], v: tuple[int, int, int] - ) -> int: - value = m[0] * v[0] + m[1] * v[1] + m[2] * v[2] + m[3] * 0.5 - return max(0, min(255, int(value))) - - if mode == "L": - transparency = convert_transparency(matrix, transparency) - elif len(mode) == 3: - transparency = tuple( - convert_transparency(matrix[i * 4 : i * 4 + 4], transparency) - for i in range(len(transparency)) - ) - new_im.info["transparency"] = transparency - return new_im - - if self.mode == "RGBA": - if mode == "P": - return self.quantize(colors) - elif mode == "PA": - r, g, b, a = self.split() - rgb = merge("RGB", (r, g, b)) - p = rgb.quantize(colors) - return merge("PA", (p, a)) - - trns = None - delete_trns = False - # transparency handling - if has_transparency: - if (self.mode in ("1", "L", "I", "I;16") and mode in ("LA", "RGBA")) or ( - self.mode == "RGB" and mode in ("La", "LA", "RGBa", "RGBA") - ): - # Use transparent conversion to promote from transparent - # color to an alpha channel. - new_im = self._new( - self.im.convert_transparent(mode, self.info["transparency"]) - ) - del new_im.info["transparency"] - return new_im - elif self.mode in ("L", "RGB", "P") and mode in ("L", "RGB", "P"): - t = self.info["transparency"] - if isinstance(t, bytes): - # Dragons. 
This can't be represented by a single color - warnings.warn( - "Palette images with Transparency expressed in bytes should be " - "converted to RGBA images" - ) - delete_trns = True - else: - # get the new transparency color. - # use existing conversions - trns_im = new(self.mode, (1, 1)) - if self.mode == "P": - assert self.palette is not None - trns_im.putpalette(self.palette, self.palette.mode) - if isinstance(t, tuple): - err = "Couldn't allocate a palette color for transparency" - assert trns_im.palette is not None - try: - t = trns_im.palette.getcolor(t, self) - except ValueError as e: - if str(e) == "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - t = None - else: - raise ValueError(err) from e - if t is None: - trns = None - else: - trns_im.putpixel((0, 0), t) - - if mode in ("L", "RGB"): - trns_im = trns_im.convert(mode) - else: - # can't just retrieve the palette number, got to do it - # after quantization. - trns_im = trns_im.convert("RGB") - trns = trns_im.getpixel((0, 0)) - - elif self.mode == "P" and mode in ("LA", "PA", "RGBA"): - t = self.info["transparency"] - delete_trns = True - - if isinstance(t, bytes): - self.im.putpalettealphas(t) - elif isinstance(t, int): - self.im.putpalettealpha(t, 0) - else: - msg = "Transparency for P mode should be bytes or int" - raise ValueError(msg) - - if mode == "P" and palette == Palette.ADAPTIVE: - im = self.im.quantize(colors) - new_im = self._new(im) - from . import ImagePalette - - new_im.palette = ImagePalette.ImagePalette( - "RGB", new_im.im.getpalette("RGB") - ) - if delete_trns: - # This could possibly happen if we requantize to fewer colors. - # The transparency would be totally off in that case. 
- del new_im.info["transparency"] - if trns is not None: - try: - new_im.info["transparency"] = new_im.palette.getcolor( - cast(tuple[int, ...], trns), # trns was converted to RGB - new_im, - ) - except Exception: - # if we can't make a transparent color, don't leave the old - # transparency hanging around to mess us up. - del new_im.info["transparency"] - warnings.warn("Couldn't allocate palette entry for transparency") - return new_im - - if "LAB" in (self.mode, mode): - im = self - if mode == "LAB": - if im.mode not in ("RGB", "RGBA", "RGBX"): - im = im.convert("RGBA") - other_mode = im.mode - else: - other_mode = mode - if other_mode in ("RGB", "RGBA", "RGBX"): - from . import ImageCms - - srgb = ImageCms.createProfile("sRGB") - lab = ImageCms.createProfile("LAB") - profiles = [lab, srgb] if im.mode == "LAB" else [srgb, lab] - transform = ImageCms.buildTransform( - profiles[0], profiles[1], im.mode, mode - ) - return transform.apply(im) - - # colorspace conversion - if dither is None: - dither = Dither.FLOYDSTEINBERG - - try: - im = self.im.convert(mode, dither) - except ValueError: - try: - # normalize source image and try again - modebase = getmodebase(self.mode) - if modebase == self.mode: - raise - im = self.im.convert(modebase) - im = im.convert(mode, dither) - except KeyError as e: - msg = "illegal conversion" - raise ValueError(msg) from e - - new_im = self._new(im) - if mode in ("P", "PA") and palette != Palette.ADAPTIVE: - from . import ImagePalette - - new_im.palette = ImagePalette.ImagePalette("RGB", im.getpalette("RGB")) - if delete_trns: - # crash fail if we leave a bytes transparency in an rgb/l mode. 
- del new_im.info["transparency"] - if trns is not None: - if new_im.mode == "P" and new_im.palette: - try: - new_im.info["transparency"] = new_im.palette.getcolor( - cast(tuple[int, ...], trns), new_im # trns was converted to RGB - ) - except ValueError as e: - del new_im.info["transparency"] - if str(e) != "cannot allocate more than 256 colors": - # If all 256 colors are in use, - # then there is no need for transparency - warnings.warn( - "Couldn't allocate palette entry for transparency" - ) - else: - new_im.info["transparency"] = trns - return new_im - - def quantize( - self, - colors: int = 256, - method: int | None = None, - kmeans: int = 0, - palette: Image | None = None, - dither: Dither = Dither.FLOYDSTEINBERG, - ) -> Image: - """ - Convert the image to 'P' mode with the specified number - of colors. - - :param colors: The desired number of colors, <= 256 - :param method: :data:`Quantize.MEDIANCUT` (median cut), - :data:`Quantize.MAXCOVERAGE` (maximum coverage), - :data:`Quantize.FASTOCTREE` (fast octree), - :data:`Quantize.LIBIMAGEQUANT` (libimagequant; check support - using :py:func:`PIL.features.check_feature` with - ``feature="libimagequant"``). - - By default, :data:`Quantize.MEDIANCUT` will be used. - - The exception to this is RGBA images. :data:`Quantize.MEDIANCUT` - and :data:`Quantize.MAXCOVERAGE` do not support RGBA images, so - :data:`Quantize.FASTOCTREE` is used by default instead. - :param kmeans: Integer greater than or equal to zero. - :param palette: Quantize to the palette of given - :py:class:`PIL.Image.Image`. - :param dither: Dithering method, used when converting from - mode "RGB" to "P" or from "RGB" or "L" to "1". - Available methods are :data:`Dither.NONE` or :data:`Dither.FLOYDSTEINBERG` - (default). 
- :returns: A new image - """ - - self.load() - - if method is None: - # defaults: - method = Quantize.MEDIANCUT - if self.mode == "RGBA": - method = Quantize.FASTOCTREE - - if self.mode == "RGBA" and method not in ( - Quantize.FASTOCTREE, - Quantize.LIBIMAGEQUANT, - ): - # Caller specified an invalid mode. - msg = ( - "Fast Octree (method == 2) and libimagequant (method == 3) " - "are the only valid methods for quantizing RGBA images" - ) - raise ValueError(msg) - - if palette: - # use palette from reference image - palette.load() - if palette.mode != "P": - msg = "bad mode for palette image" - raise ValueError(msg) - if self.mode not in {"RGB", "L"}: - msg = "only RGB or L mode images can be quantized to a palette" - raise ValueError(msg) - im = self.im.convert("P", dither, palette.im) - new_im = self._new(im) - assert palette.palette is not None - new_im.palette = palette.palette.copy() - return new_im - - if kmeans < 0: - msg = "kmeans must not be negative" - raise ValueError(msg) - - im = self._new(self.im.quantize(colors, method, kmeans)) - - from . import ImagePalette - - mode = im.im.getpalettemode() - palette_data = im.im.getpalette(mode, mode)[: colors * len(mode)] - im.palette = ImagePalette.ImagePalette(mode, palette_data) - - return im - - def copy(self) -> Image: - """ - Copies this image. Use this method if you wish to paste things - into an image, but still retain the original. - - :rtype: :py:class:`~PIL.Image.Image` - :returns: An :py:class:`~PIL.Image.Image` object. - """ - self.load() - return self._new(self.im.copy()) - - __copy__ = copy - - def crop(self, box: tuple[float, float, float, float] | None = None) -> Image: - """ - Returns a rectangular region from this image. The box is a - 4-tuple defining the left, upper, right, and lower pixel - coordinate. See :ref:`coordinate-system`. - - Note: Prior to Pillow 3.4.0, this was a lazy operation. - - :param box: The crop rectangle, as a (left, upper, right, lower)-tuple. 
-        :rtype: :py:class:`~PIL.Image.Image`
-        :returns: An :py:class:`~PIL.Image.Image` object.
-        """
-
-        if box is None:
-            return self.copy()
-
-        if box[2] < box[0]:
-            msg = "Coordinate 'right' is less than 'left'"
-            raise ValueError(msg)
-        elif box[3] < box[1]:
-            msg = "Coordinate 'lower' is less than 'upper'"
-            raise ValueError(msg)
-
-        self.load()
-        return self._new(self._crop(self.im, box))
-
-    def _crop(
-        self, im: core.ImagingCore, box: tuple[float, float, float, float]
-    ) -> core.ImagingCore:
-        """
-        Returns a rectangular region from the core image object im.
-
-        This is equivalent to calling im.crop((x0, y0, x1, y1)), but
-        includes additional sanity checks.
-
-        :param im: a core image object
-        :param box: The crop rectangle, as a (left, upper, right, lower)-tuple.
-        :returns: A core image object.
-        """
-
-        x0, y0, x1, y1 = map(int, map(round, box))
-
-        absolute_values = (abs(x1 - x0), abs(y1 - y0))
-
-        _decompression_bomb_check(absolute_values)
-
-        return im.crop((x0, y0, x1, y1))
-
-    def draft(
-        self, mode: str | None, size: tuple[int, int] | None
-    ) -> tuple[str, tuple[int, int, float, float]] | None:
-        """
-        Configures the image file loader so it returns a version of the
-        image that as closely as possible matches the given mode and
-        size. For example, you can use this method to convert a color
-        JPEG to grayscale while loading it.
-
-        If any changes are made, returns a tuple with the chosen ``mode`` and
-        ``box`` with coordinates of the original image within the altered one.
-
-        Note that this method modifies the :py:class:`~PIL.Image.Image` object
-        in place. If the image has already been loaded, this method has no
-        effect.
-
-        Note: This method is not implemented for most images. It is
-        currently implemented only for JPEG and MPO images.
-
-        :param mode: The requested mode.
-        :param size: The requested size in pixels, as a 2-tuple:
-           (width, height).
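Since `draft` is only implemented for JPEG and MPO loaders, it is easiest to demonstrate against an in-memory JPEG. A minimal sketch, assuming Pillow is installed; the 128x128 size and the 1/4 scale it implies are illustrative:

```python
from io import BytesIO
from PIL import Image

# Round-trip a synthetic RGB image through an in-memory JPEG
buf = BytesIO()
Image.new("RGB", (128, 128)).save(buf, format="JPEG")
buf.seek(0)

im = Image.open(buf)
# Ask the JPEG loader for a grayscale version no smaller than 32x32;
# the loader picks a power-of-two DCT scale (here 1/4)
result = im.draft("L", (32, 32))
assert im.mode == "L"
assert im.size == (32, 32)
if result is not None:
    chosen_mode, box = result  # box locates the original image in the draft
```

Calling `draft` on an already-loaded image (or a format without draft support) returns `None` and leaves the image unchanged.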
-        """
-        pass
-
-    def filter(self, filter: ImageFilter.Filter | type[ImageFilter.Filter]) -> Image:
-        """
-        Filters this image using the given filter. For a list of
-        available filters, see the :py:mod:`~PIL.ImageFilter` module.
-
-        :param filter: Filter kernel.
-        :returns: An :py:class:`~PIL.Image.Image` object."""
-
-        from . import ImageFilter
-
-        self.load()
-
-        if callable(filter):
-            filter = filter()
-        if not hasattr(filter, "filter"):
-            msg = "filter argument should be ImageFilter.Filter instance or class"
-            raise TypeError(msg)
-
-        multiband = isinstance(filter, ImageFilter.MultibandFilter)
-        if self.im.bands == 1 or multiband:
-            return self._new(filter.filter(self.im))
-
-        ims = [
-            self._new(filter.filter(self.im.getband(c))) for c in range(self.im.bands)
-        ]
-        return merge(self.mode, ims)
-
-    def getbands(self) -> tuple[str, ...]:
-        """
-        Returns a tuple containing the name of each band in this image.
-        For example, ``getbands`` on an RGB image returns ("R", "G", "B").
-
-        :returns: A tuple containing band names.
-        :rtype: tuple
-        """
-        return ImageMode.getmode(self.mode).bands
-
-    def getbbox(self, *, alpha_only: bool = True) -> tuple[int, int, int, int] | None:
-        """
-        Calculates the bounding box of the non-zero regions in the
-        image.
-
-        :param alpha_only: Optional flag, defaulting to ``True``.
-           If ``True`` and the image has an alpha channel, trim transparent pixels.
-           Otherwise, trim pixels when all channels are zero.
-           Keyword-only argument.
-        :returns: The bounding box is returned as a 4-tuple defining the
-           left, upper, right, and lower pixel coordinate. See
-           :ref:`coordinate-system`. If the image is completely empty, this
-           method returns None.
-
-        """
-
-        self.load()
-        return self.im.getbbox(alpha_only)
-
-    def getcolors(
-        self, maxcolors: int = 256
-    ) -> list[tuple[int, tuple[int, ...]]] | list[tuple[int, float]] | None:
-        """
-        Returns a list of colors used in this image.
-
-        The colors will be in the image's mode.
-        For example, an RGB image will
-        return a tuple of (red, green, blue) color values, and a P image will
-        return the index of the color in the palette.
-
-        :param maxcolors: Maximum number of colors. If this number is
-           exceeded, this method returns None. The default limit is
-           256 colors.
-        :returns: An unsorted list of (count, pixel) values.
-        """
-
-        self.load()
-        if self.mode in ("1", "L", "P"):
-            h = self.im.histogram()
-            out: list[tuple[int, float]] = [(h[i], i) for i in range(256) if h[i]]
-            if len(out) > maxcolors:
-                return None
-            return out
-        return self.im.getcolors(maxcolors)
-
-    def getdata(self, band: int | None = None) -> core.ImagingCore:
-        """
-        Returns the contents of this image as a sequence object
-        containing pixel values. The sequence object is flattened, so
-        that values for line one follow directly after the values of
-        line zero, and so on.
-
-        Note that the sequence object returned by this method is an
-        internal PIL data type, which only supports certain sequence
-        operations. To convert it to an ordinary sequence (e.g. for
-        printing), use ``list(im.getdata())``.
-
-        :param band: What band to return. The default is to return
-           all bands. To return a single band, pass in the index
-           value (e.g. 0 to get the "R" band from an "RGB" image).
-        :returns: A sequence-like object.
-        """
-
-        self.load()
-        if band is not None:
-            return self.im.getband(band)
-        return self.im  # could be abused
-
-    def getextrema(self) -> tuple[float, float] | tuple[tuple[int, int], ...]:
-        """
-        Gets the minimum and maximum pixel values for each band in
-        the image.
-
-        :returns: For a single-band image, a 2-tuple containing the
-           minimum and maximum pixel value. For a multi-band image,
-           a tuple containing one 2-tuple for each band.
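The flattened row-major ordering of `getdata` and the per-band shape of `getextrema` can be checked on a 2x2 image. A minimal sketch, assuming Pillow is installed; the pixel values are illustrative:

```python
from PIL import Image

im = Image.new("RGB", (2, 2), (10, 20, 30))
im.putpixel((1, 1), (200, 100, 50))

# One (min, max) pair per band for a multi-band image
extrema = im.getextrema()
assert extrema == ((10, 200), (20, 100), (30, 50))

# getdata flattens row by row; band=0 selects the red band only
reds = list(im.getdata(band=0))
assert reds == [10, 10, 10, 200]
```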
-        """
-
-        self.load()
-        if self.im.bands > 1:
-            return tuple(self.im.getband(i).getextrema() for i in range(self.im.bands))
-        return self.im.getextrema()
-
-    def getxmp(self) -> dict[str, Any]:
-        """
-        Returns a dictionary containing the XMP tags.
-        Requires defusedxml to be installed.
-
-        :returns: XMP tags in a dictionary.
-        """
-
-        def get_name(tag: str) -> str:
-            return re.sub("^{[^}]+}", "", tag)
-
-        def get_value(element: Element) -> str | dict[str, Any] | None:
-            value: dict[str, Any] = {get_name(k): v for k, v in element.attrib.items()}
-            children = list(element)
-            if children:
-                for child in children:
-                    name = get_name(child.tag)
-                    child_value = get_value(child)
-                    if name in value:
-                        if not isinstance(value[name], list):
-                            value[name] = [value[name]]
-                        value[name].append(child_value)
-                    else:
-                        value[name] = child_value
-            elif value:
-                if element.text:
-                    value["text"] = element.text
-            else:
-                return element.text
-            return value
-
-        if ElementTree is None:
-            warnings.warn("XMP data cannot be read without defusedxml dependency")
-            return {}
-        if "xmp" not in self.info:
-            return {}
-        root = ElementTree.fromstring(self.info["xmp"].rstrip(b"\x00 "))
-        return {get_name(root.tag): get_value(root)}
-
-    def getexif(self) -> Exif:
-        """
-        Gets EXIF data from the image.
-
-        :returns: an :py:class:`~PIL.Image.Exif` object.
-        """
-        if self._exif is None:
-            self._exif = Exif()
-        elif self._exif._loaded:
-            return self._exif
-        self._exif._loaded = True
-
-        exif_info = self.info.get("exif")
-        if exif_info is None:
-            if "Raw profile type exif" in self.info:
-                exif_info = bytes.fromhex(
-                    "".join(self.info["Raw profile type exif"].split("\n")[3:])
-                )
-            elif hasattr(self, "tag_v2"):
-                self._exif.bigtiff = self.tag_v2._bigtiff
-                self._exif.endian = self.tag_v2._endian
-                self._exif.load_from_fp(self.fp, self.tag_v2._offset)
-        if exif_info is not None:
-            self._exif.load(exif_info)
-
-        # XMP tags
-        if ExifTags.Base.Orientation not in self._exif:
-            xmp_tags = self.info.get("XML:com.adobe.xmp")
-            pattern: str | bytes = r'tiff:Orientation(="|>)([0-9])'
-            if not xmp_tags and (xmp_tags := self.info.get("xmp")):
-                pattern = rb'tiff:Orientation(="|>)([0-9])'
-            if xmp_tags:
-                match = re.search(pattern, xmp_tags)
-                if match:
-                    self._exif[ExifTags.Base.Orientation] = int(match[2])
-
-        return self._exif
-
-    def _reload_exif(self) -> None:
-        if self._exif is None or not self._exif._loaded:
-            return
-        self._exif._loaded = False
-        self.getexif()
-
-    def get_child_images(self) -> list[ImageFile.ImageFile]:
-        from . import ImageFile
-
-        deprecate("Image.Image.get_child_images", 13)
-        return ImageFile.ImageFile.get_child_images(self)  # type: ignore[arg-type]
-
-    def getim(self) -> CapsuleType:
-        """
-        Returns a capsule that points to the internal image memory.
-
-        :returns: A capsule object.
-        """
-
-        self.load()
-        return self.im.ptr
-
-    def getpalette(self, rawmode: str | None = "RGB") -> list[int] | None:
-        """
-        Returns the image palette as a list.
-
-        :param rawmode: The mode in which to return the palette. ``None`` will
-           return the palette in its current mode.
-
-           .. versionadded:: 9.1.0
-
-        :returns: A list of color values [r, g, b, ...], or None if the
-           image has no palette.
-        """
-
-        self.load()
-        try:
-            mode = self.im.getpalettemode()
-        except ValueError:
-            return None  # no palette
-        if rawmode is None:
-            rawmode = mode
-        return list(self.im.getpalette(mode, rawmode))
-
-    @property
-    def has_transparency_data(self) -> bool:
-        """
-        Determine if an image has transparency data, whether in the form of an
-        alpha channel, a palette with an alpha channel, or a "transparency" key
-        in the info dictionary.
-
-        Note the image might still appear solid, if all of the values shown
-        within are opaque.
-
-        :returns: A boolean.
-        """
-        if (
-            self.mode in ("LA", "La", "PA", "RGBA", "RGBa")
-            or "transparency" in self.info
-        ):
-            return True
-        if self.mode == "P":
-            assert self.palette is not None
-            return self.palette.mode.endswith("A")
-        return False
-
-    def apply_transparency(self) -> None:
-        """
-        If a P mode image has a "transparency" key in the info dictionary,
-        remove the key and instead apply the transparency to the palette.
-        Otherwise, the image is unchanged.
-        """
-        if self.mode != "P" or "transparency" not in self.info:
-            return
-
-        from . import ImagePalette
-
-        palette = self.getpalette("RGBA")
-        assert palette is not None
-        transparency = self.info["transparency"]
-        if isinstance(transparency, bytes):
-            for i, alpha in enumerate(transparency):
-                palette[i * 4 + 3] = alpha
-        else:
-            palette[transparency * 4 + 3] = 0
-        self.palette = ImagePalette.ImagePalette("RGBA", bytes(palette))
-        self.palette.dirty = 1
-
-        del self.info["transparency"]
-
-    def getpixel(
-        self, xy: tuple[int, int] | list[int]
-    ) -> float | tuple[int, ...] | None:
-        """
-        Returns the pixel value at a given position.
-
-        :param xy: The coordinate, given as (x, y). See
-           :ref:`coordinate-system`.
-        :returns: The pixel value. If the image is a multi-layer image,
-           this method returns a tuple.
-        """
-
-        self.load()
-        return self.im.getpixel(tuple(xy))
-
-    def getprojection(self) -> tuple[list[int], list[int]]:
-        """
-        Get projection to x and y axes
-
-        :returns: Two sequences, indicating where there are non-zero
-           pixels along the X-axis and the Y-axis, respectively.
-        """
-
-        self.load()
-        x, y = self.im.getprojection()
-        return list(x), list(y)
-
-    def histogram(
-        self, mask: Image | None = None, extrema: tuple[float, float] | None = None
-    ) -> list[int]:
-        """
-        Returns a histogram for the image. The histogram is returned as a
-        list of pixel counts, one for each pixel value in the source
-        image. Counts are grouped into 256 bins for each band, even if
-        the image has more than 8 bits per band. If the image has more
-        than one band, the histograms for all bands are concatenated (for
-        example, the histogram for an "RGB" image contains 768 values).
-
-        A bilevel image (mode "1") is treated as a grayscale ("L") image
-        by this method.
-
-        If a mask is provided, the method returns a histogram for those
-        parts of the image where the mask image is non-zero. The mask
-        image must have the same size as the image, and be either a
-        bi-level image (mode "1") or a grayscale image ("L").
-
-        :param mask: An optional mask.
-        :param extrema: An optional tuple of manually-specified extrema.
-        :returns: A list containing pixel counts.
-        """
-        self.load()
-        if mask:
-            mask.load()
-            return self.im.histogram((0, 0), mask.im)
-        if self.mode in ("I", "F"):
-            return self.im.histogram(
-                extrema if extrema is not None else self.getextrema()
-            )
-        return self.im.histogram()
-
-    def entropy(
-        self, mask: Image | None = None, extrema: tuple[float, float] | None = None
-    ) -> float:
-        """
-        Calculates and returns the entropy for the image.
-
-        A bilevel image (mode "1") is treated as a grayscale ("L")
-        image by this method.
-
-        If a mask is provided, the method employs the histogram for
-        those parts of the image where the mask image is non-zero.
-        The mask image must have the same size as the image, and be
-        either a bi-level image (mode "1") or a grayscale image ("L").
-
-        :param mask: An optional mask.
-        :param extrema: An optional tuple of manually-specified extrema.
-        :returns: A float value representing the image entropy
-        """
-        self.load()
-        if mask:
-            mask.load()
-            return self.im.entropy((0, 0), mask.im)
-        if self.mode in ("I", "F"):
-            return self.im.entropy(
-                extrema if extrema is not None else self.getextrema()
-            )
-        return self.im.entropy()
-
-    def paste(
-        self,
-        im: Image | str | float | tuple[float, ...],
-        box: Image | tuple[int, int, int, int] | tuple[int, int] | None = None,
-        mask: Image | None = None,
-    ) -> None:
-        """
-        Pastes another image into this image. The box argument is either
-        a 2-tuple giving the upper left corner, a 4-tuple defining the
-        left, upper, right, and lower pixel coordinate, or None (same as
-        (0, 0)). See :ref:`coordinate-system`. If a 4-tuple is given, the size
-        of the pasted image must match the size of the region.
-
-        If the modes don't match, the pasted image is converted to the mode of
-        this image (see the :py:meth:`~PIL.Image.Image.convert` method for
-        details).
-
-        Instead of an image, the source can be an integer or tuple
-        containing pixel values. The method then fills the region
-        with the given color. When creating RGB images, you can
-        also use color strings as supported by the ImageColor module. See
-        :ref:`colors` for more information.
-
-        If a mask is given, this method updates only the regions
-        indicated by the mask. You can use either "1", "L", "LA", "RGBA"
-        or "RGBa" images (if present, the alpha band is used as mask).
-        Where the mask is 255, the given image is copied as is. Where
-        the mask is 0, the current value is preserved. Intermediate
-        values will mix the two images together, including their alpha
-        channels if they have them.
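The mask semantics just described (255 copies the source, 0 preserves the destination) can be demonstrated on a two-pixel image. A minimal sketch, assuming Pillow is installed; the colors are illustrative:

```python
from PIL import Image

base = Image.new("RGB", (2, 1), (0, 0, 0))
red = Image.new("RGB", (2, 1), (255, 0, 0))

# Mask: left pixel 0 (keep destination), right pixel 255 (take source)
mask = Image.new("L", (2, 1), 0)
mask.putpixel((1, 0), 255)

base.paste(red, (0, 0), mask)   # paste modifies `base` in place
assert base.getpixel((0, 0)) == (0, 0, 0)
assert base.getpixel((1, 0)) == (255, 0, 0)
```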
-
-        See :py:meth:`~PIL.Image.Image.alpha_composite` if you want to
-        combine images with respect to their alpha channels.
-
-        :param im: Source image or pixel value (integer, float or tuple).
-        :param box: An optional 4-tuple giving the region to paste into.
-           If a 2-tuple is used instead, it's treated as the upper left
-           corner. If omitted or None, the source is pasted into the
-           upper left corner.
-
-           If an image is given as the second argument and there is no
-           third, the box defaults to (0, 0), and the second argument
-           is interpreted as a mask image.
-        :param mask: An optional mask image.
-        """
-
-        if isinstance(box, Image):
-            if mask is not None:
-                msg = "If using second argument as mask, third argument must be None"
-                raise ValueError(msg)
-            # abbreviated paste(im, mask) syntax
-            mask = box
-            box = None
-
-        if box is None:
-            box = (0, 0)
-
-        if len(box) == 2:
-            # upper left corner given; get size from image or mask
-            if isinstance(im, Image):
-                size = im.size
-            elif isinstance(mask, Image):
-                size = mask.size
-            else:
-                # FIXME: use self.size here?
-                msg = "cannot determine region size; use 4-item box"
-                raise ValueError(msg)
-            box += (box[0] + size[0], box[1] + size[1])
-
-        source: core.ImagingCore | str | float | tuple[float, ...]
-        if isinstance(im, str):
-            from . import ImageColor
-
-            source = ImageColor.getcolor(im, self.mode)
-        elif isinstance(im, Image):
-            im.load()
-            if self.mode != im.mode:
-                if self.mode != "RGB" or im.mode not in ("LA", "RGBA", "RGBa"):
-                    # should use an adapter for this!
-                    im = im.convert(self.mode)
-            source = im.im
-        else:
-            source = im
-
-        self._ensure_mutable()
-
-        if mask:
-            mask.load()
-            self.im.paste(source, box, mask.im)
-        else:
-            self.im.paste(source, box)
-
-    def alpha_composite(
-        self, im: Image, dest: Sequence[int] = (0, 0), source: Sequence[int] = (0, 0)
-    ) -> None:
-        """'In-place' analog of Image.alpha_composite. Composites an image
-        onto this image.
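The in-place `alpha_composite` method takes a destination offset rather than a box, which makes placing a small overlay straightforward. A minimal sketch, assuming Pillow is installed; the colors are illustrative:

```python
from PIL import Image

dst = Image.new("RGBA", (2, 2), (0, 255, 0, 255))
src = Image.new("RGBA", (1, 1), (255, 0, 0, 255))

# Composite the 1x1 overlay onto the lower-right corner of dst, in place
dst.alpha_composite(src, dest=(1, 1))
assert dst.getpixel((1, 1)) == (255, 0, 0, 255)
assert dst.getpixel((0, 0)) == (0, 255, 0, 255)  # rest untouched
```

Unlike `paste`, this respects the source's alpha channel rather than using it only as a mask.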
-
-        :param im: image to composite over this one
-        :param dest: Optional 2 tuple (left, top) specifying the upper
-           left corner in this (destination) image.
-        :param source: Optional 2 (left, top) tuple for the upper left
-           corner in the overlay source image, or 4 tuple (left, top, right,
-           bottom) for the bounds of the source rectangle
-
-        Performance Note: Not currently implemented in-place in the core layer.
-        """
-
-        if not isinstance(source, (list, tuple)):
-            msg = "Source must be a list or tuple"
-            raise ValueError(msg)
-        if not isinstance(dest, (list, tuple)):
-            msg = "Destination must be a list or tuple"
-            raise ValueError(msg)
-
-        if len(source) == 4:
-            overlay_crop_box = tuple(source)
-        elif len(source) == 2:
-            overlay_crop_box = tuple(source) + im.size
-        else:
-            msg = "Source must be a sequence of length 2 or 4"
-            raise ValueError(msg)
-
-        if not len(dest) == 2:
-            msg = "Destination must be a sequence of length 2"
-            raise ValueError(msg)
-        if min(source) < 0:
-            msg = "Source must be non-negative"
-            raise ValueError(msg)
-
-        # over image, crop if it's not the whole image.
-        if overlay_crop_box == (0, 0) + im.size:
-            overlay = im
-        else:
-            overlay = im.crop(overlay_crop_box)
-
-        # target for the paste
-        box = tuple(dest) + (dest[0] + overlay.width, dest[1] + overlay.height)
-
-        # destination image. don't copy if we're using the whole image.
-        if box == (0, 0) + self.size:
-            background = self
-        else:
-            background = self.crop(box)
-
-        result = alpha_composite(background, overlay)
-        self.paste(result, box)
-
-    def point(
-        self,
-        lut: (
-            Sequence[float]
-            | NumpyArray
-            | Callable[[int], float]
-            | Callable[[ImagePointTransform], ImagePointTransform | float]
-            | ImagePointHandler
-        ),
-        mode: str | None = None,
-    ) -> Image:
-        """
-        Maps this image through a lookup table or function.
-
-        :param lut: A lookup table, containing 256 (or 65536 if
-           self.mode=="I" and mode == "L") values per band in the
-           image.
-           A function can be used instead, it should take a
-           single argument. The function is called once for each
-           possible pixel value, and the resulting table is applied to
-           all bands of the image.
-
-           It may also be an :py:class:`~PIL.Image.ImagePointHandler`
-           object::
-
-               class Example(Image.ImagePointHandler):
-                   def point(self, im: Image) -> Image:
-                       # Return result
-        :param mode: Output mode (default is same as input). This can only be used if
-           the source image has mode "L" or "P", and the output has mode "1" or the
-           source image mode is "I" and the output mode is "L".
-        :returns: An :py:class:`~PIL.Image.Image` object.
-        """
-
-        self.load()
-
-        if isinstance(lut, ImagePointHandler):
-            return lut.point(self)
-
-        if callable(lut):
-            # if it isn't a list, it should be a function
-            if self.mode in ("I", "I;16", "F"):
-                # check if the function can be used with point_transform
-                # UNDONE wiredfool -- I think this prevents us from ever doing
-                # a gamma function point transform on > 8bit images.
-                scale, offset = _getscaleoffset(lut)  # type: ignore[arg-type]
-                return self._new(self.im.point_transform(scale, offset))
-            # for other modes, convert the function to a table
-            flatLut = [lut(i) for i in range(256)] * self.im.bands  # type: ignore[arg-type]
-        else:
-            flatLut = lut
-
-        if self.mode == "F":
-            # FIXME: _imaging returns a confusing error message for this case
-            msg = "point operation not supported for this mode"
-            raise ValueError(msg)
-
-        if mode != "F":
-            flatLut = [round(i) for i in flatLut]
-        return self._new(self.im.point(flatLut, mode))
-
-    def putalpha(self, alpha: Image | int) -> None:
-        """
-        Adds or replaces the alpha layer in this image. If the image
-        does not have an alpha layer, it's converted to "LA" or "RGBA".
-        The new layer must be either "L" or "1".
-
-        :param alpha: The new alpha layer. This can either be an "L" or "1"
-           image having the same size as this image, or an integer.
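Both forms of `putalpha` described above (a constant integer, or an "L"/"1" image used as the alpha band) promote the image mode in place. A minimal sketch, assuming Pillow is installed; the values are illustrative:

```python
from PIL import Image

# Constant alpha promotes RGB to RGBA
im = Image.new("RGB", (1, 1), (10, 20, 30))
im.putalpha(128)
assert im.mode == "RGBA"
assert im.getpixel((0, 0)) == (10, 20, 30, 128)

# An "L" image of the same size can serve as the alpha band; L becomes LA
gray = Image.new("L", (1, 1), 5)
gray.putalpha(Image.new("L", (1, 1), 200))
assert gray.mode == "LA"
assert gray.getpixel((0, 0)) == (5, 200)
```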
-        """
-
-        self._ensure_mutable()
-
-        if self.mode not in ("LA", "PA", "RGBA"):
-            # attempt to promote self to a matching alpha mode
-            try:
-                mode = getmodebase(self.mode) + "A"
-                try:
-                    self.im.setmode(mode)
-                except (AttributeError, ValueError) as e:
-                    # do things the hard way
-                    im = self.im.convert(mode)
-                    if im.mode not in ("LA", "PA", "RGBA"):
-                        msg = "alpha channel could not be added"
-                        raise ValueError(msg) from e  # sanity check
-                    self.im = im
-                self._mode = self.im.mode
-            except KeyError as e:
-                msg = "illegal image mode"
-                raise ValueError(msg) from e
-
-        if self.mode in ("LA", "PA"):
-            band = 1
-        else:
-            band = 3
-
-        if isinstance(alpha, Image):
-            # alpha layer
-            if alpha.mode not in ("1", "L"):
-                msg = "illegal image mode"
-                raise ValueError(msg)
-            alpha.load()
-            if alpha.mode == "1":
-                alpha = alpha.convert("L")
-        else:
-            # constant alpha
-            try:
-                self.im.fillband(band, alpha)
-            except (AttributeError, ValueError):
-                # do things the hard way
-                alpha = new("L", self.size, alpha)
-            else:
-                return
-
-        self.im.putband(alpha.im, band)
-
-    def putdata(
-        self,
-        data: Sequence[float] | Sequence[Sequence[int]] | core.ImagingCore | NumpyArray,
-        scale: float = 1.0,
-        offset: float = 0.0,
-    ) -> None:
-        """
-        Copies pixel data from a flattened sequence object into the image. The
-        values should start at the upper left corner (0, 0), continue to the
-        end of the line, followed directly by the first value of the second
-        line, and so on. Data will be read until either the image or the
-        sequence ends. The scale and offset values are used to adjust the
-        sequence values: **pixel = value*scale + offset**.
-
-        :param data: A flattened sequence object. See :ref:`colors` for more
-           information about values.
-        :param scale: An optional scale value. The default is 1.0.
-        :param offset: An optional offset value. The default is 0.0.
-        """
-
-        self._ensure_mutable()
-
-        self.im.putdata(data, scale, offset)
-
-    def putpalette(
-        self,
-        data: ImagePalette.ImagePalette | bytes | Sequence[int],
-        rawmode: str = "RGB",
-    ) -> None:
-        """
-        Attaches a palette to this image. The image must be a "P", "PA", "L"
-        or "LA" image.
-
-        The palette sequence must contain at most 256 colors, made up of one
-        integer value for each channel in the raw mode.
-        For example, if the raw mode is "RGB", then it can contain at most 768
-        values, made up of red, green and blue values for the corresponding pixel
-        index in the 256 colors.
-        If the raw mode is "RGBA", then it can contain at most 1024 values,
-        containing red, green, blue and alpha values.
-
-        Alternatively, an 8-bit string may be used instead of an integer sequence.
-
-        :param data: A palette sequence (either a list or a string).
-        :param rawmode: The raw mode of the palette. Either "RGB", "RGBA", or a mode
-           that can be transformed to "RGB" or "RGBA" (e.g. "R", "BGR;15", "RGBA;L").
-        """
-        from . import ImagePalette
-
-        if self.mode not in ("L", "LA", "P", "PA"):
-            msg = "illegal image mode"
-            raise ValueError(msg)
-        if isinstance(data, ImagePalette.ImagePalette):
-            if data.rawmode is not None:
-                palette = ImagePalette.raw(data.rawmode, data.palette)
-            else:
-                palette = ImagePalette.ImagePalette(palette=data.palette)
-                palette.dirty = 1
-        else:
-            if not isinstance(data, bytes):
-                data = bytes(data)
-            palette = ImagePalette.raw(rawmode, data)
-        self._mode = "PA" if "A" in self.mode else "P"
-        self.palette = palette
-        self.palette.mode = "RGBA" if "A" in rawmode else "RGB"
-        self.load()  # install new palette
-
-    def putpixel(
-        self, xy: tuple[int, int], value: float | tuple[int, ...] | list[int]
-    ) -> None:
-        """
-        Modifies the pixel at the given position. The color is given as
-        a single numerical value for single-band images, and a tuple for
-        multi-band images. In addition to this, RGB and RGBA tuples are
-        accepted for P and PA images.
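The interaction between `putpalette` and `putpixel` on a P-mode image is worth seeing once: an RGB tuple passed to `putpixel` is resolved to a palette index. A minimal sketch, assuming Pillow is installed; the two-entry palette is illustrative:

```python
from PIL import Image

im = Image.new("P", (1, 1))
# Two palette entries: index 0 = black, index 1 = red
im.putpalette([0, 0, 0, 255, 0, 0])

# RGB tuples are accepted for P images and mapped to a palette index
im.putpixel((0, 0), (255, 0, 0))
assert im.getpixel((0, 0)) == 1
assert im.convert("RGB").getpixel((0, 0)) == (255, 0, 0)
```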
-        See :ref:`colors` for more information.
-
-        Note that this method is relatively slow. For more extensive changes,
-        use :py:meth:`~PIL.Image.Image.paste` or the :py:mod:`~PIL.ImageDraw`
-        module instead.
-
-        See:
-
-        * :py:meth:`~PIL.Image.Image.paste`
-        * :py:meth:`~PIL.Image.Image.putdata`
-        * :py:mod:`~PIL.ImageDraw`
-
-        :param xy: The pixel coordinate, given as (x, y). See
-           :ref:`coordinate-system`.
-        :param value: The pixel value.
-        """
-
-        self._ensure_mutable()
-
-        if (
-            self.mode in ("P", "PA")
-            and isinstance(value, (list, tuple))
-            and len(value) in [3, 4]
-        ):
-            # RGB or RGBA value for a P or PA image
-            if self.mode == "PA":
-                alpha = value[3] if len(value) == 4 else 255
-                value = value[:3]
-            assert self.palette is not None
-            palette_index = self.palette.getcolor(tuple(value), self)
-            value = (palette_index, alpha) if self.mode == "PA" else palette_index
-        return self.im.putpixel(xy, value)
-
-    def remap_palette(
-        self, dest_map: list[int], source_palette: bytes | bytearray | None = None
-    ) -> Image:
-        """
-        Rewrites the image to reorder the palette.
-
-        :param dest_map: A list of indexes into the original palette.
-           e.g. ``[1,0]`` would swap a two item palette, and ``list(range(256))``
-           is the identity transform.
-        :param source_palette: Bytes or None.
-        :returns: An :py:class:`~PIL.Image.Image` object.
-
-        """
-        from . import ImagePalette
-
-        if self.mode not in ("L", "P"):
-            msg = "illegal image mode"
-            raise ValueError(msg)
-
-        bands = 3
-        palette_mode = "RGB"
-        if source_palette is None:
-            if self.mode == "P":
-                self.load()
-                palette_mode = self.im.getpalettemode()
-                if palette_mode == "RGBA":
-                    bands = 4
-                source_palette = self.im.getpalette(palette_mode, palette_mode)
-            else:  # L-mode
-                source_palette = bytearray(i // 3 for i in range(768))
-        elif len(source_palette) > 768:
-            bands = 4
-            palette_mode = "RGBA"
-
-        palette_bytes = b""
-        new_positions = [0] * 256
-
-        # pick only the used colors from the palette
-        for i, oldPosition in enumerate(dest_map):
-            palette_bytes += source_palette[
-                oldPosition * bands : oldPosition * bands + bands
-            ]
-            new_positions[oldPosition] = i
-
-        # replace the palette color id of all pixel with the new id
-
-        # Palette images are [0..255], mapped through a 1 or 3
-        # byte/color map. We need to remap the whole image
-        # from palette 1 to palette 2. New_positions is
-        # an array of indexes into palette 1. Palette 2 is
-        # palette 1 with any holes removed.
-
-        # We're going to leverage the convert mechanism to use the
-        # C code to remap the image from palette 1 to palette 2,
-        # by forcing the source image into 'L' mode and adding a
-        # mapping 'L' mode palette, then converting back to 'L'
-        # sans palette thus converting the image bytes, then
-        # assigning the optimized RGB palette.
-
-        # perf reference, 9500x4000 gif, w/~135 colors
-        # 14 sec prepatch, 1 sec postpatch with optimization forced.
-
-        mapping_palette = bytearray(new_positions)
-
-        m_im = self.copy()
-        m_im._mode = "P"
-
-        m_im.palette = ImagePalette.ImagePalette(
-            palette_mode, palette=mapping_palette * bands
-        )
-        # possibly set palette dirty, then
-        # m_im.putpalette(mapping_palette, 'L')  # converts to 'P'
-        # or just force it.
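The docstring's own `[1, 0]` swap example can be run end to end: palette entries change position, and every pixel is rewritten so the rendered colors stay the same. A minimal sketch, assuming Pillow is installed; the two-entry palette is illustrative:

```python
from PIL import Image

im = Image.new("P", (1, 1), 1)              # single pixel with palette index 1
im.putpalette([0, 0, 0, 255, 0, 0])         # index 0 = black, index 1 = red

# Swap the two palette entries; pixel indexes are remapped to match
swapped = im.remap_palette([1, 0])
assert swapped.getpixel((0, 0)) == 0        # the pixel now uses index 0...
assert swapped.convert("RGB").getpixel((0, 0)) == (255, 0, 0)  # ...still red
```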
-        # UNDONE -- this is part of the general issue with palettes
-        m_im.im.putpalette(palette_mode, palette_mode + ";L", m_im.palette.tobytes())
-
-        m_im = m_im.convert("L")
-
-        m_im.putpalette(palette_bytes, palette_mode)
-        m_im.palette = ImagePalette.ImagePalette(palette_mode, palette=palette_bytes)
-
-        if "transparency" in self.info:
-            try:
-                m_im.info["transparency"] = dest_map.index(self.info["transparency"])
-            except ValueError:
-                if "transparency" in m_im.info:
-                    del m_im.info["transparency"]
-
-        return m_im
-
-    def _get_safe_box(
-        self,
-        size: tuple[int, int],
-        resample: Resampling,
-        box: tuple[float, float, float, float],
-    ) -> tuple[int, int, int, int]:
-        """Expands the box so it includes adjacent pixels
-        that may be used by resampling with the given resampling filter.
-        """
-        filter_support = _filters_support[resample] - 0.5
-        scale_x = (box[2] - box[0]) / size[0]
-        scale_y = (box[3] - box[1]) / size[1]
-        support_x = filter_support * scale_x
-        support_y = filter_support * scale_y
-
-        return (
-            max(0, int(box[0] - support_x)),
-            max(0, int(box[1] - support_y)),
-            min(self.size[0], math.ceil(box[2] + support_x)),
-            min(self.size[1], math.ceil(box[3] + support_y)),
-        )
-
-    def resize(
-        self,
-        size: tuple[int, int] | list[int] | NumpyArray,
-        resample: int | None = None,
-        box: tuple[float, float, float, float] | None = None,
-        reducing_gap: float | None = None,
-    ) -> Image:
-        """
-        Returns a resized copy of this image.
-
-        :param size: The requested size in pixels, as a tuple or array:
-           (width, height).
-        :param resample: An optional resampling filter. This can be
-           one of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`,
-           :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`,
-           :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`.
-           If the image has mode "1" or "P", it is always set to
-           :py:data:`Resampling.NEAREST`. Otherwise, the default filter is
-           :py:data:`Resampling.BICUBIC`. See: :ref:`concept-filters`.
-        :param box: An optional 4-tuple of floats providing
-           the source image region to be scaled.
-           The values must be within (0, 0, width, height) rectangle.
-           If omitted or None, the entire source is used.
-        :param reducing_gap: Apply optimization by resizing the image
-           in two steps. First, reducing the image by integer times
-           using :py:meth:`~PIL.Image.Image.reduce`.
-           Second, resizing using regular resampling. The last step
-           changes size no less than by ``reducing_gap`` times.
-           ``reducing_gap`` may be None (no first step is performed)
-           or should be greater than 1.0. The bigger ``reducing_gap``,
-           the closer the result to the fair resampling.
-           The smaller ``reducing_gap``, the faster resizing.
-           With ``reducing_gap`` greater or equal to 3.0, the result is
-           indistinguishable from fair resampling in most cases.
-           The default value is None (no optimization).
-        :returns: An :py:class:`~PIL.Image.Image` object.
-        """
-
-        if resample is None:
-            resample = Resampling.BICUBIC
-        elif resample not in (
-            Resampling.NEAREST,
-            Resampling.BILINEAR,
-            Resampling.BICUBIC,
-            Resampling.LANCZOS,
-            Resampling.BOX,
-            Resampling.HAMMING,
-        ):
-            msg = f"Unknown resampling filter ({resample})."
-
-            filters = [
-                f"{filter[1]} ({filter[0]})"
-                for filter in (
-                    (Resampling.NEAREST, "Image.Resampling.NEAREST"),
-                    (Resampling.LANCZOS, "Image.Resampling.LANCZOS"),
-                    (Resampling.BILINEAR, "Image.Resampling.BILINEAR"),
-                    (Resampling.BICUBIC, "Image.Resampling.BICUBIC"),
-                    (Resampling.BOX, "Image.Resampling.BOX"),
-                    (Resampling.HAMMING, "Image.Resampling.HAMMING"),
-                )
-            ]
-            msg += f" Use {', '.join(filters[:-1])} or {filters[-1]}"
-            raise ValueError(msg)
-
-        if reducing_gap is not None and reducing_gap < 1.0:
-            msg = "reducing_gap must be 1.0 or greater"
-            raise ValueError(msg)
-
-        if box is None:
-            box = (0, 0) + self.size
-
-        size = tuple(size)
-        if self.size == size and box == (0, 0) + self.size:
-            return self.copy()
-
-        if self.mode in ("1", "P"):
-            resample = Resampling.NEAREST
-
-        if self.mode in ["LA", "RGBA"] and resample != Resampling.NEAREST:
-            im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode])
-            im = im.resize(size, resample, box)
-            return im.convert(self.mode)
-
-        self.load()
-
-        if reducing_gap is not None and resample != Resampling.NEAREST:
-            factor_x = int((box[2] - box[0]) / size[0] / reducing_gap) or 1
-            factor_y = int((box[3] - box[1]) / size[1] / reducing_gap) or 1
-            if factor_x > 1 or factor_y > 1:
-                reduce_box = self._get_safe_box(size, cast(Resampling, resample), box)
-                factor = (factor_x, factor_y)
-                self = (
-                    self.reduce(factor, box=reduce_box)
-                    if callable(self.reduce)
-                    else Image.reduce(self, factor, box=reduce_box)
-                )
-                box = (
-                    (box[0] - reduce_box[0]) / factor_x,
-                    (box[1] - reduce_box[1]) / factor_y,
-                    (box[2] - reduce_box[0]) / factor_x,
-                    (box[3] - reduce_box[1]) / factor_y,
-                )
-
-        return self._new(self.im.resize(size, resample, box))
-
-    def reduce(
-        self,
-        factor: int | tuple[int, int],
-        box: tuple[int, int, int, int] | None = None,
-    ) -> Image:
-        """
-        Returns a copy of the image reduced ``factor`` times.
-        If the size of the image is not dividable by ``factor``,
-        the resulting size will be rounded up.
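The round-up rule for sizes not divisible by the factor is easy to verify directly. A minimal sketch, assuming Pillow is installed; the 5x5 size is illustrative:

```python
from PIL import Image

im = Image.new("L", (5, 5), 100)

# 5 is not divisible by 2, so the reduced size rounds up to 3x3
out = im.reduce(2)
assert out.size == (3, 3)

# A (width, height) tuple reduces each axis independently
out2 = im.reduce((1, 5))
assert out2.size == (5, 1)
```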
- - :param factor: A greater than 0 integer or tuple of two integers - for width and height separately. - :param box: An optional 4-tuple of ints providing - the source image region to be reduced. - The values must be within ``(0, 0, width, height)`` rectangle. - If omitted or ``None``, the entire source is used. - """ - if not isinstance(factor, (list, tuple)): - factor = (factor, factor) - - if box is None: - box = (0, 0) + self.size - - if factor == (1, 1) and box == (0, 0) + self.size: - return self.copy() - - if self.mode in ["LA", "RGBA"]: - im = self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - im = im.reduce(factor, box) - return im.convert(self.mode) - - self.load() - - return self._new(self.im.reduce(factor, box)) - - def rotate( - self, - angle: float, - resample: Resampling = Resampling.NEAREST, - expand: int | bool = False, - center: tuple[float, float] | None = None, - translate: tuple[int, int] | None = None, - fillcolor: float | tuple[float, ...] | str | None = None, - ) -> Image: - """ - Returns a rotated copy of this image. This method returns a - copy of this image, rotated the given number of degrees counter - clockwise around its centre. - - :param angle: In degrees counter clockwise. - :param resample: An optional resampling filter. This can be - one of :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image has - mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. - See :ref:`concept-filters`. - :param expand: Optional expansion flag. If true, expands the output - image to make it large enough to hold the entire rotated image. - If false or omitted, make the output image the same size as the - input image. Note that the expand flag assumes rotation around - the center and no translation. 
- :param center: Optional center of rotation (a 2-tuple). Origin is - the upper left corner. Default is the center of the image. - :param translate: An optional post-rotate translation (a 2-tuple). - :param fillcolor: An optional color for area outside the rotated image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - angle = angle % 360.0 - - # Fast paths regardless of filter, as long as we're not - # translating or changing the center. - if not (center or translate): - if angle == 0: - return self.copy() - if angle == 180: - return self.transpose(Transpose.ROTATE_180) - if angle in (90, 270) and (expand or self.width == self.height): - return self.transpose( - Transpose.ROTATE_90 if angle == 90 else Transpose.ROTATE_270 - ) - - # Calculate the affine matrix. Note that this is the reverse - # transformation (from destination image to source) because we - # want to interpolate the (discrete) destination pixel from - # the local area around the (floating) source pixel. - - # The matrix we actually want (note that it operates from the right): - # (1, 0, tx) (1, 0, cx) ( cos a, sin a, 0) (1, 0, -cx) - # (0, 1, ty) * (0, 1, cy) * (-sin a, cos a, 0) * (0, 1, -cy) - # (0, 0, 1) (0, 0, 1) ( 0, 0, 1) (0, 0, 1) - - # The reverse matrix is thus: - # (1, 0, cx) ( cos -a, sin -a, 0) (1, 0, -cx) (1, 0, -tx) - # (0, 1, cy) * (-sin -a, cos -a, 0) * (0, 1, -cy) * (0, 1, -ty) - # (0, 0, 1) ( 0, 0, 1) (0, 0, 1) (0, 0, 1) - - # In any case, the final translation may be updated at the end to - # compensate for the expand flag. 
- - w, h = self.size - - if translate is None: - post_trans = (0, 0) - else: - post_trans = translate - if center is None: - center = (w / 2, h / 2) - - angle = -math.radians(angle) - matrix = [ - round(math.cos(angle), 15), - round(math.sin(angle), 15), - 0.0, - round(-math.sin(angle), 15), - round(math.cos(angle), 15), - 0.0, - ] - - def transform(x: float, y: float, matrix: list[float]) -> tuple[float, float]: - (a, b, c, d, e, f) = matrix - return a * x + b * y + c, d * x + e * y + f - - matrix[2], matrix[5] = transform( - -center[0] - post_trans[0], -center[1] - post_trans[1], matrix - ) - matrix[2] += center[0] - matrix[5] += center[1] - - if expand: - # calculate output size - xx = [] - yy = [] - for x, y in ((0, 0), (w, 0), (w, h), (0, h)): - transformed_x, transformed_y = transform(x, y, matrix) - xx.append(transformed_x) - yy.append(transformed_y) - nw = math.ceil(max(xx)) - math.floor(min(xx)) - nh = math.ceil(max(yy)) - math.floor(min(yy)) - - # We multiply a translation matrix from the right. Because of its - # special form, this is the same as taking the image of the - # translation vector as new translation vector. - matrix[2], matrix[5] = transform(-(nw - w) / 2.0, -(nh - h) / 2.0, matrix) - w, h = nw, nh - - return self.transform( - (w, h), Transform.AFFINE, matrix, resample, fillcolor=fillcolor - ) - - def save( - self, fp: StrOrBytesPath | IO[bytes], format: str | None = None, **params: Any - ) -> None: - """ - Saves this image under the given filename. If no format is - specified, the format to use is determined from the filename - extension, if possible. - - Keyword options can be used to provide additional instructions - to the writer. If a writer doesn't recognise an option, it is - silently ignored. The available options are described in the - :doc:`image format documentation - <../handbook/image-file-formats>` for each writer. - - You can use a file object instead of a filename. In this case, - you must always specify the format. 
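The removed `rotate` code above has two paths: exact 90-degree-step rotations go through `transpose`, everything else through the affine transform. A small sketch of both, including the `expand` and `fillcolor` parameters:

```python
from PIL import Image

im = Image.new("RGB", (200, 100), "white")

# 90-degree rotation takes the fast transpose path; with expand=True the
# output canvas swaps width and height.
quarter = im.rotate(90, expand=True)

# Arbitrary angles fall through to the affine transform; fillcolor paints
# the corner areas that lie outside the rotated source.
tilted = im.rotate(30, expand=True, fillcolor="red")

print(quarter.size, tilted.size)
```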
The file object must - implement the ``seek``, ``tell``, and ``write`` - methods, and be opened in binary mode. - - :param fp: A filename (string), os.PathLike object or file object. - :param format: Optional format override. If omitted, the - format to use is determined from the filename extension. - If a file object was used instead of a filename, this - parameter should always be used. - :param params: Extra parameters to the image writer. These can also be - set on the image itself through ``encoderinfo``. This is useful when - saving multiple images:: - - # Saving XMP data to a single image - from PIL import Image - red = Image.new("RGB", (1, 1), "#f00") - red.save("out.mpo", xmp=b"test") - - # Saving XMP data to the second frame of an image - from PIL import Image - black = Image.new("RGB", (1, 1)) - red = Image.new("RGB", (1, 1), "#f00") - red.encoderinfo = {"xmp": b"test"} - black.save("out.mpo", save_all=True, append_images=[red]) - :returns: None - :exception ValueError: If the output format could not be determined - from the file name. Use the format option to solve this. - :exception OSError: If the file could not be written. The file - may have been created, and may contain partial data. - """ - - filename: str | bytes = "" - open_fp = False - if is_path(fp): - filename = os.fspath(fp) - open_fp = True - elif fp == sys.stdout: - try: - fp = sys.stdout.buffer - except AttributeError: - pass - if not filename and hasattr(fp, "name") and is_path(fp.name): - # only set the name for metadata purposes - filename = os.fspath(fp.name) - - preinit() - - filename_ext = os.path.splitext(filename)[1].lower() - ext = filename_ext.decode() if isinstance(filename_ext, bytes) else filename_ext - - if not format: - if ext not in EXTENSION: - init() - try: - format = EXTENSION[ext] - except KeyError as e: - msg = f"unknown file extension: {ext}" - raise ValueError(msg) from e - - from . import ImageFile - - # may mutate self! 
- if isinstance(self, ImageFile.ImageFile) and os.path.abspath( - filename - ) == os.path.abspath(self.filename): - self._ensure_mutable() - else: - self.load() - - save_all = params.pop("save_all", None) - self._default_encoderinfo = params - encoderinfo = getattr(self, "encoderinfo", {}) - self._attach_default_encoderinfo(self) - self.encoderconfig: tuple[Any, ...] = () - - if format.upper() not in SAVE: - init() - if save_all or ( - save_all is None - and params.get("append_images") - and format.upper() in SAVE_ALL - ): - save_handler = SAVE_ALL[format.upper()] - else: - save_handler = SAVE[format.upper()] - - created = False - if open_fp: - created = not os.path.exists(filename) - if params.get("append", False): - # Open also for reading ("+"), because TIFF save_all - # writer needs to go back and edit the written data. - fp = builtins.open(filename, "r+b") - else: - fp = builtins.open(filename, "w+b") - else: - fp = cast(IO[bytes], fp) - - try: - save_handler(self, fp, filename) - except Exception: - if open_fp: - fp.close() - if created: - try: - os.remove(filename) - except PermissionError: - pass - raise - finally: - self.encoderinfo = encoderinfo - if open_fp: - fp.close() - - def _attach_default_encoderinfo(self, im: Image) -> dict[str, Any]: - encoderinfo = getattr(self, "encoderinfo", {}) - self.encoderinfo = {**im._default_encoderinfo, **encoderinfo} - return encoderinfo - - def seek(self, frame: int) -> None: - """ - Seeks to the given frame in this sequence file. If you seek - beyond the end of the sequence, the method raises an - ``EOFError`` exception. When a sequence file is opened, the - library automatically seeks to frame 0. - - See :py:meth:`~PIL.Image.Image.tell`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :param frame: Frame number, starting at 0. - :exception EOFError: If the call attempts to seek beyond the end - of the sequence. 
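The `save` docstring above notes that when writing to a file object rather than a filename, the format must always be given explicitly, since there is no extension to infer it from. A minimal sketch:

```python
import io
from PIL import Image

im = Image.new("RGB", (8, 8), "#f00")

# Saving to a file object: the format cannot be inferred from a filename,
# so pass it explicitly.
buf = io.BytesIO()
im.save(buf, format="PNG")

data = buf.getvalue()
print(data[:8])  # PNG file signature
```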
- """ - - # overridden by file handlers - if frame != 0: - msg = "no more images in file" - raise EOFError(msg) - - def show(self, title: str | None = None) -> None: - """ - Displays this image. This method is mainly intended for debugging purposes. - - This method calls :py:func:`PIL.ImageShow.show` internally. You can use - :py:func:`PIL.ImageShow.register` to override its default behaviour. - - The image is first saved to a temporary file. By default, it will be in - PNG format. - - On Unix, the image is then opened using the **xdg-open**, **display**, - **gm**, **eog** or **xv** utility, depending on which one can be found. - - On macOS, the image is opened with the native Preview application. - - On Windows, the image is opened with the standard PNG display utility. - - :param title: Optional title to use for the image window, where possible. - """ - - from . import ImageShow - - ImageShow.show(self, title) - - def split(self) -> tuple[Image, ...]: - """ - Split this image into individual bands. This method returns a - tuple of individual image bands from an image. For example, - splitting an "RGB" image creates three new images each - containing a copy of one of the original bands (red, green, - blue). - - If you need only one band, :py:meth:`~PIL.Image.Image.getchannel` - method can be more convenient and faster. - - :returns: A tuple containing bands. - """ - - self.load() - if self.im.bands == 1: - return (self.copy(),) - return tuple(map(self._new, self.im.split())) - - def getchannel(self, channel: int | str) -> Image: - """ - Returns an image containing a single channel of the source image. - - :param channel: What channel to return. Could be index - (0 for "R" channel of "RGB") or channel name - ("A" for alpha channel of "RGBA"). - :returns: An image in "L" mode. - - .. 
versionadded:: 4.3.0 - """ - self.load() - - if isinstance(channel, str): - try: - channel = self.getbands().index(channel) - except ValueError as e: - msg = f'The image has no channel "{channel}"' - raise ValueError(msg) from e - - return self._new(self.im.getband(channel)) - - def tell(self) -> int: - """ - Returns the current frame number. See :py:meth:`~PIL.Image.Image.seek`. - - If defined, :attr:`~PIL.Image.Image.n_frames` refers to the - number of available frames. - - :returns: Frame number, starting with 0. - """ - return 0 - - def thumbnail( - self, - size: tuple[float, float], - resample: Resampling = Resampling.BICUBIC, - reducing_gap: float | None = 2.0, - ) -> None: - """ - Make this image into a thumbnail. This method modifies the - image to contain a thumbnail version of itself, no larger than - the given size. This method calculates an appropriate thumbnail - size to preserve the aspect of the image, calls the - :py:meth:`~PIL.Image.Image.draft` method to configure the file reader - (where applicable), and finally resizes the image. - - Note that this function modifies the :py:class:`~PIL.Image.Image` - object in place. If you need to use the full resolution image as well, - apply this method to a :py:meth:`~PIL.Image.Image.copy` of the original - image. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param resample: Optional resampling filter. This can be one - of :py:data:`Resampling.NEAREST`, :py:data:`Resampling.BOX`, - :py:data:`Resampling.BILINEAR`, :py:data:`Resampling.HAMMING`, - :py:data:`Resampling.BICUBIC` or :py:data:`Resampling.LANCZOS`. - If omitted, it defaults to :py:data:`Resampling.BICUBIC`. - (was :py:data:`Resampling.NEAREST` prior to version 2.5.0). - See: :ref:`concept-filters`. - :param reducing_gap: Apply optimization by resizing the image - in two steps. 
First, reducing the image by integer times - using :py:meth:`~PIL.Image.Image.reduce` or - :py:meth:`~PIL.Image.Image.draft` for JPEG images. - Second, resizing using regular resampling. The last step - changes size no less than by ``reducing_gap`` times. - ``reducing_gap`` may be None (no first step is performed) - or should be greater than 1.0. The bigger ``reducing_gap``, - the closer the result to the fair resampling. - The smaller ``reducing_gap``, the faster resizing. - With ``reducing_gap`` greater or equal to 3.0, the result is - indistinguishable from fair resampling in most cases. - The default value is 2.0 (very close to fair resampling - while still being faster in many cases). - :returns: None - """ - - provided_size = tuple(map(math.floor, size)) - - def preserve_aspect_ratio() -> tuple[int, int] | None: - def round_aspect(number: float, key: Callable[[int], float]) -> int: - return max(min(math.floor(number), math.ceil(number), key=key), 1) - - x, y = provided_size - if x >= self.width and y >= self.height: - return None - - aspect = self.width / self.height - if x / y >= aspect: - x = round_aspect(y * aspect, key=lambda n: abs(aspect - n / y)) - else: - y = round_aspect( - x / aspect, key=lambda n: 0 if n == 0 else abs(aspect - x / n) - ) - return x, y - - preserved_size = preserve_aspect_ratio() - if preserved_size is None: - return - final_size = preserved_size - - box = None - if reducing_gap is not None: - res = self.draft( - None, (int(size[0] * reducing_gap), int(size[1] * reducing_gap)) - ) - if res is not None: - box = res[1] - - if self.size != final_size: - im = self.resize(final_size, resample, box=box, reducing_gap=reducing_gap) - - self.im = im.im - self._size = final_size - self._mode = self.im.mode - - self.readonly = 0 - - # FIXME: the different transform methods need further explanation - # instead of bloating the method docs, add a separate chapter. 
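Unlike `resize`, the `thumbnail` method documented above modifies the image in place and preserves aspect ratio, shrinking only as far as needed to fit inside the requested box:

```python
from PIL import Image

im = Image.new("RGB", (640, 480), "navy")

# thumbnail() mutates the image in place; 640x480 constrained to fit
# within 128x128 keeps the 4:3 aspect ratio and becomes 128x96.
im.thumbnail((128, 128))

print(im.size)
```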
- def transform( - self, - size: tuple[int, int], - method: Transform | ImageTransformHandler | SupportsGetData, - data: Sequence[Any] | None = None, - resample: int = Resampling.NEAREST, - fill: int = 1, - fillcolor: float | tuple[float, ...] | str | None = None, - ) -> Image: - """ - Transforms this image. This method creates a new image with the - given size, and the same mode as the original, and copies data - to the new image using the given transform. - - :param size: The output size in pixels, as a 2-tuple: - (width, height). - :param method: The transformation method. This is one of - :py:data:`Transform.EXTENT` (cut out a rectangular subregion), - :py:data:`Transform.AFFINE` (affine transform), - :py:data:`Transform.PERSPECTIVE` (perspective transform), - :py:data:`Transform.QUAD` (map a quadrilateral to a rectangle), or - :py:data:`Transform.MESH` (map a number of source quadrilaterals - in one operation). - - It may also be an :py:class:`~PIL.Image.ImageTransformHandler` - object:: - - class Example(Image.ImageTransformHandler): - def transform(self, size, data, resample, fill=1): - # Return result - - Implementations of :py:class:`~PIL.Image.ImageTransformHandler` - for some of the :py:class:`Transform` methods are provided - in :py:mod:`~PIL.ImageTransform`. - - It may also be an object with a ``method.getdata`` method - that returns a tuple supplying new ``method`` and ``data`` values:: - - class Example: - def getdata(self): - method = Image.Transform.EXTENT - data = (0, 0, 100, 100) - return method, data - :param data: Extra data to the transformation method. - :param resample: Optional resampling filter. It can be one of - :py:data:`Resampling.NEAREST` (use nearest neighbour), - :py:data:`Resampling.BILINEAR` (linear interpolation in a 2x2 - environment), or :py:data:`Resampling.BICUBIC` (cubic spline - interpolation in a 4x4 environment). If omitted, or if the image - has mode "1" or "P", it is set to :py:data:`Resampling.NEAREST`. 
- See: :ref:`concept-filters`. - :param fill: If ``method`` is an - :py:class:`~PIL.Image.ImageTransformHandler` object, this is one of - the arguments passed to it. Otherwise, it is unused. - :param fillcolor: Optional fill color for the area outside the - transform in the output image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if self.mode in ("LA", "RGBA") and resample != Resampling.NEAREST: - return ( - self.convert({"LA": "La", "RGBA": "RGBa"}[self.mode]) - .transform(size, method, data, resample, fill, fillcolor) - .convert(self.mode) - ) - - if isinstance(method, ImageTransformHandler): - return method.transform(size, self, resample=resample, fill=fill) - - if hasattr(method, "getdata"): - # compatibility w. old-style transform objects - method, data = method.getdata() - - if data is None: - msg = "missing method data" - raise ValueError(msg) - - im = new(self.mode, size, fillcolor) - if self.mode == "P" and self.palette: - im.palette = self.palette.copy() - im.info = self.info.copy() - if method == Transform.MESH: - # list of quads - for box, quad in data: - im.__transformer( - box, self, Transform.QUAD, quad, resample, fillcolor is None - ) - else: - im.__transformer( - (0, 0) + size, self, method, data, resample, fillcolor is None - ) - - return im - - def __transformer( - self, - box: tuple[int, int, int, int], - image: Image, - method: Transform, - data: Sequence[float], - resample: int = Resampling.NEAREST, - fill: bool = True, - ) -> None: - w = box[2] - box[0] - h = box[3] - box[1] - - if method == Transform.AFFINE: - data = data[:6] - - elif method == Transform.EXTENT: - # convert extent to an affine transform - x0, y0, x1, y1 = data - xs = (x1 - x0) / w - ys = (y1 - y0) / h - method = Transform.AFFINE - data = (xs, 0, x0, 0, ys, y0) - - elif method == Transform.PERSPECTIVE: - data = data[:8] - - elif method == Transform.QUAD: - # quadrilateral warp. data specifies the four corners - # given as NW, SW, SE, and NE. 
- nw = data[:2] - sw = data[2:4] - se = data[4:6] - ne = data[6:8] - x0, y0 = nw - As = 1.0 / w - At = 1.0 / h - data = ( - x0, - (ne[0] - x0) * As, - (sw[0] - x0) * At, - (se[0] - sw[0] - ne[0] + x0) * As * At, - y0, - (ne[1] - y0) * As, - (sw[1] - y0) * At, - (se[1] - sw[1] - ne[1] + y0) * As * At, - ) - - else: - msg = "unknown transformation method" - raise ValueError(msg) - - if resample not in ( - Resampling.NEAREST, - Resampling.BILINEAR, - Resampling.BICUBIC, - ): - if resample in (Resampling.BOX, Resampling.HAMMING, Resampling.LANCZOS): - unusable: dict[int, str] = { - Resampling.BOX: "Image.Resampling.BOX", - Resampling.HAMMING: "Image.Resampling.HAMMING", - Resampling.LANCZOS: "Image.Resampling.LANCZOS", - } - msg = unusable[resample] + f" ({resample}) cannot be used." - else: - msg = f"Unknown resampling filter ({resample})." - - filters = [ - f"{filter[1]} ({filter[0]})" - for filter in ( - (Resampling.NEAREST, "Image.Resampling.NEAREST"), - (Resampling.BILINEAR, "Image.Resampling.BILINEAR"), - (Resampling.BICUBIC, "Image.Resampling.BICUBIC"), - ) - ] - msg += f" Use {', '.join(filters[:-1])} or {filters[-1]}" - raise ValueError(msg) - - image.load() - - self.load() - - if image.mode in ("1", "P"): - resample = Resampling.NEAREST - - self.im.transform(box, image.im, method, data, resample, fill) - - def transpose(self, method: Transpose) -> Image: - """ - Transpose image (flip or rotate in 90 degree steps) - - :param method: One of :py:data:`Transpose.FLIP_LEFT_RIGHT`, - :py:data:`Transpose.FLIP_TOP_BOTTOM`, :py:data:`Transpose.ROTATE_90`, - :py:data:`Transpose.ROTATE_180`, :py:data:`Transpose.ROTATE_270`, - :py:data:`Transpose.TRANSPOSE` or :py:data:`Transpose.TRANSVERSE`. - :returns: Returns a flipped or rotated copy of this image. - """ - - self.load() - return self._new(self.im.transpose(method)) - - def effect_spread(self, distance: int) -> Image: - """ - Randomly spread pixels in an image. - - :param distance: Distance to spread pixels. 
- """ - self.load() - return self._new(self.im.effect_spread(distance)) - - def toqimage(self) -> ImageQt.ImageQt: - """Returns a QImage copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqimage(self) - - def toqpixmap(self) -> ImageQt.QPixmap: - """Returns a QPixmap copy of this image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.toqpixmap(self) - - -# -------------------------------------------------------------------- -# Abstract handlers. - - -class ImagePointHandler(abc.ABC): - """ - Used as a mixin by point transforms - (for use with :py:meth:`~PIL.Image.Image.point`) - """ - - @abc.abstractmethod - def point(self, im: Image) -> Image: - pass - - -class ImageTransformHandler(abc.ABC): - """ - Used as a mixin by geometry transforms - (for use with :py:meth:`~PIL.Image.Image.transform`) - """ - - @abc.abstractmethod - def transform( - self, - size: tuple[int, int], - image: Image, - **options: Any, - ) -> Image: - pass - - -# -------------------------------------------------------------------- -# Factories - - -def _check_size(size: Any) -> None: - """ - Common check to enforce type and sanity check on size tuples - - :param size: Should be a 2 tuple of (width, height) - :returns: None, or raises a ValueError - """ - - if not isinstance(size, (list, tuple)): - msg = "Size must be a list or tuple" - raise ValueError(msg) - if len(size) != 2: - msg = "Size must be a sequence of length 2" - raise ValueError(msg) - if size[0] < 0 or size[1] < 0: - msg = "Width and height must be >= 0" - raise ValueError(msg) - - -def new( - mode: str, - size: tuple[int, int] | list[int], - color: float | tuple[float, ...] | str | None = 0, -) -> Image: - """ - Creates a new image with the given mode and size. - - :param mode: The mode to use for the new image. 
See: - :ref:`concept-modes`. - :param size: A 2-tuple, containing (width, height) in pixels. - :param color: What color to use for the image. Default is black. If given, - this should be a single integer or floating point value for single-band - modes, and a tuple for multi-band modes (one value per band). When - creating RGB or HSV images, you can also use color strings as supported - by the ImageColor module. See :ref:`colors` for more information. If the - color is None, the image is not initialised. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - if color is None: - # don't initialize - return Image()._new(core.new(mode, size)) - - if isinstance(color, str): - # css3-style specifier - - from . import ImageColor - - color = ImageColor.getcolor(color, mode) - - im = Image() - if ( - mode == "P" - and isinstance(color, (list, tuple)) - and all(isinstance(i, int) for i in color) - ): - color_ints: tuple[int, ...] = cast(tuple[int, ...], tuple(color)) - if len(color_ints) == 3 or len(color_ints) == 4: - # RGB or RGBA value for a P image - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette() - color = im.palette.getcolor(color_ints) - return im._new(core.fill(mode, size, color)) - - -def frombytes( - mode: str, - size: tuple[int, int], - data: bytes | bytearray | SupportsArrayInterface, - decoder_name: str = "raw", - *args: Any, -) -> Image: - """ - Creates a copy of an image memory from pixel data in a buffer. - - In its simplest form, this function takes three arguments - (mode, size, and unpacked pixel data). - - You can also use any pixel decoder supported by PIL. For more - information on available decoders, see the section - :ref:`Writing Your Own File Codec `. - - Note that this function decodes pixel data only, not entire images. - If you have an entire image in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load - it. - - :param mode: The image mode. 
See: :ref:`concept-modes`. - :param size: The image size. - :param data: A byte buffer containing raw data for the given mode. - :param decoder_name: What decoder to use. - :param args: Additional parameters for the given decoder. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - _check_size(size) - - im = new(mode, size) - if im.width != 0 and im.height != 0: - decoder_args: Any = args - if len(decoder_args) == 1 and isinstance(decoder_args[0], tuple): - # may pass tuple instead of argument list - decoder_args = decoder_args[0] - - if decoder_name == "raw" and decoder_args == (): - decoder_args = mode - - im.frombytes(data, decoder_name, decoder_args) - return im - - -def frombuffer( - mode: str, - size: tuple[int, int], - data: bytes | SupportsArrayInterface, - decoder_name: str = "raw", - *args: Any, -) -> Image: - """ - Creates an image memory referencing pixel data in a byte buffer. - - This function is similar to :py:func:`~PIL.Image.frombytes`, but uses data - in the byte buffer, where possible. This means that changes to the - original buffer object are reflected in this image). Not all modes can - share memory; supported modes include "L", "RGBX", "RGBA", and "CMYK". - - Note that this function decodes pixel data only, not entire images. - If you have an entire image file in a string, wrap it in a - :py:class:`~io.BytesIO` object, and use :py:func:`~PIL.Image.open` to load it. - - The default parameters used for the "raw" decoder differs from that used for - :py:func:`~PIL.Image.frombytes`. This is a bug, and will probably be fixed in a - future release. The current release issues a warning if you do this; to disable - the warning, you should provide the full set of parameters. See below for details. - - :param mode: The image mode. See: :ref:`concept-modes`. - :param size: The image size. - :param data: A bytes or other buffer object containing raw - data for the given mode. - :param decoder_name: What decoder to use. 
- :param args: Additional parameters for the given decoder. For the - default encoder ("raw"), it's recommended that you provide the - full set of parameters:: - - frombuffer(mode, size, data, "raw", mode, 0, 1) - - :returns: An :py:class:`~PIL.Image.Image` object. - - .. versionadded:: 1.1.4 - """ - - _check_size(size) - - # may pass tuple instead of argument list - if len(args) == 1 and isinstance(args[0], tuple): - args = args[0] - - if decoder_name == "raw": - if args == (): - args = mode, 0, 1 - if args[0] in _MAPMODES: - im = new(mode, (0, 0)) - im = im._new(core.map_buffer(data, size, decoder_name, 0, args)) - if mode == "P": - from . import ImagePalette - - im.palette = ImagePalette.ImagePalette("RGB", im.im.getpalette("RGB")) - im.readonly = 1 - return im - - return frombytes(mode, size, data, decoder_name, args) - - -class SupportsArrayInterface(Protocol): - """ - An object that has an ``__array_interface__`` dictionary. - """ - - @property - def __array_interface__(self) -> dict[str, Any]: - raise NotImplementedError() - - -class SupportsArrowArrayInterface(Protocol): - """ - An object that has an ``__arrow_c_array__`` method corresponding to the arrow c - data interface. - """ - - def __arrow_c_array__( - self, requested_schema: "PyCapsule" = None # type: ignore[name-defined] # noqa: F821, UP037 - ) -> tuple["PyCapsule", "PyCapsule"]: # type: ignore[name-defined] # noqa: F821, UP037 - raise NotImplementedError() - - -def fromarray(obj: SupportsArrayInterface, mode: str | None = None) -> Image: - """ - Creates an image memory from an object exporting the array interface - (using the buffer protocol):: - - from PIL import Image - import numpy as np - a = np.zeros((5, 5)) - im = Image.fromarray(a) - - If ``obj`` is not contiguous, then the ``tobytes`` method is called - and :py:func:`~PIL.Image.frombuffer` is used. - - In the case of NumPy, be aware that Pillow modes do not always correspond - to NumPy dtypes. 
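The `frombytes`/`frombuffer` distinction documented above — copy versus shared memory — can be sketched as follows. Per the `frombuffer` docstring's warning about differing "raw" defaults, the full decoder arguments are passed explicitly:

```python
from PIL import Image

# Four grayscale pixels, one byte each.
data = b"\x00\x55\xaa\xff"

# frombytes copies the pixel data into a new image...
copied = Image.frombytes("L", (2, 2), data)

# ...while frombuffer, for memory-mappable modes like "L", references the
# buffer directly. Passing the full raw args ("raw", mode, 0, 1) avoids
# the default-parameter warning noted in the docstring.
shared = Image.frombuffer("L", (2, 2), data, "raw", "L", 0, 1)

print(copied.getpixel((1, 1)), shared.getpixel((1, 1)))
```

Images created by `frombuffer` over an immutable buffer are read-only; copy them before editing pixels.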
Pillow modes only offer 1-bit pixels, 8-bit pixels, - 32-bit signed integer pixels, and 32-bit floating point pixels. - - Pillow images can also be converted to arrays:: - - from PIL import Image - import numpy as np - im = Image.open("hopper.jpg") - a = np.asarray(im) - - When converting Pillow images to arrays however, only pixel values are - transferred. This means that P and PA mode images will lose their palette. - - :param obj: Object with array interface - :param mode: Optional mode to use when reading ``obj``. Since pixel values do not - contain information about palettes or color spaces, this can be used to place - grayscale L mode data within a P mode image, or read RGB data as YCbCr for - example. - - See: :ref:`concept-modes` for general information about modes. - :returns: An image object. - - .. versionadded:: 1.1.6 - """ - arr = obj.__array_interface__ - shape = arr["shape"] - ndim = len(shape) - strides = arr.get("strides", None) - try: - typekey = (1, 1) + shape[2:], arr["typestr"] - except KeyError as e: - if mode is not None: - typekey = None - color_modes: list[str] = [] - else: - msg = "Cannot handle this data type" - raise TypeError(msg) from e - if typekey is not None: - try: - typemode, rawmode, color_modes = _fromarray_typemap[typekey] - except KeyError as e: - typekey_shape, typestr = typekey - msg = f"Cannot handle this data type: {typekey_shape}, {typestr}" - raise TypeError(msg) from e - if mode is not None: - if mode != typemode and mode not in color_modes: - deprecate("'mode' parameter for changing data types", 13) - rawmode = mode - else: - mode = typemode - if mode in ["1", "L", "I", "P", "F"]: - ndmax = 2 - elif mode == "RGB": - ndmax = 3 - else: - ndmax = 4 - if ndim > ndmax: - msg = f"Too many dimensions: {ndim} > {ndmax}." 
- raise ValueError(msg) - - size = 1 if ndim == 1 else shape[1], shape[0] - if strides is not None: - if hasattr(obj, "tobytes"): - obj = obj.tobytes() - elif hasattr(obj, "tostring"): - obj = obj.tostring() - else: - msg = "'strides' requires either tobytes() or tostring()" - raise ValueError(msg) - - return frombuffer(mode, size, obj, "raw", rawmode, 0, 1) - - -def fromarrow( - obj: SupportsArrowArrayInterface, mode: str, size: tuple[int, int] -) -> Image: - """Creates an image with zero-copy shared memory from an object exporting - the arrow_c_array interface protocol:: - - from PIL import Image - import pyarrow as pa - arr = pa.array([0]*(5*5*4), type=pa.uint8()) - im = Image.fromarrow(arr, 'RGBA', (5, 5)) - - If the data representation of the ``obj`` is not compatible with - Pillow internal storage, a ValueError is raised. - - Pillow images can also be converted to Arrow objects:: - - from PIL import Image - import pyarrow as pa - im = Image.open('hopper.jpg') - arr = pa.array(im) - - As with array support, when converting Pillow images to arrays, - only pixel values are transferred. This means that P and PA mode - images will lose their palette. - - :param obj: Object with an arrow_c_array interface - :param mode: Image mode. - :param size: Image size. This must match the storage of the arrow object. - :returns: An Image object - - Note that according to the Arrow spec, both the producer and the - consumer should consider the exported array to be immutable, as - unsynchronized updates will potentially cause inconsistent data. - - See: :ref:`arrow-support` for more detailed information - - .. 
versionadded:: 11.2.1 - - """ - if not hasattr(obj, "__arrow_c_array__"): - msg = "arrow_c_array interface not found" - raise ValueError(msg) - - (schema_capsule, array_capsule) = obj.__arrow_c_array__() - _im = core.new_arrow(mode, size, schema_capsule, array_capsule) - if _im: - return Image()._new(_im) - - msg = "new_arrow returned None without an exception" - raise ValueError(msg) - - -def fromqimage(im: ImageQt.QImage) -> ImageFile.ImageFile: - """Creates an image instance from a QImage image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqimage(im) - - -def fromqpixmap(im: ImageQt.QPixmap) -> ImageFile.ImageFile: - """Creates an image instance from a QPixmap image""" - from . import ImageQt - - if not ImageQt.qt_is_installed: - msg = "Qt bindings are not installed" - raise ImportError(msg) - return ImageQt.fromqpixmap(im) - - -_fromarray_typemap = { - # (shape, typestr) => mode, rawmode, color modes - # first two members of shape are set to one - ((1, 1), "|b1"): ("1", "1;8", []), - ((1, 1), "|u1"): ("L", "L", ["P"]), - ((1, 1), "|i1"): ("I", "I;8", []), - ((1, 1), "u2"): ("I", "I;16B", []), - ((1, 1), "i2"): ("I", "I;16BS", []), - ((1, 1), "u4"): ("I", "I;32B", []), - ((1, 1), "i4"): ("I", "I;32BS", []), - ((1, 1), "f4"): ("F", "F;32BF", []), - ((1, 1), "f8"): ("F", "F;64BF", []), - ((1, 1, 2), "|u1"): ("LA", "LA", ["La", "PA"]), - ((1, 1, 3), "|u1"): ("RGB", "RGB", ["YCbCr", "LAB", "HSV"]), - ((1, 1, 4), "|u1"): ("RGBA", "RGBA", ["RGBa", "RGBX", "CMYK"]), - # shortcuts: - ((1, 1), f"{_ENDIAN}i4"): ("I", "I", []), - ((1, 1), f"{_ENDIAN}f4"): ("F", "F", []), -} - - -def _decompression_bomb_check(size: tuple[int, int]) -> None: - if MAX_IMAGE_PIXELS is None: - return - - pixels = max(1, size[0]) * max(1, size[1]) - - if pixels > 2 * MAX_IMAGE_PIXELS: - msg = ( - f"Image size ({pixels} pixels) exceeds limit of {2 * MAX_IMAGE_PIXELS} " - "pixels, could be 
decompression bomb DOS attack." - ) - raise DecompressionBombError(msg) - - if pixels > MAX_IMAGE_PIXELS: - warnings.warn( - f"Image size ({pixels} pixels) exceeds limit of {MAX_IMAGE_PIXELS} pixels, " - "could be decompression bomb DOS attack.", - DecompressionBombWarning, - ) - - -def open( - fp: StrOrBytesPath | IO[bytes], - mode: Literal["r"] = "r", - formats: list[str] | tuple[str, ...] | None = None, -) -> ImageFile.ImageFile: - """ - Opens and identifies the given image file. - - This is a lazy operation; this function identifies the file, but - the file remains open and the actual image data is not read from - the file until you try to process the data (or call the - :py:meth:`~PIL.Image.Image.load` method). See - :py:func:`~PIL.Image.new`. See :ref:`file-handling`. - - :param fp: A filename (string), os.PathLike object or a file object. - The file object must implement ``file.read``, - ``file.seek``, and ``file.tell`` methods, - and be opened in binary mode. The file object will also seek to zero - before reading. - :param mode: The mode. If given, this argument must be "r". - :param formats: A list or tuple of formats to attempt to load the file in. - This can be used to restrict the set of formats checked. - Pass ``None`` to try all supported formats. You can print the set of - available formats by running ``python3 -m PIL`` or using - the :py:func:`PIL.features.pilinfo` function. - :returns: An :py:class:`~PIL.Image.Image` object. - :exception FileNotFoundError: If the file cannot be found. - :exception PIL.UnidentifiedImageError: If the image cannot be opened and - identified. - :exception ValueError: If the ``mode`` is not "r", or if a ``StringIO`` - instance is used for ``fp``. - :exception TypeError: If ``formats`` is not ``None``, a list or a tuple. 
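Reviewer note: the two-tier limit in `_decompression_bomb_check` earlier in this hunk warns past `MAX_IMAGE_PIXELS` and raises past twice that value. A minimal standalone sketch of the same arithmetic (the constant below is Pillow's documented default, `(1024**3) // 4 // 3`; the function name is illustrative, not PIL API):

```python
import warnings

MAX_IMAGE_PIXELS = 89_478_485  # Pillow's default: (1024**3) // 4 // 3


def check_size(size: tuple[int, int]) -> None:
    """Warn above the pixel limit, raise above twice the limit."""
    pixels = max(1, size[0]) * max(1, size[1])
    if pixels > 2 * MAX_IMAGE_PIXELS:
        raise ValueError(f"{pixels} pixels exceeds {2 * MAX_IMAGE_PIXELS}")
    if pixels > MAX_IMAGE_PIXELS:
        warnings.warn(f"{pixels} pixels exceeds {MAX_IMAGE_PIXELS}")


check_size((4000, 4000))  # well under the limit: no warning, no error
```

The warn-then-raise split gives callers a chance to raise `MAX_IMAGE_PIXELS` deliberately while still hard-failing on clearly pathological sizes.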
- """ - - if mode != "r": - msg = f"bad mode {repr(mode)}" # type: ignore[unreachable] - raise ValueError(msg) - elif isinstance(fp, io.StringIO): - msg = ( # type: ignore[unreachable] - "StringIO cannot be used to open an image. " - "Binary data must be used instead." - ) - raise ValueError(msg) - - if formats is None: - formats = ID - elif not isinstance(formats, (list, tuple)): - msg = "formats must be a list or tuple" # type: ignore[unreachable] - raise TypeError(msg) - - exclusive_fp = False - filename: str | bytes = "" - if is_path(fp): - filename = os.fspath(fp) - fp = builtins.open(filename, "rb") - exclusive_fp = True - else: - fp = cast(IO[bytes], fp) - - try: - fp.seek(0) - except (AttributeError, io.UnsupportedOperation): - fp = io.BytesIO(fp.read()) - exclusive_fp = True - - prefix = fp.read(16) - - preinit() - - warning_messages: list[str] = [] - - def _open_core( - fp: IO[bytes], - filename: str | bytes, - prefix: bytes, - formats: list[str] | tuple[str, ...], - ) -> ImageFile.ImageFile | None: - for i in formats: - i = i.upper() - if i not in OPEN: - init() - try: - factory, accept = OPEN[i] - result = not accept or accept(prefix) - if isinstance(result, str): - warning_messages.append(result) - elif result: - fp.seek(0) - im = factory(fp, filename) - _decompression_bomb_check(im.size) - return im - except (SyntaxError, IndexError, TypeError, struct.error) as e: - if WARN_POSSIBLE_FORMATS: - warning_messages.append(i + " opening failed. 
" + str(e)) - except BaseException: - if exclusive_fp: - fp.close() - raise - return None - - im = _open_core(fp, filename, prefix, formats) - - if im is None and formats is ID: - checked_formats = ID.copy() - if init(): - im = _open_core( - fp, - filename, - prefix, - tuple(format for format in formats if format not in checked_formats), - ) - - if im: - im._exclusive_fp = exclusive_fp - return im - - if exclusive_fp: - fp.close() - for message in warning_messages: - warnings.warn(message) - msg = "cannot identify image file %r" % (filename if filename else fp) - raise UnidentifiedImageError(msg) - - -# -# Image processing. - - -def alpha_composite(im1: Image, im2: Image) -> Image: - """ - Alpha composite im2 over im1. - - :param im1: The first image. Must have mode RGBA or LA. - :param im2: The second image. Must have the same mode and size as the first image. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.alpha_composite(im1.im, im2.im)) - - -def blend(im1: Image, im2: Image, alpha: float) -> Image: - """ - Creates a new image by interpolating between two input images, using - a constant alpha:: - - out = image1 * (1.0 - alpha) + image2 * alpha - - :param im1: The first image. - :param im2: The second image. Must have the same mode and size as - the first image. - :param alpha: The interpolation alpha factor. If alpha is 0.0, a - copy of the first image is returned. If alpha is 1.0, a copy of - the second image is returned. There are no restrictions on the - alpha value. If necessary, the result is clipped to fit into - the allowed output range. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - im1.load() - im2.load() - return im1._new(core.blend(im1.im, im2.im, alpha)) - - -def composite(image1: Image, image2: Image, mask: Image) -> Image: - """ - Create composite image by blending images using a transparency mask. - - :param image1: The first image. - :param image2: The second image. 
Must have the same mode and - size as the first image. - :param mask: A mask image. This image can have mode - "1", "L", or "RGBA", and must have the same size as the - other two images. - """ - - image = image2.copy() - image.paste(image1, None, mask) - return image - - -def eval(image: Image, *args: Callable[[int], float]) -> Image: - """ - Applies the function (which should take one argument) to each pixel - in the given image. If the image has more than one band, the same - function is applied to each band. Note that the function is - evaluated once for each possible pixel value, so you cannot use - random components or other generators. - - :param image: The input image. - :param function: A function object, taking one integer argument. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - return image.point(args[0]) - - -def merge(mode: str, bands: Sequence[Image]) -> Image: - """ - Merge a set of single band images into a new multiband image. - - :param mode: The mode to use for the output image. See: - :ref:`concept-modes`. - :param bands: A sequence containing one single-band image for - each band in the output image. All bands must have the - same size. - :returns: An :py:class:`~PIL.Image.Image` object. - """ - - if getmodebands(mode) != len(bands) or "*" in mode: - msg = "wrong number of bands" - raise ValueError(msg) - for band in bands[1:]: - if band.mode != getmodetype(mode): - msg = "mode mismatch" - raise ValueError(msg) - if band.size != bands[0].size: - msg = "size mismatch" - raise ValueError(msg) - for band in bands: - band.load() - return bands[0]._new(core.merge(mode, *[b.im for b in bands])) - - -# -------------------------------------------------------------------- -# Plugin registry - - -def register_open( - id: str, - factory: ( - Callable[[IO[bytes], str | bytes], ImageFile.ImageFile] - | type[ImageFile.ImageFile] - ), - accept: Callable[[bytes], bool | str] | None = None, -) -> None: - """ - Register an image file plugin. 
This function should not be used - in application code. - - :param id: An image format identifier. - :param factory: An image file factory method. - :param accept: An optional function that can be used to quickly - reject images having another format. - """ - id = id.upper() - if id not in ID: - ID.append(id) - OPEN[id] = factory, accept - - -def register_mime(id: str, mimetype: str) -> None: - """ - Registers an image MIME type by populating ``Image.MIME``. This function - should not be used in application code. - - ``Image.MIME`` provides a mapping from image format identifiers to mime - formats, but :py:meth:`~PIL.ImageFile.ImageFile.get_format_mimetype` can - provide a different result for specific images. - - :param id: An image format identifier. - :param mimetype: The image MIME type for this format. - """ - MIME[id.upper()] = mimetype - - -def register_save( - id: str, driver: Callable[[Image, IO[bytes], str | bytes], None] -) -> None: - """ - Registers an image save function. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE[id.upper()] = driver - - -def register_save_all( - id: str, driver: Callable[[Image, IO[bytes], str | bytes], None] -) -> None: - """ - Registers an image function to save all the frames - of a multiframe format. This function should not be - used in application code. - - :param id: An image format identifier. - :param driver: A function to save images in this format. - """ - SAVE_ALL[id.upper()] = driver - - -def register_extension(id: str, extension: str) -> None: - """ - Registers an image extension. This function should not be - used in application code. - - :param id: An image format identifier. - :param extension: An extension used for this format. - """ - EXTENSION[extension.lower()] = id.upper() - - -def register_extensions(id: str, extensions: list[str]) -> None: - """ - Registers image extensions. 
This function should not be - used in application code. - - :param id: An image format identifier. - :param extensions: A list of extensions used for this format. - """ - for extension in extensions: - register_extension(id, extension) - - -def registered_extensions() -> dict[str, str]: - """ - Returns a dictionary containing all file extensions belonging - to registered plugins - """ - init() - return EXTENSION - - -def register_decoder(name: str, decoder: type[ImageFile.PyDecoder]) -> None: - """ - Registers an image decoder. This function should not be - used in application code. - - :param name: The name of the decoder - :param decoder: An ImageFile.PyDecoder object - - .. versionadded:: 4.1.0 - """ - DECODERS[name] = decoder - - -def register_encoder(name: str, encoder: type[ImageFile.PyEncoder]) -> None: - """ - Registers an image encoder. This function should not be - used in application code. - - :param name: The name of the encoder - :param encoder: An ImageFile.PyEncoder object - - .. versionadded:: 4.1.0 - """ - ENCODERS[name] = encoder - - -# -------------------------------------------------------------------- -# Simple display support. - - -def _show(image: Image, **options: Any) -> None: - from . import ImageShow - - deprecate("Image._show", 13, "ImageShow.show") - ImageShow.show(image, **options) - - -# -------------------------------------------------------------------- -# Effects - - -def effect_mandelbrot( - size: tuple[int, int], extent: tuple[float, float, float, float], quality: int -) -> Image: - """ - Generate a Mandelbrot set covering the given extent. - - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param extent: The extent to cover, as a 4-tuple: - (x0, y0, x1, y1). - :param quality: Quality. - """ - return Image()._new(core.effect_mandelbrot(size, extent, quality)) - - -def effect_noise(size: tuple[int, int], sigma: float) -> Image: - """ - Generate Gaussian noise centered around 128. 
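Reviewer note: the plugin registry removed above is a plain dict keyed by lowercased extension mapping to uppercased format id. A standalone sketch of `register_extension`/`register_extensions` under that convention (module-level dict mirrors, not the PIL module itself):

```python
EXTENSION: dict[str, str] = {}


def register_extension(id: str, extension: str) -> None:
    # Extensions are stored lowercased, format ids uppercased,
    # matching the PIL registry convention.
    EXTENSION[extension.lower()] = id.upper()


def register_extensions(id: str, extensions: list[str]) -> None:
    for extension in extensions:
        register_extension(id, extension)


register_extensions("jpeg", [".JPG", ".jpeg"])
print(EXTENSION)  # {'.jpg': 'JPEG', '.jpeg': 'JPEG'}
```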
- - :param size: The requested size in pixels, as a 2-tuple: - (width, height). - :param sigma: Standard deviation of noise. - """ - return Image()._new(core.effect_noise(size, sigma)) - - -def linear_gradient(mode: str) -> Image: - """ - Generate 256x256 linear gradient from black to white, top to bottom. - - :param mode: Input mode. - """ - return Image()._new(core.linear_gradient(mode)) - - -def radial_gradient(mode: str) -> Image: - """ - Generate 256x256 radial gradient from black to white, centre to edge. - - :param mode: Input mode. - """ - return Image()._new(core.radial_gradient(mode)) - - -# -------------------------------------------------------------------- -# Resources - - -def _apply_env_variables(env: dict[str, str] | None = None) -> None: - env_dict = env if env is not None else os.environ - - for var_name, setter in [ - ("PILLOW_ALIGNMENT", core.set_alignment), - ("PILLOW_BLOCK_SIZE", core.set_block_size), - ("PILLOW_BLOCKS_MAX", core.set_blocks_max), - ]: - if var_name not in env_dict: - continue - - var = env_dict[var_name].lower() - - units = 1 - for postfix, mul in [("k", 1024), ("m", 1024 * 1024)]: - if var.endswith(postfix): - units = mul - var = var[: -len(postfix)] - - try: - var_int = int(var) * units - except ValueError: - warnings.warn(f"{var_name} is not int") - continue - - try: - setter(var_int) - except ValueError as e: - warnings.warn(f"{var_name}: {e}") - - -_apply_env_variables() -atexit.register(core.clear_cache) - - -if TYPE_CHECKING: - _ExifBase = MutableMapping[int, Any] -else: - _ExifBase = MutableMapping - - -class Exif(_ExifBase): - """ - This class provides read and write access to EXIF image data:: - - from PIL import Image - im = Image.open("exif.png") - exif = im.getexif() # Returns an instance of this class - - Information can be read and written, iterated over or deleted:: - - print(exif[274]) # 1 - exif[274] = 2 - for k, v in exif.items(): - print("Tag", k, "Value", v) # Tag 274 Value 2 - del exif[274] - - To access 
information beyond IFD0, :py:meth:`~PIL.Image.Exif.get_ifd` - returns a dictionary:: - - from PIL import ExifTags - im = Image.open("exif_gps.jpg") - exif = im.getexif() - gps_ifd = exif.get_ifd(ExifTags.IFD.GPSInfo) - print(gps_ifd) - - Other IFDs include ``ExifTags.IFD.Exif``, ``ExifTags.IFD.MakerNote``, - ``ExifTags.IFD.Interop`` and ``ExifTags.IFD.IFD1``. - - :py:mod:`~PIL.ExifTags` also has enum classes to provide names for data:: - - print(exif[ExifTags.Base.Software]) # PIL - print(gps_ifd[ExifTags.GPS.GPSDateStamp]) # 1999:99:99 99:99:99 - """ - - endian: str | None = None - bigtiff = False - _loaded = False - - def __init__(self) -> None: - self._data: dict[int, Any] = {} - self._hidden_data: dict[int, Any] = {} - self._ifds: dict[int, dict[int, Any]] = {} - self._info: TiffImagePlugin.ImageFileDirectory_v2 | None = None - self._loaded_exif: bytes | None = None - - def _fixup(self, value: Any) -> Any: - try: - if len(value) == 1 and isinstance(value, tuple): - return value[0] - except Exception: - pass - return value - - def _fixup_dict(self, src_dict: dict[int, Any]) -> dict[int, Any]: - # Helper function - # returns a dict with any single item tuples/lists as individual values - return {k: self._fixup(v) for k, v in src_dict.items()} - - def _get_ifd_dict( - self, offset: int, group: int | None = None - ) -> dict[int, Any] | None: - try: - # an offset pointer to the location of the nested embedded IFD. - # It should be a long, but may be corrupted. - self.fp.seek(offset) - except (KeyError, TypeError): - return None - else: - from . 
import TiffImagePlugin - - info = TiffImagePlugin.ImageFileDirectory_v2(self.head, group=group) - info.load(self.fp) - return self._fixup_dict(dict(info)) - - def _get_head(self) -> bytes: - version = b"\x2b" if self.bigtiff else b"\x2a" - if self.endian == "<": - head = b"II" + version + b"\x00" + o32le(8) - else: - head = b"MM\x00" + version + o32be(8) - if self.bigtiff: - head += o32le(8) if self.endian == "<" else o32be(8) - head += b"\x00\x00\x00\x00" - return head - - def load(self, data: bytes) -> None: - # Extract EXIF information. This is highly experimental, - # and is likely to be replaced with something better in a future - # version. - - # The EXIF record consists of a TIFF file embedded in a JPEG - # application marker (!). - if data == self._loaded_exif: - return - self._loaded_exif = data - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - while data and data.startswith(b"Exif\x00\x00"): - data = data[6:] - if not data: - self._info = None - return - - self.fp: IO[bytes] = io.BytesIO(data) - self.head = self.fp.read(8) - # process dictionary - from . import TiffImagePlugin - - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - self.endian = self._info._endian - self.fp.seek(self._info.next) - self._info.load(self.fp) - - def load_from_fp(self, fp: IO[bytes], offset: int | None = None) -> None: - self._loaded_exif = None - self._data.clear() - self._hidden_data.clear() - self._ifds.clear() - - # process dictionary - from . 
import TiffImagePlugin - - self.fp = fp - if offset is not None: - self.head = self._get_head() - else: - self.head = self.fp.read(8) - self._info = TiffImagePlugin.ImageFileDirectory_v2(self.head) - if self.endian is None: - self.endian = self._info._endian - if offset is None: - offset = self._info.next - self.fp.tell() - self.fp.seek(offset) - self._info.load(self.fp) - - def _get_merged_dict(self) -> dict[int, Any]: - merged_dict = dict(self) - - # get EXIF extension - if ExifTags.IFD.Exif in self: - ifd = self._get_ifd_dict(self[ExifTags.IFD.Exif], ExifTags.IFD.Exif) - if ifd: - merged_dict.update(ifd) - - # GPS - if ExifTags.IFD.GPSInfo in self: - merged_dict[ExifTags.IFD.GPSInfo] = self._get_ifd_dict( - self[ExifTags.IFD.GPSInfo], ExifTags.IFD.GPSInfo - ) - - return merged_dict - - def tobytes(self, offset: int = 8) -> bytes: - from . import TiffImagePlugin - - head = self._get_head() - ifd = TiffImagePlugin.ImageFileDirectory_v2(ifh=head) - for tag, ifd_dict in self._ifds.items(): - if tag not in self: - ifd[tag] = ifd_dict - for tag, value in self.items(): - if tag in [ - ExifTags.IFD.Exif, - ExifTags.IFD.GPSInfo, - ] and not isinstance(value, dict): - value = self.get_ifd(tag) - if ( - tag == ExifTags.IFD.Exif - and ExifTags.IFD.Interop in value - and not isinstance(value[ExifTags.IFD.Interop], dict) - ): - value = value.copy() - value[ExifTags.IFD.Interop] = self.get_ifd(ExifTags.IFD.Interop) - ifd[tag] = value - return b"Exif\x00\x00" + head + ifd.tobytes(offset) - - def get_ifd(self, tag: int) -> dict[int, Any]: - if tag not in self._ifds: - if tag == ExifTags.IFD.IFD1: - if self._info is not None and self._info.next != 0: - ifd = self._get_ifd_dict(self._info.next) - if ifd is not None: - self._ifds[tag] = ifd - elif tag in [ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo]: - offset = self._hidden_data.get(tag, self.get(tag)) - if offset is not None: - ifd = self._get_ifd_dict(offset, tag) - if ifd is not None: - self._ifds[tag] = ifd - elif tag in 
[ExifTags.IFD.Interop, ExifTags.IFD.MakerNote]: - if ExifTags.IFD.Exif not in self._ifds: - self.get_ifd(ExifTags.IFD.Exif) - tag_data = self._ifds[ExifTags.IFD.Exif][tag] - if tag == ExifTags.IFD.MakerNote: - from .TiffImagePlugin import ImageFileDirectory_v2 - - if tag_data.startswith(b"FUJIFILM"): - ifd_offset = i32le(tag_data, 8) - ifd_data = tag_data[ifd_offset:] - - makernote = {} - for i in range(struct.unpack("<H", ifd_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - "<HHL4s", ifd_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - try: - ( - unit_size, - handler, - ) = ImageFileDirectory_v2._load_dispatch[typ] - except KeyError: - continue - size = count * unit_size - if size > 4: - (offset,) = struct.unpack("<L", data) - data = ifd_data[offset - 12 : offset + size - 12] - else: - data = data[:size] - if len(data) != size: - continue - makernote[ifd_tag] = handler( - ImageFileDirectory_v2(), data, False - )[0] - self._ifds[tag] = dict(self._fixup_dict(makernote)) - elif self.endian == "<": - makernote = {} - for i in range(struct.unpack(">H", tag_data[:2])[0]): - ifd_tag, typ, count, data = struct.unpack( - ">HHL4s", tag_data[i * 12 + 2 : (i + 1) * 12 + 2] - ) - if ifd_tag == 0x1101: - # CameraInfo - (offset,) = struct.unpack(">L", data) - self.fp.seek(offset) - - camerainfo: dict[str, int | bytes] = { - "ModelID": self.fp.read(4) - } - - self.fp.read(4) - # Seconds since 2000 - camerainfo["TimeStamp"] = i32le(self.fp.read(12)) - - self.fp.read(4) - camerainfo["InternalSerialNumber"] = self.fp.read(4) - - self.fp.read(12) - parallax = self.fp.read(4) - handler = ImageFileDirectory_v2._load_dispatch[ - TiffTags.FLOAT - ][1] - camerainfo["Parallax"] = handler( - ImageFileDirectory_v2(), parallax, False - )[0] - - self.fp.read(4) - camerainfo["Category"] = self.fp.read(2) - - makernote = {0x1101: camerainfo} - self._ifds[tag] = makernote - else: - # Interop - ifd = self._get_ifd_dict(tag_data, tag) - if ifd is not None: - self._ifds[tag] = ifd - ifd = self._ifds.setdefault(tag, {}) - if tag == ExifTags.IFD.Exif and self._hidden_data: - ifd = { - k: v - for (k, v) in ifd.items() - if k not in (ExifTags.IFD.Interop, ExifTags.IFD.MakerNote) - } - return ifd - - def hide_offsets(self) -> None: - for tag in (ExifTags.IFD.Exif, ExifTags.IFD.GPSInfo): - if tag in self: - self._hidden_data[tag] = self[tag] - del self[tag] - - def __str__(self) -> str: - if self._info is not None: - # Load all keys into self._data - for tag in self._info: - self[tag] - - return str(self._data) - - def __len__(self) -> int: - keys = set(self._data) - if self._info is not None: -
keys.update(self._info) - return len(keys) - - def __getitem__(self, tag: int) -> Any: - if self._info is not None and tag not in self._data and tag in self._info: - self._data[tag] = self._fixup(self._info[tag]) - del self._info[tag] - return self._data[tag] - - def __contains__(self, tag: object) -> bool: - return tag in self._data or (self._info is not None and tag in self._info) - - def __setitem__(self, tag: int, value: Any) -> None: - if self._info is not None and tag in self._info: - del self._info[tag] - self._data[tag] = value - - def __delitem__(self, tag: int) -> None: - if self._info is not None and tag in self._info: - del self._info[tag] - else: - del self._data[tag] - if tag in self._ifds: - del self._ifds[tag] - - def __iter__(self) -> Iterator[int]: - keys = set(self._data) - if self._info is not None: - keys.update(self._info) - return iter(keys) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageChops.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageChops.py deleted file mode 100644 index 29a5c995..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageChops.py +++ /dev/null @@ -1,311 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard channel operations -# -# History: -# 1996-03-24 fl Created -# 1996-08-13 fl Added logical operations (for "1" images) -# 2000-10-12 fl Added offset method (from Image.py) -# -# Copyright (c) 1997-2000 by Secret Labs AB -# Copyright (c) 1996-2000 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from __future__ import annotations - -from . import Image - - -def constant(image: Image.Image, value: int) -> Image.Image: - """Fill a channel with a given gray level. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.new("L", image.size, value) - - -def duplicate(image: Image.Image) -> Image.Image: - """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`. 
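Reviewer note: the `Exif._get_head` helper in the hunk above emits the classic 8-byte TIFF header (byte-order mark, magic 42, then the offset of the first IFD). The struct-based sketch below reproduces that non-BigTIFF layout in isolation (standalone helper, not the Exif class):

```python
import struct


def tiff_head(endian: str) -> bytes:
    # Byte-order mark ("II" or "MM"), magic number 42,
    # then the 4-byte offset of the first IFD (8).
    if endian == "<":
        return b"II" + b"\x2a\x00" + struct.pack("<L", 8)
    return b"MM" + b"\x00\x2a" + struct.pack(">L", 8)


print(tiff_head("<").hex())  # 49492a0008000000
```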
- - :rtype: :py:class:`~PIL.Image.Image` - """ - - return image.copy() - - -def invert(image: Image.Image) -> Image.Image: - """ - Invert an image (channel). :: - - out = MAX - image - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image.load() - return image._new(image.im.chop_invert()) - - -def lighter(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Compares the two images, pixel by pixel, and returns a new image containing - the lighter values. :: - - out = max(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_lighter(image2.im)) - - -def darker(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Compares the two images, pixel by pixel, and returns a new image containing - the darker values. :: - - out = min(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_darker(image2.im)) - - -def difference(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Returns the absolute value of the pixel-by-pixel difference between the two - images. :: - - out = abs(image1 - image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_difference(image2.im)) - - -def multiply(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Superimposes two images on top of each other. - - If you multiply an image with a solid black image, the result is black. If - you multiply with a solid white image, the image is unaffected. :: - - out = image1 * image2 / MAX - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_multiply(image2.im)) - - -def screen(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Superimposes two inverted images on top of each other. 
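Reviewer note: the docstring formulas for `multiply` (`out = image1 * image2 / MAX`) and `screen` (`out = MAX - ((MAX - image1) * (MAX - image2) / MAX)`) reduce to simple integer arithmetic per 8-bit sample. A hedged per-pixel sketch on bare ints with `MAX = 255`, not the C-accelerated `chop_*` operations:

```python
MAX = 255


def multiply_px(a: int, b: int) -> int:
    # out = a * b / MAX: black (0) forces black, white (255) is identity
    return a * b // MAX


def screen_px(a: int, b: int) -> int:
    # out = MAX - (MAX - a) * (MAX - b) / MAX: the inverted-multiply dual
    return MAX - (MAX - a) * (MAX - b) // MAX


print(multiply_px(128, 255), screen_px(128, 0))  # 128 128
```

The duality is visible in the identities: multiplying by white leaves a pixel unchanged, screening against black does the same.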
:: - - out = MAX - ((MAX - image1) * (MAX - image2) / MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_screen(image2.im)) - - -def soft_light(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Superimposes two images on top of each other using the Soft Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_soft_light(image2.im)) - - -def hard_light(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Superimposes two images on top of each other using the Hard Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_hard_light(image2.im)) - - -def overlay(image1: Image.Image, image2: Image.Image) -> Image.Image: - """ - Superimposes two images on top of each other using the Overlay algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_overlay(image2.im)) - - -def add( - image1: Image.Image, image2: Image.Image, scale: float = 1.0, offset: float = 0 -) -> Image.Image: - """ - Adds two images, dividing the result by scale and adding the - offset. If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 + image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add(image2.im, scale, offset)) - - -def subtract( - image1: Image.Image, image2: Image.Image, scale: float = 1.0, offset: float = 0 -) -> Image.Image: - """ - Subtracts two images, dividing the result by scale and adding the offset. - If omitted, scale defaults to 1.0, and offset to 0.0. 
:: - - out = ((image1 - image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract(image2.im, scale, offset)) - - -def add_modulo(image1: Image.Image, image2: Image.Image) -> Image.Image: - """Add two images, without clipping the result. :: - - out = ((image1 + image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add_modulo(image2.im)) - - -def subtract_modulo(image1: Image.Image, image2: Image.Image) -> Image.Image: - """Subtract two images, without clipping the result. :: - - out = ((image1 - image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract_modulo(image2.im)) - - -def logical_and(image1: Image.Image, image2: Image.Image) -> Image.Image: - """Logical AND between two images. - - Both of the images must have mode "1". If you would like to perform a - logical AND on an image with a mode other than "1", try - :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask - as the second image. :: - - out = ((image1 and image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_and(image2.im)) - - -def logical_or(image1: Image.Image, image2: Image.Image) -> Image.Image: - """Logical OR between two images. - - Both of the images must have mode "1". :: - - out = ((image1 or image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_or(image2.im)) - - -def logical_xor(image1: Image.Image, image2: Image.Image) -> Image.Image: - """Logical XOR between two images. - - Both of the images must have mode "1". 
:: - - out = ((bool(image1) != bool(image2)) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_xor(image2.im)) - - -def blend(image1: Image.Image, image2: Image.Image, alpha: float) -> Image.Image: - """Blend images using constant transparency weight. Alias for - :py:func:`PIL.Image.blend`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.blend(image1, image2, alpha) - - -def composite( - image1: Image.Image, image2: Image.Image, mask: Image.Image -) -> Image.Image: - """Create composite using transparency mask. Alias for - :py:func:`PIL.Image.composite`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.composite(image1, image2, mask) - - -def offset(image: Image.Image, xoffset: int, yoffset: int | None = None) -> Image.Image: - """Returns a copy of the image where data has been offset by the given - distances. Data wraps around the edges. If ``yoffset`` is omitted, it - is assumed to be equal to ``xoffset``. - - :param image: Input image. - :param xoffset: The horizontal distance. - :param yoffset: The vertical distance. If omitted, both - distances are set to the same value. - :rtype: :py:class:`~PIL.Image.Image` - """ - - if yoffset is None: - yoffset = xoffset - image.load() - return image._new(image.im.offset(xoffset, yoffset)) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageCms.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageCms.py deleted file mode 100644 index 513e28ac..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageCms.py +++ /dev/null @@ -1,1076 +0,0 @@ -# The Python Imaging Library. -# $Id$ - -# Optional color management support, based on Kevin Cazabon's PyCMS -# library. - -# Originally released under LGPL. Graciously donated to PIL in -# March 2009, for distribution under the standard PIL license - -# History: - -# 2009-03-08 fl Added to PIL. 
- -# Copyright (C) 2002-2003 Kevin Cazabon -# Copyright (c) 2009 by Fredrik Lundh -# Copyright (c) 2013 by Eric Soroos - -# See the README file for information on usage and redistribution. See -# below for the original description. -from __future__ import annotations - -import operator -import sys -from enum import IntEnum, IntFlag -from functools import reduce -from typing import Any, Literal, SupportsFloat, SupportsInt, Union - -from . import Image -from ._deprecate import deprecate -from ._typing import SupportsRead - -try: - from . import _imagingcms as core - - _CmsProfileCompatible = Union[ - str, SupportsRead[bytes], core.CmsProfile, "ImageCmsProfile" - ] -except ImportError as ex: - # Allow error import for doc purposes, but error out when accessing - # anything in core. - from ._util import DeferredError - - core = DeferredError.new(ex) - -_DESCRIPTION = """ -pyCMS - - a Python / PIL interface to the littleCMS ICC Color Management System - Copyright (C) 2002-2003 Kevin Cazabon - kevin@cazabon.com - https://www.cazabon.com - - pyCMS home page: https://www.cazabon.com/pyCMS - littleCMS home page: https://www.littlecms.com - (littleCMS is Copyright (C) 1998-2001 Marti Maria) - - Originally released under LGPL. Graciously donated to PIL in - March 2009, for distribution under the standard PIL license - - The pyCMS.py module provides a "clean" interface between Python/PIL and - pyCMSdll, taking care of some of the more complex handling of the direct - pyCMSdll functions, as well as error-checking and making sure that all - relevant data is kept together. - - While it is possible to call pyCMSdll functions directly, it's not highly - recommended. - - Version History: - - 1.0.0 pil Oct 2013 Port to LCMS 2. - - 0.1.0 pil mod March 10, 2009 - - Renamed display profile to proof profile. 
The proof - profile is the profile of the device that is being - simulated, not the profile of the device which is - actually used to display/print the final simulation - (that'd be the output profile) - also see LCMSAPI.txt - input colorspace -> using 'renderingIntent' -> proof - colorspace -> using 'proofRenderingIntent' -> output - colorspace - - Added LCMS FLAGS support. - Added FLAGS["SOFTPROOFING"] as default flag for - buildProofTransform (otherwise the proof profile/intent - would be ignored). - - 0.1.0 pil March 2009 - added to PIL, as PIL.ImageCms - - 0.0.2 alpha Jan 6, 2002 - - Added try/except statements around type() checks of - potential CObjects... Python won't let you use type() - on them, and raises a TypeError (stupid, if you ask - me!) - - Added buildProofTransformFromOpenProfiles() function. - Additional fixes in DLL, see DLL code for details. - - 0.0.1 alpha first public release, Dec. 26, 2002 - - Known to-do list with current version (of Python interface, not pyCMSdll): - - none - -""" - -_VERSION = "1.0.0 pil" - - -# --------------------------------------------------------------------. 
- - -# -# intent/direction values - - -class Intent(IntEnum): - PERCEPTUAL = 0 - RELATIVE_COLORIMETRIC = 1 - SATURATION = 2 - ABSOLUTE_COLORIMETRIC = 3 - - -class Direction(IntEnum): - INPUT = 0 - OUTPUT = 1 - PROOF = 2 - - -# -# flags - - -class Flags(IntFlag): - """Flags and documentation are taken from ``lcms2.h``.""" - - NONE = 0 - NOCACHE = 0x0040 - """Inhibit 1-pixel cache""" - NOOPTIMIZE = 0x0100 - """Inhibit optimizations""" - NULLTRANSFORM = 0x0200 - """Don't transform anyway""" - GAMUTCHECK = 0x1000 - """Out of Gamut alarm""" - SOFTPROOFING = 0x4000 - """Do softproofing""" - BLACKPOINTCOMPENSATION = 0x2000 - NOWHITEONWHITEFIXUP = 0x0004 - """Don't fix scum dot""" - HIGHRESPRECALC = 0x0400 - """Use more memory to give better accuracy""" - LOWRESPRECALC = 0x0800 - """Use less memory to minimize resources""" - # this should be 8BITS_DEVICELINK, but that is not a valid name in Python: - USE_8BITS_DEVICELINK = 0x0008 - """Create 8 bits devicelinks""" - GUESSDEVICECLASS = 0x0020 - """Guess device class (for ``transform2devicelink``)""" - KEEP_SEQUENCE = 0x0080 - """Keep profile sequence for devicelink creation""" - FORCE_CLUT = 0x0002 - """Force CLUT optimization""" - CLUT_POST_LINEARIZATION = 0x0001 - """create postlinearization tables if possible""" - CLUT_PRE_LINEARIZATION = 0x0010 - """create prelinearization tables if possible""" - NONEGATIVES = 0x8000 - """Prevent negative numbers in floating point transforms""" - COPY_ALPHA = 0x04000000 - """Alpha channels are copied on ``cmsDoTransform()``""" - NODEFAULTRESOURCEDEF = 0x01000000 - - _GRIDPOINTS_1 = 1 << 16 - _GRIDPOINTS_2 = 2 << 16 - _GRIDPOINTS_4 = 4 << 16 - _GRIDPOINTS_8 = 8 << 16 - _GRIDPOINTS_16 = 16 << 16 - _GRIDPOINTS_32 = 32 << 16 - _GRIDPOINTS_64 = 64 << 16 - _GRIDPOINTS_128 = 128 << 16 - - @staticmethod - def GRIDPOINTS(n: int) -> Flags: - """ - Fine-tune control over number of gridpoints - - :param n: :py:class:`int` in range ``0 <= n <= 255`` - """ - return Flags.NONE | ((n & 0xFF) << 16) - - 
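The ``Flags.GRIDPOINTS`` helper deleted above packs a gridpoint count into bits 16-23 of the flag word (``(n & 0xFF) << 16``), and the module derives its maximum legal flag value by OR-ing every member together (``reduce(operator.or_, Flags)``, computed just below). A stdlib-only sketch of that encoding, using an illustrative ``DemoFlags`` stand-in rather than the real ``PIL.ImageCms.Flags``:

```python
import operator
from enum import IntFlag
from functools import reduce


class DemoFlags(IntFlag):
    """Cut-down stand-in for PIL.ImageCms.Flags (illustrative subset only)."""

    NONE = 0
    BLACKPOINTCOMPENSATION = 0x2000
    SOFTPROOFING = 0x4000
    GRIDPOINTS_128 = 128 << 16  # highest gridpoint bit used by the real class


def gridpoints(n: int) -> DemoFlags:
    # Same encoding as Flags.GRIDPOINTS: keep the low 8 bits of n and
    # shift them into bits 16-23 of the flag word.
    return DemoFlags.NONE | ((n & 0xFF) << 16)


# OR-ing every member yields the largest legal flag value, mirroring
# _MAX_FLAG = reduce(operator.or_, Flags) in the module.
MAX_FLAG = reduce(operator.or_, DemoFlags)

print(hex(int(gridpoints(16))))           # bits 16-23 carry the count
print(gridpoints(300) == gridpoints(44))  # only the low 8 bits of n survive
```

Because counts above 255 are masked with ``& 0xFF``, ``gridpoints(300)`` and ``gridpoints(44)`` encode to the same flag value, which is why the real API documents the valid range as ``0 <= n <= 255``.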
-_MAX_FLAG = reduce(operator.or_, Flags) - - -_FLAGS = { - "MATRIXINPUT": 1, - "MATRIXOUTPUT": 2, - "MATRIXONLY": (1 | 2), - "NOWHITEONWHITEFIXUP": 4, # Don't hot fix scum dot - # Don't create prelinearization tables on precalculated transforms - # (internal use): - "NOPRELINEARIZATION": 16, - "GUESSDEVICECLASS": 32, # Guess device class (for transform2devicelink) - "NOTCACHE": 64, # Inhibit 1-pixel cache - "NOTPRECALC": 256, - "NULLTRANSFORM": 512, # Don't transform anyway - "HIGHRESPRECALC": 1024, # Use more memory to give better accuracy - "LOWRESPRECALC": 2048, # Use less memory to minimize resources - "WHITEBLACKCOMPENSATION": 8192, - "BLACKPOINTCOMPENSATION": 8192, - "GAMUTCHECK": 4096, # Out of Gamut alarm - "SOFTPROOFING": 16384, # Do softproofing - "PRESERVEBLACK": 32768, # Black preservation - "NODEFAULTRESOURCEDEF": 16777216, # CRD special - "GRIDPOINTS": lambda n: (n & 0xFF) << 16, # Gridpoints -} - - -# --------------------------------------------------------------------. -# Experimental PIL-level API -# --------------------------------------------------------------------. - -## -# Profile. 
- - -class ImageCmsProfile: - def __init__(self, profile: str | SupportsRead[bytes] | core.CmsProfile) -> None: - """ - :param profile: Either a string representing a filename, - a file like object containing a profile or a - low-level profile object - - """ - self.filename: str | None = None - - if isinstance(profile, str): - if sys.platform == "win32": - profile_bytes_path = profile.encode() - try: - profile_bytes_path.decode("ascii") - except UnicodeDecodeError: - with open(profile, "rb") as f: - self.profile = core.profile_frombytes(f.read()) - return - self.filename = profile - self.profile = core.profile_open(profile) - elif hasattr(profile, "read"): - self.profile = core.profile_frombytes(profile.read()) - elif isinstance(profile, core.CmsProfile): - self.profile = profile - else: - msg = "Invalid type for Profile" # type: ignore[unreachable] - raise TypeError(msg) - - def __getattr__(self, name: str) -> Any: - if name in ("product_name", "product_info"): - deprecate(f"ImageCms.ImageCmsProfile.{name}", 13) - return None - msg = f"'{self.__class__.__name__}' object has no attribute '{name}'" - raise AttributeError(msg) - - def tobytes(self) -> bytes: - """ - Returns the profile in a format suitable for embedding in - saved images. - - :returns: a bytes object containing the ICC profile. - """ - - return core.profile_tobytes(self.profile) - - -class ImageCmsTransform(Image.ImagePointHandler): - """ - Transform. This can be used with the procedural API, or with the standard - :py:func:`~PIL.Image.Image.point` method. - - Will return the output profile in the ``output.info['icc_profile']``. 
- """ - - def __init__( - self, - input: ImageCmsProfile, - output: ImageCmsProfile, - input_mode: str, - output_mode: str, - intent: Intent = Intent.PERCEPTUAL, - proof: ImageCmsProfile | None = None, - proof_intent: Intent = Intent.ABSOLUTE_COLORIMETRIC, - flags: Flags = Flags.NONE, - ): - if proof is None: - self.transform = core.buildTransform( - input.profile, output.profile, input_mode, output_mode, intent, flags - ) - else: - self.transform = core.buildProofTransform( - input.profile, - output.profile, - proof.profile, - input_mode, - output_mode, - intent, - proof_intent, - flags, - ) - # Note: inputMode and outputMode are for pyCMS compatibility only - self.input_mode = self.inputMode = input_mode - self.output_mode = self.outputMode = output_mode - - self.output_profile = output - - def point(self, im: Image.Image) -> Image.Image: - return self.apply(im) - - def apply(self, im: Image.Image, imOut: Image.Image | None = None) -> Image.Image: - if imOut is None: - imOut = Image.new(self.output_mode, im.size, None) - self.transform.apply(im.getim(), imOut.getim()) - imOut.info["icc_profile"] = self.output_profile.tobytes() - return imOut - - def apply_in_place(self, im: Image.Image) -> Image.Image: - if im.mode != self.output_mode: - msg = "mode mismatch" - raise ValueError(msg) # wrong output mode - self.transform.apply(im.getim(), im.getim()) - im.info["icc_profile"] = self.output_profile.tobytes() - return im - - -def get_display_profile(handle: SupportsInt | None = None) -> ImageCmsProfile | None: - """ - (experimental) Fetches the profile for the current display device. - - :returns: ``None`` if the profile is not known. - """ - - if sys.platform != "win32": - return None - - from . 
import ImageWin # type: ignore[unused-ignore, unreachable] - - if isinstance(handle, ImageWin.HDC): - profile = core.get_display_profile_win32(int(handle), 1) - else: - profile = core.get_display_profile_win32(int(handle or 0)) - if profile is None: - return None - return ImageCmsProfile(profile) - - -# --------------------------------------------------------------------. -# pyCMS compatible layer -# --------------------------------------------------------------------. - - -class PyCMSError(Exception): - """(pyCMS) Exception class. - This is used for all errors in the pyCMS API.""" - - pass - - -def profileToProfile( - im: Image.Image, - inputProfile: _CmsProfileCompatible, - outputProfile: _CmsProfileCompatible, - renderingIntent: Intent = Intent.PERCEPTUAL, - outputMode: str | None = None, - inPlace: bool = False, - flags: Flags = Flags.NONE, -) -> Image.Image | None: - """ - (pyCMS) Applies an ICC transformation to a given image, mapping from - ``inputProfile`` to ``outputProfile``. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If ``inPlace`` is ``True`` and - ``outputMode != im.mode``, a :exc:`PyCMSError` will be raised. - If an error occurs during application of the profiles, - a :exc:`PyCMSError` will be raised. - If ``outputMode`` is not a mode supported by the ``outputProfile`` (or by pyCMS), - a :exc:`PyCMSError` will be raised. - - This function applies an ICC transformation to im from ``inputProfile``'s - color space to ``outputProfile``'s color space using the specified rendering - intent to decide how to handle out-of-gamut colors. - - ``outputMode`` can be used to specify that a color mode conversion is to - be done using these profiles, but the specified profiles must be able - to handle that mode. I.e., if converting im from RGB to CMYK using - profiles, the input profile must handle RGB data, and the output - profile must handle CMYK data. 
- - :param im: An open :py:class:`~PIL.Image.Image` object (i.e. Image.new(...) - or Image.open(...), etc.) - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this image, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - profile you wish to use for this image, or a profile object - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param outputMode: A valid PIL mode for the output image (i.e. "RGB", - "CMYK", etc.). Note: if rendering the image "inPlace", outputMode - MUST be the same mode as the input, or omitted completely. If - omitted, the outputMode will be the same as the mode of the input - image (im.mode) - :param inPlace: Boolean. If ``True``, the original image is modified in-place, - and ``None`` is returned. If ``False`` (default), a new - :py:class:`~PIL.Image.Image` object is returned with the transform applied. - :param flags: Integer (0-...) 
specifying additional flags - :returns: Either None or a new :py:class:`~PIL.Image.Image` object, depending on - the value of ``inPlace`` - :exception PyCMSError: - """ - - if outputMode is None: - outputMode = im.mode - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = f"flags must be an integer between 0 and {_MAX_FLAG}" - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - transform = ImageCmsTransform( - inputProfile, - outputProfile, - im.mode, - outputMode, - renderingIntent, - flags=flags, - ) - if inPlace: - transform.apply_in_place(im) - imOut = None - else: - imOut = transform.apply(im) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - return imOut - - -def getOpenProfile( - profileFilename: str | SupportsRead[bytes] | core.CmsProfile, -) -> ImageCmsProfile: - """ - (pyCMS) Opens an ICC profile file. - - The PyCMSProfile object can be passed back into pyCMS for use in creating - transforms and such (as in ImageCms.buildTransformFromOpenProfiles()). - - If ``profileFilename`` is not a valid filename for an ICC profile, - a :exc:`PyCMSError` will be raised. - - :param profileFilename: String, as a valid filename path to the ICC profile - you wish to open, or a file-like object. - :returns: A CmsProfile class object. 
- :exception PyCMSError: - """ - - try: - return ImageCmsProfile(profileFilename) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def buildTransform( - inputProfile: _CmsProfileCompatible, - outputProfile: _CmsProfileCompatible, - inMode: str, - outMode: str, - renderingIntent: Intent = Intent.PERCEPTUAL, - flags: Flags = Flags.NONE, -) -> ImageCmsTransform: - """ - (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the - ``outputProfile``. Use applyTransform to apply the transform to a given - image. - - If the input or output profiles specified are not valid filenames, a - :exc:`PyCMSError` will be raised. If an error occurs during creation - of the transform, a :exc:`PyCMSError` will be raised. - - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. - - This function builds and returns an ICC transform from the ``inputProfile`` - to the ``outputProfile`` using the ``renderingIntent`` to determine what to do - with out-of-gamut colors. It will ONLY work for converting images that - are in ``inMode`` to images that are in ``outMode`` color format (PIL mode, - i.e. "RGB", "RGBA", "CMYK", etc.). - - Building the transform is a fair part of the overhead in - ImageCms.profileToProfile(), so if you're planning on converting multiple - images using the same input/output settings, this can save you time. - Once you have a transform object, it can be used with - ImageCms.applyProfile() to convert images without the need to re-compute - the lookup table for the transform. - - The reason pyCMS returns a class object rather than a handle directly - to the transform is that it needs to keep track of the PIL input/output - modes that the transform is meant for. 
These attributes are stored in - the ``inMode`` and ``outMode`` attributes of the object (which can be - manually overridden if you really want to, but I don't know of any - time that would be of use, or would even work). - - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this transform, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - profile you wish to use for this transform, or a profile object - :param inMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param outMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param flags: Integer (0-...) specifying additional flags - :returns: A CmsTransform class object. 
- :exception PyCMSError: - """ - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = f"flags must be an integer between 0 and {_MAX_FLAG}" - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - return ImageCmsTransform( - inputProfile, outputProfile, inMode, outMode, renderingIntent, flags=flags - ) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def buildProofTransform( - inputProfile: _CmsProfileCompatible, - outputProfile: _CmsProfileCompatible, - proofProfile: _CmsProfileCompatible, - inMode: str, - outMode: str, - renderingIntent: Intent = Intent.PERCEPTUAL, - proofRenderingIntent: Intent = Intent.ABSOLUTE_COLORIMETRIC, - flags: Flags = Flags.SOFTPROOFING, -) -> ImageCmsTransform: - """ - (pyCMS) Builds an ICC transform mapping from the ``inputProfile`` to the - ``outputProfile``, but tries to simulate the result that would be - obtained on the ``proofProfile`` device. - - If the input, output, or proof profiles specified are not valid - filenames, a :exc:`PyCMSError` will be raised. - - If an error occurs during creation of the transform, - a :exc:`PyCMSError` will be raised. - - If ``inMode`` or ``outMode`` are not a mode supported by the ``outputProfile`` - (or by pyCMS), a :exc:`PyCMSError` will be raised. - - This function builds and returns an ICC transform from the ``inputProfile`` - to the ``outputProfile``, but tries to simulate the result that would be - obtained on the ``proofProfile`` device using ``renderingIntent`` and - ``proofRenderingIntent`` to determine what to do with out-of-gamut - colors. This is known as "soft-proofing". 
It will ONLY work for - converting images that are in ``inMode`` to images that are in outMode - color format (PIL mode, i.e. "RGB", "RGBA", "CMYK", etc.). - - Usage of the resulting transform object is exactly the same as with - ImageCms.buildTransform(). - - Proof profiling is generally used when using an output device to get a - good idea of what the final printed/displayed image would look like on - the ``proofProfile`` device when it's quicker and easier to use the - output device for judging color. Generally, this means that the - output device is a monitor, or a dye-sub printer (etc.), and the simulated - device is something more expensive, complicated, or time consuming - (making it difficult to make a real print for color judgement purposes). - - Soft-proofing basically functions by adjusting the colors on the - output device to match the colors of the device being simulated. However, - when the simulated device has a much wider gamut than the output - device, you may obtain marginal results. - - :param inputProfile: String, as a valid filename path to the ICC input - profile you wish to use for this transform, or a profile object - :param outputProfile: String, as a valid filename path to the ICC output - (monitor, usually) profile you wish to use for this transform, or a - profile object - :param proofProfile: String, as a valid filename path to the ICC proof - profile you wish to use for this transform, or a profile object - :param inMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) - :param outMode: String, as a valid PIL mode that the appropriate profile - also supports (i.e. "RGB", "RGBA", "CMYK", etc.) 
- :param renderingIntent: Integer (0-3) specifying the rendering intent you - wish to use for the input->proof (simulated) transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param proofRenderingIntent: Integer (0-3) specifying the rendering intent - you wish to use for proof->output transform - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param flags: Integer (0-...) specifying additional flags - :returns: A CmsTransform class object. - :exception PyCMSError: - """ - - if not isinstance(renderingIntent, int) or not (0 <= renderingIntent <= 3): - msg = "renderingIntent must be an integer between 0 and 3" - raise PyCMSError(msg) - - if not isinstance(flags, int) or not (0 <= flags <= _MAX_FLAG): - msg = f"flags must be an integer between 0 and {_MAX_FLAG}" - raise PyCMSError(msg) - - try: - if not isinstance(inputProfile, ImageCmsProfile): - inputProfile = ImageCmsProfile(inputProfile) - if not isinstance(outputProfile, ImageCmsProfile): - outputProfile = ImageCmsProfile(outputProfile) - if not isinstance(proofProfile, ImageCmsProfile): - proofProfile = ImageCmsProfile(proofProfile) - return ImageCmsTransform( - inputProfile, - outputProfile, - inMode, - outMode, - renderingIntent, - proofProfile, - proofRenderingIntent, - flags, - ) - except (OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -buildTransformFromOpenProfiles = buildTransform -buildProofTransformFromOpenProfiles = buildProofTransform - - -def applyTransform( - im: Image.Image, transform: ImageCmsTransform, inPlace: bool = False -) -> Image.Image | None: - """ - (pyCMS) 
Applies a transform to a given image. - - If ``im.mode != transform.input_mode``, a :exc:`PyCMSError` is raised. - - If ``inPlace`` is ``True`` and ``transform.input_mode != transform.output_mode``, a - :exc:`PyCMSError` is raised. - - If ``im.mode``, ``transform.input_mode`` or ``transform.output_mode`` is not - supported by pyCMSdll or the profiles you used for the transform, a - :exc:`PyCMSError` is raised. - - If an error occurs while the transform is being applied, - a :exc:`PyCMSError` is raised. - - This function applies a pre-calculated transform (from - ImageCms.buildTransform() or ImageCms.buildTransformFromOpenProfiles()) - to an image. The transform can be used for multiple images, saving - considerable calculation time if doing the same conversion multiple times. - - If you want to modify im in-place instead of receiving a new image as - the return value, set ``inPlace`` to ``True``. This can only be done if - ``transform.input_mode`` and ``transform.output_mode`` are the same, because we - can't change the mode in-place (the buffer sizes for some modes are - different). The default behavior is to return a new :py:class:`~PIL.Image.Image` - object of the same dimensions in mode ``transform.output_mode``. - - :param im: An :py:class:`~PIL.Image.Image` object, and ``im.mode`` must be the same - as the ``input_mode`` supported by the transform. - :param transform: A valid CmsTransform class object - :param inPlace: Bool. If ``True``, ``im`` is modified in place and ``None`` is - returned, if ``False``, a new :py:class:`~PIL.Image.Image` object with the - transform applied is returned (and ``im`` is not changed). The default is - ``False``. - :returns: Either ``None``, or a new :py:class:`~PIL.Image.Image` object, - depending on the value of ``inPlace``. The profile will be returned in - the image's ``info['icc_profile']``. 
- :exception PyCMSError: - """ - - try: - if inPlace: - transform.apply_in_place(im) - imOut = None - else: - imOut = transform.apply(im) - except (TypeError, ValueError) as v: - raise PyCMSError(v) from v - - return imOut - - -def createProfile( - colorSpace: Literal["LAB", "XYZ", "sRGB"], colorTemp: SupportsFloat = 0 -) -> core.CmsProfile: - """ - (pyCMS) Creates a profile. - - If colorSpace not in ``["LAB", "XYZ", "sRGB"]``, - a :exc:`PyCMSError` is raised. - - If using LAB and ``colorTemp`` is not a positive integer, - a :exc:`PyCMSError` is raised. - - If an error occurs while creating the profile, - a :exc:`PyCMSError` is raised. - - Use this function to create common profiles on-the-fly instead of - having to supply a profile on disk and knowing the path to it. It - returns a normal CmsProfile object that can be passed to - ImageCms.buildTransformFromOpenProfiles() to create a transform to apply - to images. - - :param colorSpace: String, the color space of the profile you wish to - create. - Currently only "LAB", "XYZ", and "sRGB" are supported. - :param colorTemp: Positive number for the white point for the profile, in - degrees Kelvin (i.e. 5000, 6500, 9600, etc.). The default is for D50 - illuminant if omitted (5000k). colorTemp is ONLY applied to LAB - profiles, and is ignored for XYZ and sRGB. 
-    :returns: A CmsProfile class object
-    :exception PyCMSError:
-    """
-
-    if colorSpace not in ["LAB", "XYZ", "sRGB"]:
-        msg = (
-            f"Color space not supported for on-the-fly profile creation ({colorSpace})"
-        )
-        raise PyCMSError(msg)
-
-    if colorSpace == "LAB":
-        try:
-            colorTemp = float(colorTemp)
-        except (TypeError, ValueError) as e:
-            msg = f'Color temperature must be numeric, "{colorTemp}" not valid'
-            raise PyCMSError(msg) from e
-
-    try:
-        return core.createProfile(colorSpace, colorTemp)
-    except (TypeError, ValueError) as v:
-        raise PyCMSError(v) from v
-
-
-def getProfileName(profile: _CmsProfileCompatible) -> str:
-    """
-
-    (pyCMS) Gets the internal product name for the given profile.
-
-    If ``profile`` isn't a valid CmsProfile object or filename to a profile,
-    a :exc:`PyCMSError` is raised. If an error occurs while trying
-    to obtain the name tag, a :exc:`PyCMSError` is raised.
-
-    Use this function to obtain the INTERNAL name of the profile (stored
-    in an ICC tag in the profile itself), usually the one used when the
-    profile was originally created. Sometimes this tag also contains
-    additional information supplied by the creator.
-
-    :param profile: EITHER a valid CmsProfile object, OR a string of the
-        filename of an ICC profile.
-    :returns: A string containing the internal name of the profile as stored
-        in an ICC tag.
-    :exception PyCMSError:
-    """
-
-    try:
-        # add an extra newline to preserve pyCMS compatibility
-        if not isinstance(profile, ImageCmsProfile):
-            profile = ImageCmsProfile(profile)
-        # do it in python, not c.
- # // name was "%s - %s" (model, manufacturer) || Description , - # // but if the Model and Manufacturer were the same or the model - # // was long, Just the model, in 1.x - model = profile.profile.model - manufacturer = profile.profile.manufacturer - - if not (model or manufacturer): - return (profile.profile.profile_description or "") + "\n" - if not manufacturer or (model and len(model) > 30): - return f"{model}\n" - return f"{model} - {manufacturer}\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileInfo(profile: _CmsProfileCompatible) -> str: - """ - (pyCMS) Gets the internal product information for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, - a :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the info tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - info tag. This often contains details about the profile, and how it - was created, as supplied by the creator. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # add an extra newline to preserve pyCMS compatibility - # Python, not C. the white point bits weren't working well, - # so skipping. 
- # info was description \r\n\r\n copyright \r\n\r\n K007 tag \r\n\r\n whitepoint - description = profile.profile.profile_description - cpright = profile.profile.copyright - elements = [element for element in (description, cpright) if element] - return "\r\n\r\n".join(elements) + "\r\n\r\n" - - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileCopyright(profile: _CmsProfileCompatible) -> str: - """ - (pyCMS) Gets the copyright for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the copyright tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - copyright tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.copyright or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileManufacturer(profile: _CmsProfileCompatible) -> str: - """ - (pyCMS) Gets the manufacturer for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the manufacturer tag, a - :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - manufacturer tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. 
- :exception PyCMSError: - """ - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.manufacturer or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileModel(profile: _CmsProfileCompatible) -> str: - """ - (pyCMS) Gets the model for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the model tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - model tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in - an ICC tag. - :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.model or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getProfileDescription(profile: _CmsProfileCompatible) -> str: - """ - (pyCMS) Gets the description for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the description tag, - a :exc:`PyCMSError` is raised. - - Use this function to obtain the information stored in the profile's - description tag. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: A string containing the internal profile information stored in an - ICC tag. 
- :exception PyCMSError: - """ - - try: - # add an extra newline to preserve pyCMS compatibility - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return (profile.profile.profile_description or "") + "\n" - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def getDefaultIntent(profile: _CmsProfileCompatible) -> int: - """ - (pyCMS) Gets the default intent name for the given profile. - - If ``profile`` isn't a valid CmsProfile object or filename to a profile, a - :exc:`PyCMSError` is raised. - - If an error occurs while trying to obtain the default intent, a - :exc:`PyCMSError` is raised. - - Use this function to determine the default (and usually best optimized) - rendering intent for this profile. Most profiles support multiple - rendering intents, but are intended mostly for one type of conversion. - If you wish to use a different intent than returned, use - ImageCms.isIntentSupported() to verify it will work first. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :returns: Integer 0-3 specifying the default rendering intent for this - profile. - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - return profile.profile.rendering_intent - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v - - -def isIntentSupported( - profile: _CmsProfileCompatible, intent: Intent, direction: Direction -) -> Literal[-1, 1]: - """ - (pyCMS) Checks if a given intent is supported. 
- - Use this function to verify that you can use your desired - ``intent`` with ``profile``, and that ``profile`` can be used for the - input/output/proof profile as you desire. - - Some profiles are created specifically for one "direction", and cannot - be used for others. Some profiles can only be used for certain - rendering intents, so it's best to either verify this before trying - to create a transform with them (using this function), or catch the - potential :exc:`PyCMSError` that will occur if they don't - support the modes you select. - - :param profile: EITHER a valid CmsProfile object, OR a string of the - filename of an ICC profile. - :param intent: Integer (0-3) specifying the rendering intent you wish to - use with this profile - - ImageCms.Intent.PERCEPTUAL = 0 (DEFAULT) - ImageCms.Intent.RELATIVE_COLORIMETRIC = 1 - ImageCms.Intent.SATURATION = 2 - ImageCms.Intent.ABSOLUTE_COLORIMETRIC = 3 - - see the pyCMS documentation for details on rendering intents and what - they do. - :param direction: Integer specifying if the profile is to be used for - input, output, or proof - - INPUT = 0 (or use ImageCms.Direction.INPUT) - OUTPUT = 1 (or use ImageCms.Direction.OUTPUT) - PROOF = 2 (or use ImageCms.Direction.PROOF) - - :returns: 1 if the intent/direction are supported, -1 if they are not. - :exception PyCMSError: - """ - - try: - if not isinstance(profile, ImageCmsProfile): - profile = ImageCmsProfile(profile) - # FIXME: I get different results for the same data w. different - # compilers. Bug in LittleCMS or in the binding?
- if profile.profile.is_intent_supported(intent, direction): - return 1 - else: - return -1 - except (AttributeError, OSError, TypeError, ValueError) as v: - raise PyCMSError(v) from v diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageColor.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageColor.py deleted file mode 100644 index 9a15a8eb..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageColor.py +++ /dev/null @@ -1,320 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# map CSS3-style colour description strings to RGB -# -# History: -# 2002-10-24 fl Added support for CSS-style color strings -# 2002-12-15 fl Added RGBA support -# 2004-03-27 fl Fixed remaining int() problems for Python 1.5.2 -# 2004-07-19 fl Fixed gray/grey spelling issues -# 2009-03-05 fl Fixed rounding error in grayscale calculation -# -# Copyright (c) 2002-2004 by Secret Labs AB -# Copyright (c) 2002-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import re -from functools import lru_cache - -from . import Image - - -@lru_cache -def getrgb(color: str) -> tuple[int, int, int] | tuple[int, int, int, int]: - """ - Convert a color string to an RGB or RGBA tuple. If the string cannot be - parsed, this function raises a :py:exc:`ValueError` exception. - - .. 
versionadded:: 1.1.4 - - :param color: A color string - :return: ``(red, green, blue[, alpha])`` - """ - if len(color) > 100: - msg = "color specifier is too long" - raise ValueError(msg) - color = color.lower() - - rgb = colormap.get(color, None) - if rgb: - if isinstance(rgb, tuple): - return rgb - rgb_tuple = getrgb(rgb) - assert len(rgb_tuple) == 3 - colormap[color] = rgb_tuple - return rgb_tuple - - # check for known string formats - if re.match("#[a-f0-9]{3}$", color): - return int(color[1] * 2, 16), int(color[2] * 2, 16), int(color[3] * 2, 16) - - if re.match("#[a-f0-9]{4}$", color): - return ( - int(color[1] * 2, 16), - int(color[2] * 2, 16), - int(color[3] * 2, 16), - int(color[4] * 2, 16), - ) - - if re.match("#[a-f0-9]{6}$", color): - return int(color[1:3], 16), int(color[3:5], 16), int(color[5:7], 16) - - if re.match("#[a-f0-9]{8}$", color): - return ( - int(color[1:3], 16), - int(color[3:5], 16), - int(color[5:7], 16), - int(color[7:9], 16), - ) - - m = re.match(r"rgb\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color) - if m: - return int(m.group(1)), int(m.group(2)), int(m.group(3)) - - m = re.match(r"rgb\(\s*(\d+)%\s*,\s*(\d+)%\s*,\s*(\d+)%\s*\)$", color) - if m: - return ( - int((int(m.group(1)) * 255) / 100.0 + 0.5), - int((int(m.group(2)) * 255) / 100.0 + 0.5), - int((int(m.group(3)) * 255) / 100.0 + 0.5), - ) - - m = re.match( - r"hsl\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color - ) - if m: - from colorsys import hls_to_rgb - - rgb_floats = hls_to_rgb( - float(m.group(1)) / 360.0, - float(m.group(3)) / 100.0, - float(m.group(2)) / 100.0, - ) - return ( - int(rgb_floats[0] * 255 + 0.5), - int(rgb_floats[1] * 255 + 0.5), - int(rgb_floats[2] * 255 + 0.5), - ) - - m = re.match( - r"hs[bv]\(\s*(\d+\.?\d*)\s*,\s*(\d+\.?\d*)%\s*,\s*(\d+\.?\d*)%\s*\)$", color - ) - if m: - from colorsys import hsv_to_rgb - - rgb_floats = hsv_to_rgb( - float(m.group(1)) / 360.0, - float(m.group(2)) / 100.0, - float(m.group(3)) / 100.0, - ) - return ( 
- int(rgb_floats[0] * 255 + 0.5), - int(rgb_floats[1] * 255 + 0.5), - int(rgb_floats[2] * 255 + 0.5), - ) - - m = re.match(r"rgba\(\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*,\s*(\d+)\s*\)$", color) - if m: - return int(m.group(1)), int(m.group(2)), int(m.group(3)), int(m.group(4)) - msg = f"unknown color specifier: {repr(color)}" - raise ValueError(msg) - - -@lru_cache -def getcolor(color: str, mode: str) -> int | tuple[int, ...]: - """ - Same as :py:func:`~PIL.ImageColor.getrgb` for most modes. However, if - ``mode`` is HSV, converts the RGB value to a HSV value, or if ``mode`` is - not color or a palette image, converts the RGB value to a grayscale value. - If the string cannot be parsed, this function raises a :py:exc:`ValueError` - exception. - - .. versionadded:: 1.1.4 - - :param color: A color string - :param mode: Convert result to this mode - :return: ``graylevel, (graylevel, alpha) or (red, green, blue[, alpha])`` - """ - # same as getrgb, but converts the result to the given mode - rgb, alpha = getrgb(color), 255 - if len(rgb) == 4: - alpha = rgb[3] - rgb = rgb[:3] - - if mode == "HSV": - from colorsys import rgb_to_hsv - - r, g, b = rgb - h, s, v = rgb_to_hsv(r / 255, g / 255, b / 255) - return int(h * 255), int(s * 255), int(v * 255) - elif Image.getmodebase(mode) == "L": - r, g, b = rgb - # ITU-R Recommendation 601-2 for nonlinear RGB - # scaled to 24 bits to match the convert's implementation. - graylevel = (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16 - if mode[-1] == "A": - return graylevel, alpha - return graylevel - elif mode[-1] == "A": - return rgb + (alpha,) - return rgb - - -colormap: dict[str, str | tuple[int, int, int]] = { - # X11 colour table from https://drafts.csswg.org/css-color-4/, with - # gray/grey spelling issues fixed. This is a superset of HTML 4.0 - # colour names used in CSS 1. 
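For reference, the grayscale branch of `getcolor` above applies the ITU-R Recommendation 601-2 luma weights in 16-bit fixed point. A self-contained sketch of that computation (plain Python, no Pillow dependency; the helper name `luma601` is ours, not part of Pillow's API):

```python
def luma601(r: int, g: int, b: int) -> int:
    """Fixed-point ITU-R 601-2 luma, as in getcolor's "L" branch:
    19595/38470/7471 are 0.299/0.587/0.114 scaled by 2**16, and
    0x8000 provides rounding before the 16-bit shift."""
    return (r * 19595 + g * 38470 + b * 7471 + 0x8000) >> 16

red_gray = luma601(255, 0, 0)      # -> 76
white_gray = luma601(255, 255, 255)  # weights sum to 2**16, so -> 255
```

Because the three weights sum to exactly 2**16, pure gray inputs round-trip without loss, which is the point of the scaling chosen in the original code.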
- "aliceblue": "#f0f8ff", - "antiquewhite": "#faebd7", - "aqua": "#00ffff", - "aquamarine": "#7fffd4", - "azure": "#f0ffff", - "beige": "#f5f5dc", - "bisque": "#ffe4c4", - "black": "#000000", - "blanchedalmond": "#ffebcd", - "blue": "#0000ff", - "blueviolet": "#8a2be2", - "brown": "#a52a2a", - "burlywood": "#deb887", - "cadetblue": "#5f9ea0", - "chartreuse": "#7fff00", - "chocolate": "#d2691e", - "coral": "#ff7f50", - "cornflowerblue": "#6495ed", - "cornsilk": "#fff8dc", - "crimson": "#dc143c", - "cyan": "#00ffff", - "darkblue": "#00008b", - "darkcyan": "#008b8b", - "darkgoldenrod": "#b8860b", - "darkgray": "#a9a9a9", - "darkgrey": "#a9a9a9", - "darkgreen": "#006400", - "darkkhaki": "#bdb76b", - "darkmagenta": "#8b008b", - "darkolivegreen": "#556b2f", - "darkorange": "#ff8c00", - "darkorchid": "#9932cc", - "darkred": "#8b0000", - "darksalmon": "#e9967a", - "darkseagreen": "#8fbc8f", - "darkslateblue": "#483d8b", - "darkslategray": "#2f4f4f", - "darkslategrey": "#2f4f4f", - "darkturquoise": "#00ced1", - "darkviolet": "#9400d3", - "deeppink": "#ff1493", - "deepskyblue": "#00bfff", - "dimgray": "#696969", - "dimgrey": "#696969", - "dodgerblue": "#1e90ff", - "firebrick": "#b22222", - "floralwhite": "#fffaf0", - "forestgreen": "#228b22", - "fuchsia": "#ff00ff", - "gainsboro": "#dcdcdc", - "ghostwhite": "#f8f8ff", - "gold": "#ffd700", - "goldenrod": "#daa520", - "gray": "#808080", - "grey": "#808080", - "green": "#008000", - "greenyellow": "#adff2f", - "honeydew": "#f0fff0", - "hotpink": "#ff69b4", - "indianred": "#cd5c5c", - "indigo": "#4b0082", - "ivory": "#fffff0", - "khaki": "#f0e68c", - "lavender": "#e6e6fa", - "lavenderblush": "#fff0f5", - "lawngreen": "#7cfc00", - "lemonchiffon": "#fffacd", - "lightblue": "#add8e6", - "lightcoral": "#f08080", - "lightcyan": "#e0ffff", - "lightgoldenrodyellow": "#fafad2", - "lightgreen": "#90ee90", - "lightgray": "#d3d3d3", - "lightgrey": "#d3d3d3", - "lightpink": "#ffb6c1", - "lightsalmon": "#ffa07a", - "lightseagreen": "#20b2aa", 
- "lightskyblue": "#87cefa", - "lightslategray": "#778899", - "lightslategrey": "#778899", - "lightsteelblue": "#b0c4de", - "lightyellow": "#ffffe0", - "lime": "#00ff00", - "limegreen": "#32cd32", - "linen": "#faf0e6", - "magenta": "#ff00ff", - "maroon": "#800000", - "mediumaquamarine": "#66cdaa", - "mediumblue": "#0000cd", - "mediumorchid": "#ba55d3", - "mediumpurple": "#9370db", - "mediumseagreen": "#3cb371", - "mediumslateblue": "#7b68ee", - "mediumspringgreen": "#00fa9a", - "mediumturquoise": "#48d1cc", - "mediumvioletred": "#c71585", - "midnightblue": "#191970", - "mintcream": "#f5fffa", - "mistyrose": "#ffe4e1", - "moccasin": "#ffe4b5", - "navajowhite": "#ffdead", - "navy": "#000080", - "oldlace": "#fdf5e6", - "olive": "#808000", - "olivedrab": "#6b8e23", - "orange": "#ffa500", - "orangered": "#ff4500", - "orchid": "#da70d6", - "palegoldenrod": "#eee8aa", - "palegreen": "#98fb98", - "paleturquoise": "#afeeee", - "palevioletred": "#db7093", - "papayawhip": "#ffefd5", - "peachpuff": "#ffdab9", - "peru": "#cd853f", - "pink": "#ffc0cb", - "plum": "#dda0dd", - "powderblue": "#b0e0e6", - "purple": "#800080", - "rebeccapurple": "#663399", - "red": "#ff0000", - "rosybrown": "#bc8f8f", - "royalblue": "#4169e1", - "saddlebrown": "#8b4513", - "salmon": "#fa8072", - "sandybrown": "#f4a460", - "seagreen": "#2e8b57", - "seashell": "#fff5ee", - "sienna": "#a0522d", - "silver": "#c0c0c0", - "skyblue": "#87ceeb", - "slateblue": "#6a5acd", - "slategray": "#708090", - "slategrey": "#708090", - "snow": "#fffafa", - "springgreen": "#00ff7f", - "steelblue": "#4682b4", - "tan": "#d2b48c", - "teal": "#008080", - "thistle": "#d8bfd8", - "tomato": "#ff6347", - "turquoise": "#40e0d0", - "violet": "#ee82ee", - "wheat": "#f5deb3", - "white": "#ffffff", - "whitesmoke": "#f5f5f5", - "yellow": "#ffff00", - "yellowgreen": "#9acd32", -} diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw.py deleted file mode 100644 index 
8bcf2d8e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw.py +++ /dev/null @@ -1,1036 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# drawing interface operations -# -# History: -# 1996-04-13 fl Created (experimental) -# 1996-08-07 fl Filled polygons, ellipses. -# 1996-08-13 fl Added text support -# 1998-06-28 fl Handle I and F images -# 1998-12-29 fl Added arc; use arc primitive to draw ellipses -# 1999-01-10 fl Added shape stuff (experimental) -# 1999-02-06 fl Added bitmap support -# 1999-02-11 fl Changed all primitives to take options -# 1999-02-20 fl Fixed backwards compatibility -# 2000-10-12 fl Copy on write, when necessary -# 2001-02-18 fl Use default ink for bitmap/text also in fill mode -# 2002-10-24 fl Added support for CSS-style color strings -# 2002-12-10 fl Added experimental support for RGBA-on-RGB drawing -# 2002-12-11 fl Refactored low-level drawing API (work in progress) -# 2004-08-26 fl Made Draw() a factory function, added getdraw() support -# 2004-09-04 fl Added width support to line primitive -# 2004-09-10 fl Added font mode handling -# 2006-06-19 fl Added font bearing support (getmask2) -# -# Copyright (c) 1997-2006 by Secret Labs AB -# Copyright (c) 1996-2006 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import math -import struct -from collections.abc import Sequence -from typing import cast - -from . import Image, ImageColor, ImageText - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from types import ModuleType - from typing import Any, AnyStr - - from . import ImageDraw2, ImageFont - from ._typing import Coords, _Ink - -# experimental access to the outline API -Outline: Callable[[], Image.core._Outline] = Image.core.outline - -""" -A simple 2D drawing interface for PIL images. -

-Application code should use the Draw factory, instead of -directly. -""" - - -class ImageDraw: - font: ( - ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont | None - ) = None - - def __init__(self, im: Image.Image, mode: str | None = None) -> None: - """ - Create a drawing instance. - - :param im: The image to draw in. - :param mode: Optional mode to use for color values. For RGB - images, this argument can be RGB or RGBA (to blend the - drawing into the image). For all other modes, this argument - must be the same as the image mode. If omitted, the mode - defaults to the mode of the image. - """ - im._ensure_mutable() - blend = 0 - if mode is None: - mode = im.mode - if mode != im.mode: - if mode == "RGBA" and im.mode == "RGB": - blend = 1 - else: - msg = "mode mismatch" - raise ValueError(msg) - if mode == "P": - self.palette = im.palette - else: - self.palette = None - self._image = im - self.im = im.im - self.draw = Image.core.draw(self.im, blend) - self.mode = mode - if mode in ("I", "F"): - self.ink = self.draw.draw_ink(1) - else: - self.ink = self.draw.draw_ink(-1) - if mode in ("1", "P", "I", "F"): - # FIXME: fix Fill2 to properly support matte for I+F images - self.fontmode = "1" - else: - self.fontmode = "L" # aliasing is okay for other modes - self.fill = False - - def getfont( - self, - ) -> ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont: - """ - Get the current default font. - - To set the default font for this ImageDraw instance:: - - from PIL import ImageDraw, ImageFont - draw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf") - - To set the default font for all future ImageDraw instances:: - - from PIL import ImageDraw, ImageFont - ImageDraw.ImageDraw.font = ImageFont.truetype("Tests/fonts/FreeMono.ttf") - - If the current default font is ``None``, - it is initialized with ``ImageFont.load_default()``. 
- - :returns: An image font.""" - if not self.font: - # FIXME: should add a font repository - from . import ImageFont - - self.font = ImageFont.load_default() - return self.font - - def _getfont( - self, font_size: float | None - ) -> ImageFont.ImageFont | ImageFont.FreeTypeFont | ImageFont.TransposedFont: - if font_size is not None: - from . import ImageFont - - return ImageFont.load_default(font_size) - else: - return self.getfont() - - def _getink( - self, ink: _Ink | None, fill: _Ink | None = None - ) -> tuple[int | None, int | None]: - result_ink = None - result_fill = None - if ink is None and fill is None: - if self.fill: - result_fill = self.ink - else: - result_ink = self.ink - else: - if ink is not None: - if isinstance(ink, str): - ink = ImageColor.getcolor(ink, self.mode) - if self.palette and isinstance(ink, tuple): - ink = self.palette.getcolor(ink, self._image) - result_ink = self.draw.draw_ink(ink) - if fill is not None: - if isinstance(fill, str): - fill = ImageColor.getcolor(fill, self.mode) - if self.palette and isinstance(fill, tuple): - fill = self.palette.getcolor(fill, self._image) - result_fill = self.draw.draw_ink(fill) - return result_ink, result_fill - - def arc( - self, - xy: Coords, - start: float, - end: float, - fill: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw an arc.""" - ink, fill = self._getink(fill) - if ink is not None: - self.draw.draw_arc(xy, start, end, ink, width) - - def bitmap( - self, xy: Sequence[int], bitmap: Image.Image, fill: _Ink | None = None - ) -> None: - """Draw a bitmap.""" - bitmap.load() - ink, fill = self._getink(fill) - if ink is None: - ink = fill - if ink is not None: - self.draw.draw_bitmap(xy, bitmap.im, ink) - - def chord( - self, - xy: Coords, - start: float, - end: float, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a chord.""" - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_chord(xy, 
start, end, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - self.draw.draw_chord(xy, start, end, ink, 0, width) - - def ellipse( - self, - xy: Coords, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw an ellipse.""" - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_ellipse(xy, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - self.draw.draw_ellipse(xy, ink, 0, width) - - def circle( - self, - xy: Sequence[float], - radius: float, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a circle given center coordinates and a radius.""" - ellipse_xy = (xy[0] - radius, xy[1] - radius, xy[0] + radius, xy[1] + radius) - self.ellipse(ellipse_xy, fill, outline, width) - - def line( - self, - xy: Coords, - fill: _Ink | None = None, - width: int = 0, - joint: str | None = None, - ) -> None: - """Draw a line, or a connected sequence of line segments.""" - ink = self._getink(fill)[0] - if ink is not None: - self.draw.draw_lines(xy, ink, width) - if joint == "curve" and width > 4: - points: Sequence[Sequence[float]] - if isinstance(xy[0], (list, tuple)): - points = cast(Sequence[Sequence[float]], xy) - else: - points = [ - cast(Sequence[float], tuple(xy[i : i + 2])) - for i in range(0, len(xy), 2) - ] - for i in range(1, len(points) - 1): - point = points[i] - angles = [ - math.degrees(math.atan2(end[0] - start[0], start[1] - end[1])) - % 360 - for start, end in ( - (points[i - 1], point), - (point, points[i + 1]), - ) - ] - if angles[0] == angles[1]: - # This is a straight line, so no joint is required - continue - - def coord_at_angle( - coord: Sequence[float], angle: float - ) -> tuple[float, ...]: - x, y = coord - angle -= 90 - distance = width / 2 - 1 - return tuple( - p + (math.floor(p_d) if p_d > 0 else math.ceil(p_d)) - for p, p_d in ( - (x, distance * math.cos(math.radians(angle))), - (y, 
distance * math.sin(math.radians(angle))), - ) - ) - - flipped = ( - angles[1] > angles[0] and angles[1] - 180 > angles[0] - ) or (angles[1] < angles[0] and angles[1] + 180 > angles[0]) - coords = [ - (point[0] - width / 2 + 1, point[1] - width / 2 + 1), - (point[0] + width / 2 - 1, point[1] + width / 2 - 1), - ] - if flipped: - start, end = (angles[1] + 90, angles[0] + 90) - else: - start, end = (angles[0] - 90, angles[1] - 90) - self.pieslice(coords, start - 90, end - 90, fill) - - if width > 8: - # Cover potential gaps between the line and the joint - if flipped: - gap_coords = [ - coord_at_angle(point, angles[0] + 90), - point, - coord_at_angle(point, angles[1] + 90), - ] - else: - gap_coords = [ - coord_at_angle(point, angles[0] - 90), - point, - coord_at_angle(point, angles[1] - 90), - ] - self.line(gap_coords, fill, width=3) - - def shape( - self, - shape: Image.core._Outline, - fill: _Ink | None = None, - outline: _Ink | None = None, - ) -> None: - """(Experimental) Draw a shape.""" - shape.close() - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_outline(shape, fill_ink, 1) - if ink is not None and ink != fill_ink: - self.draw.draw_outline(shape, ink, 0) - - def pieslice( - self, - xy: Coords, - start: float, - end: float, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a pieslice.""" - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_pieslice(xy, start, end, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - self.draw.draw_pieslice(xy, start, end, ink, 0, width) - - def point(self, xy: Coords, fill: _Ink | None = None) -> None: - """Draw one or more individual pixels.""" - ink, fill = self._getink(fill) - if ink is not None: - self.draw.draw_points(xy, ink) - - def polygon( - self, - xy: Coords, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a 
polygon.""" - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_polygon(xy, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - if width == 1: - self.draw.draw_polygon(xy, ink, 0, width) - elif self.im is not None: - # To avoid expanding the polygon outwards, - # use the fill as a mask - mask = Image.new("1", self.im.size) - mask_ink = self._getink(1)[0] - draw = Draw(mask) - draw.draw.draw_polygon(xy, mask_ink, 1) - - self.draw.draw_polygon(xy, ink, 0, width * 2 - 1, mask.im) - - def regular_polygon( - self, - bounding_circle: Sequence[Sequence[float] | float], - n_sides: int, - rotation: float = 0, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a regular polygon.""" - xy = _compute_regular_polygon_vertices(bounding_circle, n_sides, rotation) - self.polygon(xy, fill, outline, width) - - def rectangle( - self, - xy: Coords, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - ) -> None: - """Draw a rectangle.""" - ink, fill_ink = self._getink(outline, fill) - if fill_ink is not None: - self.draw.draw_rectangle(xy, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - self.draw.draw_rectangle(xy, ink, 0, width) - - def rounded_rectangle( - self, - xy: Coords, - radius: float = 0, - fill: _Ink | None = None, - outline: _Ink | None = None, - width: int = 1, - *, - corners: tuple[bool, bool, bool, bool] | None = None, - ) -> None: - """Draw a rounded rectangle.""" - if isinstance(xy[0], (list, tuple)): - (x0, y0), (x1, y1) = cast(Sequence[Sequence[float]], xy) - else: - x0, y0, x1, y1 = cast(Sequence[float], xy) - if x1 < x0: - msg = "x1 must be greater than or equal to x0" - raise ValueError(msg) - if y1 < y0: - msg = "y1 must be greater than or equal to y0" - raise ValueError(msg) - if corners is None: - corners = (True, True, True, True) - - d = radius * 2 - - x0 = round(x0) - y0 = round(y0) - x1 = round(x1) - y1 
= round(y1) - full_x, full_y = False, False - if all(corners): - full_x = d >= x1 - x0 - 1 - if full_x: - # The two left and two right corners are joined - d = x1 - x0 - full_y = d >= y1 - y0 - 1 - if full_y: - # The two top and two bottom corners are joined - d = y1 - y0 - if full_x and full_y: - # If all corners are joined, that is a circle - return self.ellipse(xy, fill, outline, width) - - if d == 0 or not any(corners): - # If the corners have no curve, - # or there are no corners, - # that is a rectangle - return self.rectangle(xy, fill, outline, width) - - r = int(d // 2) - ink, fill_ink = self._getink(outline, fill) - - def draw_corners(pieslice: bool) -> None: - parts: tuple[tuple[tuple[float, float, float, float], int, int], ...] - if full_x: - # Draw top and bottom halves - parts = ( - ((x0, y0, x0 + d, y0 + d), 180, 360), - ((x0, y1 - d, x0 + d, y1), 0, 180), - ) - elif full_y: - # Draw left and right halves - parts = ( - ((x0, y0, x0 + d, y0 + d), 90, 270), - ((x1 - d, y0, x1, y0 + d), 270, 90), - ) - else: - # Draw four separate corners - parts = tuple( - part - for i, part in enumerate( - ( - ((x0, y0, x0 + d, y0 + d), 180, 270), - ((x1 - d, y0, x1, y0 + d), 270, 360), - ((x1 - d, y1 - d, x1, y1), 0, 90), - ((x0, y1 - d, x0 + d, y1), 90, 180), - ) - ) - if corners[i] - ) - for part in parts: - if pieslice: - self.draw.draw_pieslice(*(part + (fill_ink, 1))) - else: - self.draw.draw_arc(*(part + (ink, width))) - - if fill_ink is not None: - draw_corners(True) - - if full_x: - self.draw.draw_rectangle((x0, y0 + r + 1, x1, y1 - r - 1), fill_ink, 1) - elif x1 - r - 1 > x0 + r + 1: - self.draw.draw_rectangle((x0 + r + 1, y0, x1 - r - 1, y1), fill_ink, 1) - if not full_x and not full_y: - left = [x0, y0, x0 + r, y1] - if corners[0]: - left[1] += r + 1 - if corners[3]: - left[3] -= r + 1 - self.draw.draw_rectangle(left, fill_ink, 1) - - right = [x1 - r, y0, x1, y1] - if corners[1]: - right[1] += r + 1 - if corners[2]: - right[3] -= r + 1 - 
self.draw.draw_rectangle(right, fill_ink, 1) - if ink is not None and ink != fill_ink and width != 0: - draw_corners(False) - - if not full_x: - top = [x0, y0, x1, y0 + width - 1] - if corners[0]: - top[0] += r + 1 - if corners[1]: - top[2] -= r + 1 - self.draw.draw_rectangle(top, ink, 1) - - bottom = [x0, y1 - width + 1, x1, y1] - if corners[3]: - bottom[0] += r + 1 - if corners[2]: - bottom[2] -= r + 1 - self.draw.draw_rectangle(bottom, ink, 1) - if not full_y: - left = [x0, y0, x0 + width - 1, y1] - if corners[0]: - left[1] += r + 1 - if corners[3]: - left[3] -= r + 1 - self.draw.draw_rectangle(left, ink, 1) - - right = [x1 - width + 1, y0, x1, y1] - if corners[1]: - right[1] += r + 1 - if corners[2]: - right[3] -= r + 1 - self.draw.draw_rectangle(right, ink, 1) - - def text( - self, - xy: tuple[float, float], - text: AnyStr | ImageText.Text, - fill: _Ink | None = None, - font: ( - ImageFont.ImageFont - | ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - anchor: str | None = None, - spacing: float = 4, - align: str = "left", - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - stroke_width: float = 0, - stroke_fill: _Ink | None = None, - embedded_color: bool = False, - *args: Any, - **kwargs: Any, - ) -> None: - """Draw text.""" - if isinstance(text, ImageText.Text): - image_text = text - else: - if font is None: - font = self._getfont(kwargs.get("font_size")) - image_text = ImageText.Text( - text, font, self.mode, spacing, direction, features, language - ) - if embedded_color: - image_text.embed_color() - if stroke_width: - image_text.stroke(stroke_width, stroke_fill) - - def getink(fill: _Ink | None) -> int: - ink, fill_ink = self._getink(fill) - if ink is None: - assert fill_ink is not None - return fill_ink - return ink - - ink = getink(fill) - if ink is None: - return - - stroke_ink = None - if image_text.stroke_width: - stroke_ink = ( - getink(image_text.stroke_fill) - if 
image_text.stroke_fill is not None - else ink - ) - - for xy, anchor, line in image_text._split(xy, anchor, align): - - def draw_text(ink: int, stroke_width: float = 0) -> None: - mode = self.fontmode - if stroke_width == 0 and embedded_color: - mode = "RGBA" - coord = [] - for i in range(2): - coord.append(int(xy[i])) - start = (math.modf(xy[0])[0], math.modf(xy[1])[0]) - try: - mask, offset = image_text.font.getmask2( # type: ignore[union-attr,misc] - line, - mode, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - stroke_filled=True, - anchor=anchor, - ink=ink, - start=start, - *args, - **kwargs, - ) - coord = [coord[0] + offset[0], coord[1] + offset[1]] - except AttributeError: - try: - mask = image_text.font.getmask( # type: ignore[misc] - line, - mode, - direction, - features, - language, - stroke_width, - anchor, - ink, - start=start, - *args, - **kwargs, - ) - except TypeError: - mask = image_text.font.getmask(line) - if mode == "RGBA": - # image_text.font.getmask2(mode="RGBA") - # returns color in RGB bands and mask in A - # extract mask and set text alpha - color, mask = mask, mask.getband(3) - ink_alpha = struct.pack("i", ink)[3] - color.fillband(3, ink_alpha) - x, y = coord - if self.im is not None: - self.im.paste( - color, (x, y, x + mask.size[0], y + mask.size[1]), mask - ) - else: - self.draw.draw_bitmap(coord, mask, ink) - - if stroke_ink is not None: - # Draw stroked text - draw_text(stroke_ink, image_text.stroke_width) - - # Draw normal text - if ink != stroke_ink: - draw_text(ink) - else: - # Only draw normal text - draw_text(ink) - - def multiline_text( - self, - xy: tuple[float, float], - text: AnyStr, - fill: _Ink | None = None, - font: ( - ImageFont.ImageFont - | ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - anchor: str | None = None, - spacing: float = 4, - align: str = "left", - direction: str | None = None, - features: list[str] | None = None, - language: str | None 
= None, - stroke_width: float = 0, - stroke_fill: _Ink | None = None, - embedded_color: bool = False, - *, - font_size: float | None = None, - ) -> None: - return self.text( - xy, - text, - fill, - font, - anchor, - spacing, - align, - direction, - features, - language, - stroke_width, - stroke_fill, - embedded_color, - font_size=font_size, - ) - - def textlength( - self, - text: AnyStr, - font: ( - ImageFont.ImageFont - | ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - embedded_color: bool = False, - *, - font_size: float | None = None, - ) -> float: - """Get the length of a given string, in pixels with 1/64 precision.""" - if font is None: - font = self._getfont(font_size) - image_text = ImageText.Text( - text, - font, - self.mode, - direction=direction, - features=features, - language=language, - ) - if embedded_color: - image_text.embed_color() - return image_text.get_length() - - def textbbox( - self, - xy: tuple[float, float], - text: AnyStr, - font: ( - ImageFont.ImageFont - | ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - anchor: str | None = None, - spacing: float = 4, - align: str = "left", - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - stroke_width: float = 0, - embedded_color: bool = False, - *, - font_size: float | None = None, - ) -> tuple[float, float, float, float]: - """Get the bounding box of a given string, in pixels.""" - if font is None: - font = self._getfont(font_size) - image_text = ImageText.Text( - text, font, self.mode, spacing, direction, features, language - ) - if embedded_color: - image_text.embed_color() - if stroke_width: - image_text.stroke(stroke_width) - return image_text.get_bbox(xy, anchor, align) - - def multiline_textbbox( - self, - xy: tuple[float, float], - text: AnyStr, - font: ( - ImageFont.ImageFont - | 
ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - anchor: str | None = None, - spacing: float = 4, - align: str = "left", - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - stroke_width: float = 0, - embedded_color: bool = False, - *, - font_size: float | None = None, - ) -> tuple[float, float, float, float]: - return self.textbbox( - xy, - text, - font, - anchor, - spacing, - align, - direction, - features, - language, - stroke_width, - embedded_color, - font_size=font_size, - ) - - -def Draw(im: Image.Image, mode: str | None = None) -> ImageDraw: - """ - A simple 2D drawing interface for PIL images. - - :param im: The image to draw in. - :param mode: Optional mode to use for color values. For RGB - images, this argument can be RGB or RGBA (to blend the - drawing into the image). For all other modes, this argument - must be the same as the image mode. If omitted, the mode - defaults to the mode of the image. - """ - try: - return getattr(im, "getdraw")(mode) - except AttributeError: - return ImageDraw(im, mode) - - -def getdraw(im: Image.Image | None = None) -> tuple[ImageDraw2.Draw | None, ModuleType]: - """ - :param im: The image to draw in. - :returns: A (drawing context, drawing resource factory) tuple. - """ - from . import ImageDraw2 - - draw = ImageDraw2.Draw(im) if im is not None else None - return draw, ImageDraw2 - - -def floodfill( - image: Image.Image, - xy: tuple[int, int], - value: float | tuple[int, ...], - border: float | tuple[int, ...] | None = None, - thresh: float = 0, -) -> None: - """ - .. warning:: This method is experimental. - - Fills a bounded region with a given color. - - :param image: Target image. - :param xy: Seed position (a 2-item coordinate tuple). See - :ref:`coordinate-system`. - :param value: Fill color. - :param border: Optional border value. If given, the region consists of - pixels with a color different from the border color. 
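The `floodfill` loop above is a breadth-first 4-connected fill that keeps only the previous ring of edge pixels for deduplication. A minimal standalone sketch of the same idea on a nested-list grid (the names `flood` and `grid` are illustrative, not Pillow API; this covers only the `border=None`, `thresh=0` case):

```python
def flood(grid: list[list[int]], xy: tuple[int, int], value: int) -> None:
    """Breadth-first 4-connected flood fill, mirroring the edge/new_edge
    sets used by PIL.ImageDraw.floodfill (border=None, thresh=0 case)."""
    x, y = xy
    background = grid[y][x]
    if background == value:
        return  # seed point already has the fill colour
    grid[y][x] = value
    edge = {(x, y)}
    full_edge: set[tuple[int, int]] = set()
    while edge:
        new_edge = set()
        for x, y in edge:  # 4-adjacent neighbours only (no diagonals)
            for s, t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (s, t) in full_edge or not (
                    0 <= t < len(grid) and 0 <= s < len(grid[0])
                ):
                    continue
                full_edge.add((s, t))
                if grid[t][s] == background:
                    grid[t][s] = value
                    new_edge.add((s, t))
        full_edge = edge  # keep only the previous ring, as in the original
        edge = new_edge

grid = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
flood(grid, (0, 0), 9)  # fills only the region 4-connected to the seed
```

Note that the diagonal zeros stay unfilled: 4-connectivity does not cross the line of 1s, matching the "4 adjacent method" comment in the original.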
If not given, - the region consists of pixels having the same color as the seed - pixel. - :param thresh: Optional threshold value which specifies a maximum - tolerable difference of a pixel value from the 'background' in - order for it to be replaced. Useful for filling regions of - non-homogeneous, but similar, colors. - """ - # based on an implementation by Eric S. Raymond - # amended by yo1995 @20180806 - pixel = image.load() - assert pixel is not None - x, y = xy - try: - background = pixel[x, y] - if _color_diff(value, background) <= thresh: - return # seed point already has fill color - pixel[x, y] = value - except (ValueError, IndexError): - return # seed point outside image - edge = {(x, y)} - # use a set to keep record of current and previous edge pixels - # to reduce memory consumption - full_edge = set() - while edge: - new_edge = set() - for x, y in edge: # 4 adjacent method - for s, t in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)): - # If already processed, or if a coordinate is negative, skip - if (s, t) in full_edge or s < 0 or t < 0: - continue - try: - p = pixel[s, t] - except (ValueError, IndexError): - pass - else: - full_edge.add((s, t)) - if border is None: - fill = _color_diff(p, background) <= thresh - else: - fill = p not in (value, border) - if fill: - pixel[s, t] = value - new_edge.add((s, t)) - full_edge = edge # discard pixels processed - edge = new_edge - - -def _compute_regular_polygon_vertices( - bounding_circle: Sequence[Sequence[float] | float], n_sides: int, rotation: float -) -> list[tuple[float, float]]: - """ - Generate a list of vertices for a 2D regular polygon. - - :param bounding_circle: The bounding circle is a sequence defined - by a point and radius. The polygon is inscribed in this circle. - (e.g. ``bounding_circle=(x, y, r)`` or ``((x, y), r)``) - :param n_sides: Number of sides - (e.g. ``n_sides=3`` for a triangle, ``6`` for a hexagon) - :param rotation: Apply an arbitrary rotation to the polygon - (e.g. 
``rotation=90``, applies a 90 degree rotation) - :return: List of regular polygon vertices - (e.g. ``[(25, 50), (50, 50), (50, 25), (25, 25)]``) - - How are the vertices computed? - 1. Compute the following variables - - theta: Angle between the apothem & the nearest polygon vertex - - side_length: Length of each polygon edge - - centroid: Center of bounding circle (1st, 2nd elements of bounding_circle) - - polygon_radius: Polygon radius (last element of bounding_circle) - - angles: Location of each polygon vertex in polar grid - (e.g. A square with 0 degree rotation => [225.0, 315.0, 45.0, 135.0]) - - 2. For each angle in angles, get the polygon vertex at that angle - The vertex is computed using the equation below. - X= xcos(Ο†) + ysin(Ο†) - Y= βˆ’xsin(Ο†) + ycos(Ο†) - - Note: - Ο† = angle in degrees - x = 0 - y = polygon_radius - - The formula above assumes rotation around the origin. - In our case, we are rotating around the centroid. - To account for this, we use the formula below - X = xcos(Ο†) + ysin(Ο†) + centroid_x - Y = βˆ’xsin(Ο†) + ycos(Ο†) + centroid_y - """ - # 1. 
Error Handling - # 1.1 Check `n_sides` has an appropriate value - if not isinstance(n_sides, int): - msg = "n_sides should be an int" # type: ignore[unreachable] - raise TypeError(msg) - if n_sides < 3: - msg = "n_sides should be an int > 2" - raise ValueError(msg) - - # 1.2 Check `bounding_circle` has an appropriate value - if not isinstance(bounding_circle, (list, tuple)): - msg = "bounding_circle should be a sequence" - raise TypeError(msg) - - if len(bounding_circle) == 3: - if not all(isinstance(i, (int, float)) for i in bounding_circle): - msg = "bounding_circle should only contain numeric data" - raise ValueError(msg) - - *centroid, polygon_radius = cast(list[float], list(bounding_circle)) - elif len(bounding_circle) == 2 and isinstance(bounding_circle[0], (list, tuple)): - if not all( - isinstance(i, (int, float)) for i in bounding_circle[0] - ) or not isinstance(bounding_circle[1], (int, float)): - msg = "bounding_circle should only contain numeric data" - raise ValueError(msg) - - if len(bounding_circle[0]) != 2: - msg = "bounding_circle centre should contain 2D coordinates (e.g. (x, y))" - raise ValueError(msg) - - centroid = cast(list[float], list(bounding_circle[0])) - polygon_radius = cast(float, bounding_circle[1]) - else: - msg = ( - "bounding_circle should contain 2D coordinates " - "and a radius (e.g. (x, y, r) or ((x, y), r) )" - ) - raise ValueError(msg) - - if polygon_radius <= 0: - msg = "bounding_circle radius should be > 0" - raise ValueError(msg) - - # 1.3 Check `rotation` has an appropriate value - if not isinstance(rotation, (int, float)): - msg = "rotation should be an int or float" # type: ignore[unreachable] - raise ValueError(msg) - - # 2. 
Define Helper Functions - def _apply_rotation(point: list[float], degrees: float) -> tuple[float, float]: - return ( - round( - point[0] * math.cos(math.radians(360 - degrees)) - - point[1] * math.sin(math.radians(360 - degrees)) - + centroid[0], - 2, - ), - round( - point[1] * math.cos(math.radians(360 - degrees)) - + point[0] * math.sin(math.radians(360 - degrees)) - + centroid[1], - 2, - ), - ) - - def _compute_polygon_vertex(angle: float) -> tuple[float, float]: - start_point = [polygon_radius, 0] - return _apply_rotation(start_point, angle) - - def _get_angles(n_sides: int, rotation: float) -> list[float]: - angles = [] - degrees = 360 / n_sides - # Start with the bottom left polygon vertex - current_angle = (270 - 0.5 * degrees) + rotation - for _ in range(n_sides): - angles.append(current_angle) - current_angle += degrees - if current_angle > 360: - current_angle -= 360 - return angles - - # 3. Variable Declarations - angles = _get_angles(n_sides, rotation) - - # 4. Compute Vertices - return [_compute_polygon_vertex(angle) for angle in angles] - - -def _color_diff( - color1: float | tuple[int, ...], color2: float | tuple[int, ...] -) -> float: - """ - Uses 1-norm distance to calculate difference between two values. 
- """ - first = color1 if isinstance(color1, tuple) else (color1,) - second = color2 if isinstance(color2, tuple) else (color2,) - - return sum(abs(first[i] - second[i]) for i in range(len(second))) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw2.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw2.py deleted file mode 100644 index 3d68658e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageDraw2.py +++ /dev/null @@ -1,243 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# WCK-style drawing interface operations -# -# History: -# 2003-12-07 fl created -# 2005-05-15 fl updated; added to PIL as ImageDraw2 -# 2005-05-15 fl added text support -# 2005-05-20 fl added arc/chord/pieslice support -# -# Copyright (c) 2003-2005 by Secret Labs AB -# Copyright (c) 2003-2005 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - - -""" -(Experimental) WCK-style drawing interface operations - -.. seealso:: :py:mod:`PIL.ImageDraw` -""" -from __future__ import annotations - -from typing import Any, AnyStr, BinaryIO - -from . 
import Image, ImageColor, ImageDraw, ImageFont, ImagePath -from ._typing import Coords, StrOrBytesPath - - -class Pen: - """Stores an outline color and width.""" - - def __init__(self, color: str, width: int = 1, opacity: int = 255) -> None: - self.color = ImageColor.getrgb(color) - self.width = width - - -class Brush: - """Stores a fill color""" - - def __init__(self, color: str, opacity: int = 255) -> None: - self.color = ImageColor.getrgb(color) - - -class Font: - """Stores a TrueType font and color""" - - def __init__( - self, color: str, file: StrOrBytesPath | BinaryIO, size: float = 12 - ) -> None: - # FIXME: add support for bitmap fonts - self.color = ImageColor.getrgb(color) - self.font = ImageFont.truetype(file, size) - - -class Draw: - """ - (Experimental) WCK-style drawing interface - """ - - def __init__( - self, - image: Image.Image | str, - size: tuple[int, int] | list[int] | None = None, - color: float | tuple[float, ...] | str | None = None, - ) -> None: - if isinstance(image, str): - if size is None: - msg = "If image argument is mode string, size must be a list or tuple" - raise ValueError(msg) - image = Image.new(image, size, color) - self.draw = ImageDraw.Draw(image) - self.image = image - self.transform: tuple[float, float, float, float, float, float] | None = None - - def flush(self) -> Image.Image: - return self.image - - def render( - self, - op: str, - xy: Coords, - pen: Pen | Brush | None, - brush: Brush | Pen | None = None, - **kwargs: Any, - ) -> None: - # handle color arguments - outline = fill = None - width = 1 - if isinstance(pen, Pen): - outline = pen.color - width = pen.width - elif isinstance(brush, Pen): - outline = brush.color - width = brush.width - if isinstance(brush, Brush): - fill = brush.color - elif isinstance(pen, Brush): - fill = pen.color - # handle transformation - if self.transform: - path = ImagePath.Path(xy) - path.transform(self.transform) - xy = path - # render the item - if op in ("arc", "line"): - 
kwargs.setdefault("fill", outline) - else: - kwargs.setdefault("fill", fill) - kwargs.setdefault("outline", outline) - if op == "line": - kwargs.setdefault("width", width) - getattr(self.draw, op)(xy, **kwargs) - - def settransform(self, offset: tuple[float, float]) -> None: - """Sets a transformation offset.""" - (xoffset, yoffset) = offset - self.transform = (1, 0, xoffset, 0, 1, yoffset) - - def arc( - self, - xy: Coords, - pen: Pen | Brush | None, - start: float, - end: float, - *options: Any, - ) -> None: - """ - Draws an arc (a portion of a circle outline) between the start and end - angles, inside the given bounding box. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.arc` - """ - self.render("arc", xy, pen, *options, start=start, end=end) - - def chord( - self, - xy: Coords, - pen: Pen | Brush | None, - start: float, - end: float, - *options: Any, - ) -> None: - """ - Same as :py:meth:`~PIL.ImageDraw2.Draw.arc`, but connects the end points - with a straight line. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.chord` - """ - self.render("chord", xy, pen, *options, start=start, end=end) - - def ellipse(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None: - """ - Draws an ellipse inside the given bounding box. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.ellipse` - """ - self.render("ellipse", xy, pen, *options) - - def line(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None: - """ - Draws a line between the coordinates in the ``xy`` list. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.line` - """ - self.render("line", xy, pen, *options) - - def pieslice( - self, - xy: Coords, - pen: Pen | Brush | None, - start: float, - end: float, - *options: Any, - ) -> None: - """ - Same as arc, but also draws straight lines between the end points and the - center of the bounding box. - - .. 
seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.pieslice` - """ - self.render("pieslice", xy, pen, *options, start=start, end=end) - - def polygon(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None: - """ - Draws a polygon. - - The polygon outline consists of straight lines between the given - coordinates, plus a straight line between the last and the first - coordinate. - - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.polygon` - """ - self.render("polygon", xy, pen, *options) - - def rectangle(self, xy: Coords, pen: Pen | Brush | None, *options: Any) -> None: - """ - Draws a rectangle. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.rectangle` - """ - self.render("rectangle", xy, pen, *options) - - def text(self, xy: tuple[float, float], text: AnyStr, font: Font) -> None: - """ - Draws the string at the given position. - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.text` - """ - if self.transform: - path = ImagePath.Path(xy) - path.transform(self.transform) - xy = path - self.draw.text(xy, text, font=font.font, fill=font.color) - - def textbbox( - self, xy: tuple[float, float], text: AnyStr, font: Font - ) -> tuple[float, float, float, float]: - """ - Returns bounding box (in pixels) of given text. - - :return: ``(left, top, right, bottom)`` bounding box - - .. seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.textbbox` - """ - if self.transform: - path = ImagePath.Path(xy) - path.transform(self.transform) - xy = path - return self.draw.textbbox(xy, text, font=font.font) - - def textlength(self, text: AnyStr, font: Font) -> float: - """ - Returns length (in pixels) of given text. - This is the amount by which following text should be offset. - - .. 
seealso:: :py:meth:`PIL.ImageDraw.ImageDraw.textlength` - """ - return self.draw.textlength(text, font=font.font) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageEnhance.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageEnhance.py deleted file mode 100644 index 0e7e6dd8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageEnhance.py +++ /dev/null @@ -1,113 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# image enhancement classes -# -# For a background, see "Image Processing By Interpolation and -# Extrapolation", Paul Haeberli and Douglas Voorhies. Available -# at http://www.graficaobscura.com/interp/index.html -# -# History: -# 1996-03-23 fl Created -# 2009-06-16 fl Fixed mean calculation -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image, ImageFilter, ImageStat - - -class _Enhance: - image: Image.Image - degenerate: Image.Image - - def enhance(self, factor: float) -> Image.Image: - """ - Returns an enhanced image. - - :param factor: A floating point value controlling the enhancement. - Factor 1.0 always returns a copy of the original image, - lower factors mean less color (brightness, contrast, - etc), and higher values more. There are no restrictions - on this value. - :rtype: :py:class:`~PIL.Image.Image` - """ - return Image.blend(self.degenerate, self.image, factor) - - -class Color(_Enhance): - """Adjust image color balance. - - This class can be used to adjust the colour balance of an image, in - a manner similar to the controls on a colour TV set. An enhancement - factor of 0.0 gives a black and white image. A factor of 1.0 gives - the original image. 
- """ - - def __init__(self, image: Image.Image) -> None: - self.image = image - self.intermediate_mode = "L" - if "A" in image.getbands(): - self.intermediate_mode = "LA" - - if self.intermediate_mode != image.mode: - image = image.convert(self.intermediate_mode).convert(image.mode) - self.degenerate = image - - -class Contrast(_Enhance): - """Adjust image contrast. - - This class can be used to control the contrast of an image, similar - to the contrast control on a TV set. An enhancement factor of 0.0 - gives a solid gray image. A factor of 1.0 gives the original image. - """ - - def __init__(self, image: Image.Image) -> None: - self.image = image - if image.mode != "L": - image = image.convert("L") - mean = int(ImageStat.Stat(image).mean[0] + 0.5) - self.degenerate = Image.new("L", image.size, mean) - if self.degenerate.mode != self.image.mode: - self.degenerate = self.degenerate.convert(self.image.mode) - - if "A" in self.image.getbands(): - self.degenerate.putalpha(self.image.getchannel("A")) - - -class Brightness(_Enhance): - """Adjust image brightness. - - This class can be used to control the brightness of an image. An - enhancement factor of 0.0 gives a black image. A factor of 1.0 gives the - original image. - """ - - def __init__(self, image: Image.Image) -> None: - self.image = image - self.degenerate = Image.new(image.mode, image.size, 0) - - if "A" in image.getbands(): - self.degenerate.putalpha(image.getchannel("A")) - - -class Sharpness(_Enhance): - """Adjust image sharpness. - - This class can be used to adjust the sharpness of an image. An - enhancement factor of 0.0 gives a blurred image, a factor of 1.0 gives the - original image, and a factor of 2.0 gives a sharpened image. 
- """ - - def __init__(self, image: Image.Image) -> None: - self.image = image - self.degenerate = image.filter(ImageFilter.SMOOTH) - - if "A" in image.getbands(): - self.degenerate.putalpha(image.getchannel("A")) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageFile.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageFile.py deleted file mode 100644 index a1d98bd5..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageFile.py +++ /dev/null @@ -1,926 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# base class for image file handlers -# -# history: -# 1995-09-09 fl Created -# 1996-03-11 fl Fixed load mechanism. -# 1996-04-15 fl Added pcx/xbm decoders. -# 1996-04-30 fl Added encoders. -# 1996-12-14 fl Added load helpers -# 1997-01-11 fl Use encode_to_file where possible -# 1997-08-27 fl Flush output in _save -# 1998-03-05 fl Use memory mapping for some modes -# 1999-02-04 fl Use memory mapping also for "I;16" and "I;16B" -# 1999-05-31 fl Added image parser -# 2000-10-12 fl Set readonly flag on memory-mapped images -# 2002-03-20 fl Use better messages for common decoder errors -# 2003-04-21 fl Fall back on mmap/map_buffer if map is not available -# 2003-10-30 fl Added StubImageFile class -# 2004-02-25 fl Made incremental parser more robust -# -# Copyright (c) 1997-2004 by Secret Labs AB -# Copyright (c) 1995-2004 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import abc -import io -import itertools -import logging -import os -import struct -from typing import IO, Any, NamedTuple, cast - -from . import ExifTags, Image -from ._util import DeferredError, is_path - -TYPE_CHECKING = False -if TYPE_CHECKING: - from ._typing import StrOrBytesPath - -logger = logging.getLogger(__name__) - -MAXBLOCK = 65536 -""" -By default, Pillow processes image data in blocks. This helps to prevent excessive use -of resources. 
Codecs may disable this behaviour with ``_pulls_fd`` or ``_pushes_fd``. - -When reading an image, this is the number of bytes to read at once. - -When writing an image, this is the number of bytes to write at once. -If the image width times 4 is greater, then that will be used instead. -Plugins may also set a greater number. - -User code may set this to another number. -""" - -SAFEBLOCK = 1024 * 1024 - -LOAD_TRUNCATED_IMAGES = False -"""Whether or not to load truncated image files. User code may change this.""" - -ERRORS = { - -1: "image buffer overrun error", - -2: "decoding error", - -3: "unknown error", - -8: "bad configuration", - -9: "out of memory error", -} -""" -Dict of known error codes returned from :meth:`.PyDecoder.decode`, -:meth:`.PyEncoder.encode` :meth:`.PyEncoder.encode_to_pyfd` and -:meth:`.PyEncoder.encode_to_file`. -""" - - -# -# -------------------------------------------------------------------- -# Helpers - - -def _get_oserror(error: int, *, encoder: bool) -> OSError: - try: - msg = Image.core.getcodecstatus(error) - except AttributeError: - msg = ERRORS.get(error) - if not msg: - msg = f"{'encoder' if encoder else 'decoder'} error {error}" - msg += f" when {'writing' if encoder else 'reading'} image file" - return OSError(msg) - - -def _tilesort(t: _Tile) -> int: - # sort on offset - return t[2] - - -class _Tile(NamedTuple): - codec_name: str - extents: tuple[int, int, int, int] | None - offset: int = 0 - args: tuple[Any, ...] 
| str | None = None - - -# -# -------------------------------------------------------------------- -# ImageFile base class - - -class ImageFile(Image.Image): - """Base class for image file format handlers.""" - - def __init__( - self, fp: StrOrBytesPath | IO[bytes], filename: str | bytes | None = None - ) -> None: - super().__init__() - - self._min_frame = 0 - - self.custom_mimetype: str | None = None - - self.tile: list[_Tile] = [] - """ A list of tile descriptors """ - - self.readonly = 1 # until we know better - - self.decoderconfig: tuple[Any, ...] = () - self.decodermaxblock = MAXBLOCK - - if is_path(fp): - # filename - self.fp = open(fp, "rb") - self.filename = os.fspath(fp) - self._exclusive_fp = True - else: - # stream - self.fp = cast(IO[bytes], fp) - self.filename = filename if filename is not None else "" - # can be overridden - self._exclusive_fp = False - - try: - try: - self._open() - except ( - IndexError, # end of data - TypeError, # end of data (ord) - KeyError, # unsupported mode - EOFError, # got header but not the first frame - struct.error, - ) as v: - raise SyntaxError(v) from v - - if not self.mode or self.size[0] <= 0 or self.size[1] <= 0: - msg = "not identified by this driver" - raise SyntaxError(msg) - except BaseException: - # close the file only if we have opened it this constructor - if self._exclusive_fp: - self.fp.close() - raise - - def _open(self) -> None: - pass - - def _close_fp(self): - if getattr(self, "_fp", False) and not isinstance(self._fp, DeferredError): - if self._fp != self.fp: - self._fp.close() - self._fp = DeferredError(ValueError("Operation on closed image")) - if self.fp: - self.fp.close() - - def close(self) -> None: - """ - Closes the file pointer, if possible. - - This operation will destroy the image core and release its memory. - The image data will be unusable afterward. 
- - This function is required to close images that have multiple frames or - have not had their file read and closed by the - :py:meth:`~PIL.Image.Image.load` method. See :ref:`file-handling` for - more information. - """ - try: - self._close_fp() - self.fp = None - except Exception as msg: - logger.debug("Error closing: %s", msg) - - super().close() - - def get_child_images(self) -> list[ImageFile]: - child_images = [] - exif = self.getexif() - ifds = [] - if ExifTags.Base.SubIFDs in exif: - subifd_offsets = exif[ExifTags.Base.SubIFDs] - if subifd_offsets: - if not isinstance(subifd_offsets, tuple): - subifd_offsets = (subifd_offsets,) - for subifd_offset in subifd_offsets: - ifds.append((exif._get_ifd_dict(subifd_offset), subifd_offset)) - ifd1 = exif.get_ifd(ExifTags.IFD.IFD1) - if ifd1 and ifd1.get(ExifTags.Base.JpegIFOffset): - assert exif._info is not None - ifds.append((ifd1, exif._info.next)) - - offset = None - for ifd, ifd_offset in ifds: - assert self.fp is not None - current_offset = self.fp.tell() - if offset is None: - offset = current_offset - - fp = self.fp - if ifd is not None: - thumbnail_offset = ifd.get(ExifTags.Base.JpegIFOffset) - if thumbnail_offset is not None: - thumbnail_offset += getattr(self, "_exif_offset", 0) - self.fp.seek(thumbnail_offset) - - length = ifd.get(ExifTags.Base.JpegIFByteCount) - assert isinstance(length, int) - data = self.fp.read(length) - fp = io.BytesIO(data) - - with Image.open(fp) as im: - from . 
import TiffImagePlugin - - if thumbnail_offset is None and isinstance( - im, TiffImagePlugin.TiffImageFile - ): - im._frame_pos = [ifd_offset] - im._seek(0) - im.load() - child_images.append(im) - - if offset is not None: - assert self.fp is not None - self.fp.seek(offset) - return child_images - - def get_format_mimetype(self) -> str | None: - if self.custom_mimetype: - return self.custom_mimetype - if self.format is not None: - return Image.MIME.get(self.format.upper()) - return None - - def __getstate__(self) -> list[Any]: - return super().__getstate__() + [self.filename] - - def __setstate__(self, state: list[Any]) -> None: - self.tile = [] - if len(state) > 5: - self.filename = state[5] - super().__setstate__(state) - - def verify(self) -> None: - """Check file integrity""" - - # raise exception if something's wrong. must be called - # directly after open, and closes file when finished. - if self._exclusive_fp: - self.fp.close() - self.fp = None - - def load(self) -> Image.core.PixelAccess | None: - """Load image data based on tile list""" - - if not self.tile and self._im is None: - msg = "cannot load this image" - raise OSError(msg) - - pixel = Image.Image.load(self) - if not self.tile: - return pixel - - self.map: mmap.mmap | None = None - use_mmap = self.filename and len(self.tile) == 1 - - readonly = 0 - - # look for read/seek overrides - if hasattr(self, "load_read"): - read = self.load_read - # don't use mmap if there are custom read/seek functions - use_mmap = False - else: - read = self.fp.read - - if hasattr(self, "load_seek"): - seek = self.load_seek - use_mmap = False - else: - seek = self.fp.seek - - if use_mmap: - # try memory mapping - decoder_name, extents, offset, args = self.tile[0] - if isinstance(args, str): - args = (args, 0, 1) - if ( - decoder_name == "raw" - and isinstance(args, tuple) - and len(args) >= 3 - and args[0] == self.mode - and args[0] in Image._MAPMODES - ): - if offset < 0: - msg = "Tile offset cannot be negative" - raise 
ValueError(msg) - try: - # use mmap, if possible - import mmap - - with open(self.filename) as fp: - self.map = mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) - if offset + self.size[1] * args[1] > self.map.size(): - msg = "buffer is not large enough" - raise OSError(msg) - self.im = Image.core.map_buffer( - self.map, self.size, decoder_name, offset, args - ) - readonly = 1 - # After trashing self.im, - # we might need to reload the palette data. - if self.palette: - self.palette.dirty = 1 - except (AttributeError, OSError, ImportError): - self.map = None - - self.load_prepare() - err_code = -3 # initialize to unknown error - if not self.map: - # sort tiles in file order - self.tile.sort(key=_tilesort) - - # FIXME: This is a hack to handle TIFF's JpegTables tag. - prefix = getattr(self, "tile_prefix", b"") - - # Remove consecutive duplicates that only differ by their offset - self.tile = [ - list(tiles)[-1] - for _, tiles in itertools.groupby( - self.tile, lambda tile: (tile[0], tile[1], tile[3]) - ) - ] - for i, (decoder_name, extents, offset, args) in enumerate(self.tile): - seek(offset) - decoder = Image._getdecoder( - self.mode, decoder_name, args, self.decoderconfig - ) - try: - decoder.setimage(self.im, extents) - if decoder.pulls_fd: - decoder.setfd(self.fp) - err_code = decoder.decode(b"")[1] - else: - b = prefix - while True: - read_bytes = self.decodermaxblock - if i + 1 < len(self.tile): - next_offset = self.tile[i + 1].offset - if next_offset > offset: - read_bytes = next_offset - offset - try: - s = read(read_bytes) - except (IndexError, struct.error) as e: - # truncated png/gif - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = "image file is truncated" - raise OSError(msg) from e - - if not s: # truncated jpeg - if LOAD_TRUNCATED_IMAGES: - break - else: - msg = ( - "image file is truncated " - f"({len(b)} bytes not processed)" - ) - raise OSError(msg) - - b = b + s - n, err_code = decoder.decode(b) - if n < 0: - break - b = b[n:] - finally: - # 
Need to cleanup here to prevent leaks - decoder.cleanup() - - self.tile = [] - self.readonly = readonly - - self.load_end() - - if self._exclusive_fp and self._close_exclusive_fp_after_loading: - self.fp.close() - self.fp = None - - if not self.map and not LOAD_TRUNCATED_IMAGES and err_code < 0: - # still raised if decoder fails to return anything - raise _get_oserror(err_code, encoder=False) - - return Image.Image.load(self) - - def load_prepare(self) -> None: - # create image memory if necessary - if self._im is None: - self.im = Image.core.new(self.mode, self.size) - # create palette (optional) - if self.mode == "P": - Image.Image.load(self) - - def load_end(self) -> None: - # may be overridden - pass - - # may be defined for contained formats - # def load_seek(self, pos: int) -> None: - # pass - - # may be defined for blocked formats (e.g. PNG) - # def load_read(self, read_bytes: int) -> bytes: - # pass - - def _seek_check(self, frame: int) -> bool: - if ( - frame < self._min_frame - # Only check upper limit on frames if additional seek operations - # are not required to do so - or ( - not (hasattr(self, "_n_frames") and self._n_frames is None) - and frame >= getattr(self, "n_frames") + self._min_frame - ) - ): - msg = "attempt to seek outside sequence" - raise EOFError(msg) - - return self.tell() != frame - - -class StubHandler(abc.ABC): - def open(self, im: StubImageFile) -> None: - pass - - @abc.abstractmethod - def load(self, im: StubImageFile) -> Image.Image: - pass - - -class StubImageFile(ImageFile, metaclass=abc.ABCMeta): - """ - Base class for stub image loaders. - - A stub loader is an image loader that can identify files of a - certain format, but relies on external code to load the file. 
- """ - - @abc.abstractmethod - def _open(self) -> None: - pass - - def load(self) -> Image.core.PixelAccess | None: - loader = self._load() - if loader is None: - msg = f"cannot find loader for this {self.format} file" - raise OSError(msg) - image = loader.load(self) - assert image is not None - # become the other object (!) - self.__class__ = image.__class__ # type: ignore[assignment] - self.__dict__ = image.__dict__ - return image.load() - - @abc.abstractmethod - def _load(self) -> StubHandler | None: - """(Hook) Find actual image loader.""" - pass - - -class Parser: - """ - Incremental image parser. This class implements the standard - feed/close consumer interface. - """ - - incremental = None - image: Image.Image | None = None - data: bytes | None = None - decoder: Image.core.ImagingDecoder | PyDecoder | None = None - offset = 0 - finished = 0 - - def reset(self) -> None: - """ - (Consumer) Reset the parser. Note that you can only call this - method immediately after you've created a parser; parser - instances cannot be reused. - """ - assert self.data is None, "cannot reuse parsers" - - def feed(self, data: bytes) -> None: - """ - (Consumer) Feed data to the parser. - - :param data: A string buffer. - :exception OSError: If the parser failed to parse the image file. 
This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left, - specifically ``la`` for horizontal text and ``lt`` for - vertical text. See :ref:`text-anchors` for details. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: An internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module. 
- """ - return self.getmask2( - text, - mode, - direction=direction, - features=features, - language=language, - stroke_width=stroke_width, - anchor=anchor, - ink=ink, - start=start, - )[0] - - def getmask2( - self, - text: str | bytes, - mode: str = "", - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - stroke_width: float = 0, - anchor: str | None = None, - ink: int = 0, - start: tuple[float, float] | None = None, - *args: Any, - **kwargs: Any, - ) -> tuple[Image.core.ImagingCore, tuple[int, int]]: - """ - Create a bitmap for the text. - - If the font uses antialiasing, the bitmap should have mode ``L`` and use a - maximum value of 255. If the font has embedded color data, the bitmap - should have mode ``RGBA``. Otherwise, it should have mode ``1``. - - :param text: Text to render. - :param mode: Used by some graphics drivers to indicate what mode the - driver prefers; if empty, the renderer may return either - mode. Note that the mode is always a string, to simplify - C-level implementations. - - .. versionadded:: 1.1.5 - - :param direction: Direction of the text. It can be 'rtl' (right to - left), 'ltr' (left to right) or 'ttb' (top to bottom). - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional - font features that are not enabled by default, - for example 'dlig' or 'ss01', but can be also - used to turn off default font features for - example '-liga' to disable ligatures or '-kern' - to disable kerning. To get all supported - features, see - https://learn.microsoft.com/en-us/typography/opentype/spec/featurelist - Requires libraqm. - - .. versionadded:: 4.2.0 - - :param language: Language of the text. Different languages may use - different glyph shapes or ligatures. 
This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code - `_ - Requires libraqm. - - .. versionadded:: 6.0.0 - - :param stroke_width: The width of the text stroke. - - .. versionadded:: 6.2.0 - - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left, - specifically ``la`` for horizontal text and ``lt`` for - vertical text. See :ref:`text-anchors` for details. - - .. versionadded:: 8.0.0 - - :param ink: Foreground ink for rendering in RGBA mode. - - .. versionadded:: 8.0.0 - - :param start: Tuple of horizontal and vertical offset, as text may render - differently when starting at fractional coordinates. - - .. versionadded:: 9.4.0 - - :return: A tuple of an internal PIL storage memory instance as defined by the - :py:mod:`PIL.Image.core` interface module, and the text offset, the - gap between the starting coordinate and the first marking - """ - _string_length_check(text) - if start is None: - start = (0, 0) - - def fill(width: int, height: int) -> Image.core.ImagingCore: - size = (width, height) - Image._decompression_bomb_check(size) - return Image.core.fill("RGBA" if mode == "RGBA" else "L", size) - - return self.font.render( - text, - fill, - mode, - direction, - features, - language, - stroke_width, - kwargs.get("stroke_filled", False), - anchor, - ink, - start, - ) - - def font_variant( - self, - font: StrOrBytesPath | BinaryIO | None = None, - size: float | None = None, - index: int | None = None, - encoding: str | None = None, - layout_engine: Layout | None = None, - ) -> FreeTypeFont: - """ - Create a copy of this FreeTypeFont object, - using any specified arguments to override the settings. - - Parameters are identical to the parameters used to initialize this - object. - - :return: A FreeTypeFont object. 
- """ - if font is None: - try: - font = BytesIO(self.font_bytes) - except AttributeError: - font = self.path - return FreeTypeFont( - font=font, - size=self.size if size is None else size, - index=self.index if index is None else index, - encoding=self.encoding if encoding is None else encoding, - layout_engine=layout_engine or self.layout_engine, - ) - - def get_variation_names(self) -> list[bytes]: - """ - :returns: A list of the named styles in a variation font. - :exception OSError: If the font is not a variation font. - """ - names = self.font.getvarnames() - return [name.replace(b"\x00", b"") for name in names] - - def set_variation_by_name(self, name: str | bytes) -> None: - """ - :param name: The name of the style. - :exception OSError: If the font is not a variation font. - """ - names = self.get_variation_names() - if not isinstance(name, bytes): - name = name.encode() - index = names.index(name) + 1 - - if index == getattr(self, "_last_variation_index", None): - # When the same name is set twice in a row, - # there is an 'unknown freetype error' - # https://savannah.nongnu.org/bugs/?56186 - return - self._last_variation_index = index - - self.font.setvarname(index) - - def get_variation_axes(self) -> list[Axis]: - """ - :returns: A list of the axes in a variation font. - :exception OSError: If the font is not a variation font. - """ - axes = self.font.getvaraxes() - for axis in axes: - if axis["name"]: - axis["name"] = axis["name"].replace(b"\x00", b"") - return axes - - def set_variation_by_axes(self, axes: list[float]) -> None: - """ - :param axes: A list of values for each axis. - :exception OSError: If the font is not a variation font. - """ - self.font.setvaraxes(axes) - - -class TransposedFont: - """Wrapper for writing rotated or mirrored text""" - - def __init__( - self, font: ImageFont | FreeTypeFont, orientation: Image.Transpose | None = None - ): - """ - Wrapper that creates a transposed font from any existing font - object. 
- - :param font: A font object. - :param orientation: An optional orientation. If given, this should - be one of Image.Transpose.FLIP_LEFT_RIGHT, Image.Transpose.FLIP_TOP_BOTTOM, - Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_180, or - Image.Transpose.ROTATE_270. - """ - self.font = font - self.orientation = orientation # any 'transpose' argument, or None - - def getmask( - self, text: str | bytes, mode: str = "", *args: Any, **kwargs: Any - ) -> Image.core.ImagingCore: - im = self.font.getmask(text, mode, *args, **kwargs) - if self.orientation is not None: - return im.transpose(self.orientation) - return im - - def getbbox( - self, text: str | bytes, *args: Any, **kwargs: Any - ) -> tuple[int, int, float, float]: - # TransposedFont doesn't support getmask2, move top-left point to (0, 0) - # this has no effect on ImageFont and simulates anchor="lt" for FreeTypeFont - left, top, right, bottom = self.font.getbbox(text, *args, **kwargs) - width = right - left - height = bottom - top - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - return 0, 0, height, width - return 0, 0, width, height - - def getlength(self, text: str | bytes, *args: Any, **kwargs: Any) -> float: - if self.orientation in (Image.Transpose.ROTATE_90, Image.Transpose.ROTATE_270): - msg = "text length is undefined for text rotated by 90 or 270 degrees" - raise ValueError(msg) - return self.font.getlength(text, *args, **kwargs) - - -def load(filename: str) -> ImageFont: - """ - Load a font file. This function loads a font object from the given - bitmap font file, and returns the corresponding font object. For loading TrueType - or OpenType fonts instead, see :py:func:`~PIL.ImageFont.truetype`. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. 
- """ - f = ImageFont() - f._load_pilfont(filename) - return f - - -def truetype( - font: StrOrBytesPath | BinaryIO, - size: float = 10, - index: int = 0, - encoding: str = "", - layout_engine: Layout | None = None, -) -> FreeTypeFont: - """ - Load a TrueType or OpenType font from a file or file-like object, - and create a font object. This function loads a font object from the given - file or file-like object, and creates a font object for a font of the given - size. For loading bitmap fonts instead, see :py:func:`~PIL.ImageFont.load` - and :py:func:`~PIL.ImageFont.load_path`. - - Pillow uses FreeType to open font files. On Windows, be aware that FreeType - will keep the file open as long as the FreeTypeFont object exists. Windows - limits the number of files that can be open in C at once to 512, so if many - fonts are opened simultaneously and that limit is approached, an - ``OSError`` may be thrown, reporting that FreeType "cannot open resource". - A workaround would be to copy the file(s) into memory, and open that instead. - - This function requires the _imagingft service. - - :param font: A filename or file-like object containing a TrueType font. - If the file is not found in this filename, the loader may also - search in other directories, such as: - - * The :file:`fonts/` directory on Windows, - * :file:`/Library/Fonts/`, :file:`/System/Library/Fonts/` - and :file:`~/Library/Fonts/` on macOS. - * :file:`~/.local/share/fonts`, :file:`/usr/local/share/fonts`, - and :file:`/usr/share/fonts` on Linux; or those specified by - the ``XDG_DATA_HOME`` and ``XDG_DATA_DIRS`` environment variables - for user-installed and system-wide fonts, respectively. - - :param size: The requested size, in pixels. - :param index: Which font face to load (default is first available face). - :param encoding: Which font encoding to use (default is Unicode). 
Possible - encodings include (see the FreeType documentation for more - information): - - * "unic" (Unicode) - * "symb" (Microsoft Symbol) - * "ADOB" (Adobe Standard) - * "ADBE" (Adobe Expert) - * "ADBC" (Adobe Custom) - * "armn" (Apple Roman) - * "sjis" (Shift JIS) - * "gb " (PRC) - * "big5" - * "wans" (Extended Wansung) - * "joha" (Johab) - * "lat1" (Latin-1) - - This specifies the character set to use. It does not alter the - encoding of any text provided in subsequent operations. - :param layout_engine: Which layout engine to use, if available: - :attr:`.ImageFont.Layout.BASIC` or :attr:`.ImageFont.Layout.RAQM`. - If it is available, Raqm layout will be used by default. - Otherwise, basic layout will be used. - - Raqm layout is recommended for all non-English text. If Raqm layout - is not required, basic layout will have better performance. - - You can check support for Raqm layout using - :py:func:`PIL.features.check_feature` with ``feature="raqm"``. - - .. versionadded:: 4.2.0 - :return: A font object. - :exception OSError: If the file could not be read. - :exception ValueError: If the font size is not greater than zero. - """ - - def freetype(font: StrOrBytesPath | BinaryIO) -> FreeTypeFont: - return FreeTypeFont(font, size, index, encoding, layout_engine) - - try: - return freetype(font) - except OSError: - if not is_path(font): - raise - ttf_filename = os.path.basename(font) - - dirs = [] - if sys.platform == "win32": - # check the windows font repository - # NOTE: must use uppercase WINDIR, to work around bugs in - # 1.5.2's os.environ.get() - windir = os.environ.get("WINDIR") - if windir: - dirs.append(os.path.join(windir, "fonts")) - elif sys.platform in ("linux", "linux2"): - data_home = os.environ.get("XDG_DATA_HOME") - if not data_home: - # The freedesktop spec defines the following default directory for - # when XDG_DATA_HOME is unset or empty. This user-level directory - # takes precedence over system-level directories. 
- - data_home = os.path.expanduser("~/.local/share") - xdg_dirs = [data_home] - - data_dirs = os.environ.get("XDG_DATA_DIRS") - if not data_dirs: - # Similarly, defaults are defined for the system-level directories - data_dirs = "/usr/local/share:/usr/share" - xdg_dirs += data_dirs.split(":") - - dirs += [os.path.join(xdg_dir, "fonts") for xdg_dir in xdg_dirs] - elif sys.platform == "darwin": - dirs += [ - "/Library/Fonts", - "/System/Library/Fonts", - os.path.expanduser("~/Library/Fonts"), - ] - - ext = os.path.splitext(ttf_filename)[1] - first_font_with_a_different_extension = None - for directory in dirs: - for walkroot, walkdir, walkfilenames in os.walk(directory): - for walkfilename in walkfilenames: - if ext and walkfilename == ttf_filename: - return freetype(os.path.join(walkroot, walkfilename)) - elif not ext and os.path.splitext(walkfilename)[0] == ttf_filename: - fontpath = os.path.join(walkroot, walkfilename) - if os.path.splitext(fontpath)[1] == ".ttf": - return freetype(fontpath) - if not ext and first_font_with_a_different_extension is None: - first_font_with_a_different_extension = fontpath - if first_font_with_a_different_extension: - return freetype(first_font_with_a_different_extension) - raise - - -def load_path(filename: str | bytes) -> ImageFont: - """ - Load font file. Same as :py:func:`~PIL.ImageFont.load`, but searches for a - bitmap font along the Python path. - - :param filename: Name of font file. - :return: A font object. - :exception OSError: If the file could not be read. - """ - if not isinstance(filename, str): - filename = filename.decode("utf-8") - for directory in sys.path: - try: - return load(os.path.join(directory, filename)) - except OSError: - pass - msg = f'cannot find font file "{filename}" in sys.path' - if os.path.exists(filename): - msg += f', did you mean ImageFont.load("{filename}") instead?'
- - raise OSError(msg) - - -def load_default_imagefont() -> ImageFont: - f = ImageFont() - f._load_pilfont_data( - # courB08 - BytesIO( - base64.b64decode( - b""" -UElMZm9udAo7Ozs7OzsxMDsKREFUQQoAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAA//8AAQAAAAAAAAABAAEA -BgAAAAH/+gADAAAAAQAAAAMABgAGAAAAAf/6AAT//QADAAAABgADAAYAAAAA//kABQABAAYAAAAL -AAgABgAAAAD/+AAFAAEACwAAABAACQAGAAAAAP/5AAUAAAAQAAAAFQAHAAYAAP////oABQAAABUA -AAAbAAYABgAAAAH/+QAE//wAGwAAAB4AAwAGAAAAAf/5AAQAAQAeAAAAIQAIAAYAAAAB//kABAAB -ACEAAAAkAAgABgAAAAD/+QAE//0AJAAAACgABAAGAAAAAP/6AAX//wAoAAAALQAFAAYAAAAB//8A -BAACAC0AAAAwAAMABgAAAAD//AAF//0AMAAAADUAAQAGAAAAAf//AAMAAAA1AAAANwABAAYAAAAB -//kABQABADcAAAA7AAgABgAAAAD/+QAFAAAAOwAAAEAABwAGAAAAAP/5AAYAAABAAAAARgAHAAYA -AAAA//kABQAAAEYAAABLAAcABgAAAAD/+QAFAAAASwAAAFAABwAGAAAAAP/5AAYAAABQAAAAVgAH -AAYAAAAA//kABQAAAFYAAABbAAcABgAAAAD/+QAFAAAAWwAAAGAABwAGAAAAAP/5AAUAAABgAAAA -ZQAHAAYAAAAA//kABQAAAGUAAABqAAcABgAAAAD/+QAFAAAAagAAAG8ABwAGAAAAAf/8AAMAAABv -AAAAcQAEAAYAAAAA//wAAwACAHEAAAB0AAYABgAAAAD/+gAE//8AdAAAAHgABQAGAAAAAP/7AAT/ -/gB4AAAAfAADAAYAAAAB//oABf//AHwAAACAAAUABgAAAAD/+gAFAAAAgAAAAIUABgAGAAAAAP/5 
-AAYAAQCFAAAAiwAIAAYAAP////oABgAAAIsAAACSAAYABgAA////+gAFAAAAkgAAAJgABgAGAAAA -AP/6AAUAAACYAAAAnQAGAAYAAP////oABQAAAJ0AAACjAAYABgAA////+gAFAAAAowAAAKkABgAG -AAD////6AAUAAACpAAAArwAGAAYAAAAA//oABQAAAK8AAAC0AAYABgAA////+gAGAAAAtAAAALsA -BgAGAAAAAP/6AAQAAAC7AAAAvwAGAAYAAP////oABQAAAL8AAADFAAYABgAA////+gAGAAAAxQAA -AMwABgAGAAD////6AAUAAADMAAAA0gAGAAYAAP////oABQAAANIAAADYAAYABgAA////+gAGAAAA -2AAAAN8ABgAGAAAAAP/6AAUAAADfAAAA5AAGAAYAAP////oABQAAAOQAAADqAAYABgAAAAD/+gAF -AAEA6gAAAO8ABwAGAAD////6AAYAAADvAAAA9gAGAAYAAAAA//oABQAAAPYAAAD7AAYABgAA//// -+gAFAAAA+wAAAQEABgAGAAD////6AAYAAAEBAAABCAAGAAYAAP////oABgAAAQgAAAEPAAYABgAA -////+gAGAAABDwAAARYABgAGAAAAAP/6AAYAAAEWAAABHAAGAAYAAP////oABgAAARwAAAEjAAYA -BgAAAAD/+gAFAAABIwAAASgABgAGAAAAAf/5AAQAAQEoAAABKwAIAAYAAAAA//kABAABASsAAAEv -AAgABgAAAAH/+QAEAAEBLwAAATIACAAGAAAAAP/5AAX//AEyAAABNwADAAYAAAAAAAEABgACATcA -AAE9AAEABgAAAAH/+QAE//wBPQAAAUAAAwAGAAAAAP/7AAYAAAFAAAABRgAFAAYAAP////kABQAA -AUYAAAFMAAcABgAAAAD/+wAFAAABTAAAAVEABQAGAAAAAP/5AAYAAAFRAAABVwAHAAYAAAAA//sA -BQAAAVcAAAFcAAUABgAAAAD/+QAFAAABXAAAAWEABwAGAAAAAP/7AAYAAgFhAAABZwAHAAYAAP// -//kABQAAAWcAAAFtAAcABgAAAAD/+QAGAAABbQAAAXMABwAGAAAAAP/5AAQAAgFzAAABdwAJAAYA -AP////kABgAAAXcAAAF+AAcABgAAAAD/+QAGAAABfgAAAYQABwAGAAD////7AAUAAAGEAAABigAF -AAYAAP////sABQAAAYoAAAGQAAUABgAAAAD/+wAFAAABkAAAAZUABQAGAAD////7AAUAAgGVAAAB -mwAHAAYAAAAA//sABgACAZsAAAGhAAcABgAAAAD/+wAGAAABoQAAAacABQAGAAAAAP/7AAYAAAGn -AAABrQAFAAYAAAAA//kABgAAAa0AAAGzAAcABgAA////+wAGAAABswAAAboABQAGAAD////7AAUA -AAG6AAABwAAFAAYAAP////sABgAAAcAAAAHHAAUABgAAAAD/+wAGAAABxwAAAc0ABQAGAAD////7 -AAYAAgHNAAAB1AAHAAYAAAAA//sABQAAAdQAAAHZAAUABgAAAAH/+QAFAAEB2QAAAd0ACAAGAAAA -Av/6AAMAAQHdAAAB3gAHAAYAAAAA//kABAABAd4AAAHiAAgABgAAAAD/+wAF//0B4gAAAecAAgAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA 
-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAYAAAAB -//sAAwACAecAAAHpAAcABgAAAAD/+QAFAAEB6QAAAe4ACAAGAAAAAP/5AAYAAAHuAAAB9AAHAAYA -AAAA//oABf//AfQAAAH5AAUABgAAAAD/+QAGAAAB+QAAAf8ABwAGAAAAAv/5AAMAAgH/AAACAAAJ -AAYAAAAA//kABQABAgAAAAIFAAgABgAAAAH/+gAE//sCBQAAAggAAQAGAAAAAP/5AAYAAAIIAAAC -DgAHAAYAAAAB//kABf/+Ag4AAAISAAUABgAA////+wAGAAACEgAAAhkABQAGAAAAAP/7AAX//gIZ -AAACHgADAAYAAAAA//wABf/9Ah4AAAIjAAEABgAAAAD/+QAHAAACIwAAAioABwAGAAAAAP/6AAT/ -+wIqAAACLgABAAYAAAAA//kABP/8Ai4AAAIyAAMABgAAAAD/+gAFAAACMgAAAjcABgAGAAAAAf/5 -AAT//QI3AAACOgAEAAYAAAAB//kABP/9AjoAAAI9AAQABgAAAAL/+QAE//sCPQAAAj8AAgAGAAD/ -///7AAYAAgI/AAACRgAHAAYAAAAA//kABgABAkYAAAJMAAgABgAAAAH//AAD//0CTAAAAk4AAQAG -AAAAAf//AAQAAgJOAAACUQADAAYAAAAB//kABP/9AlEAAAJUAAQABgAAAAH/+QAF//4CVAAAAlgA -BQAGAAD////7AAYAAAJYAAACXwAFAAYAAP////kABgAAAl8AAAJmAAcABgAA////+QAGAAACZgAA -Am0ABwAGAAD////5AAYAAAJtAAACdAAHAAYAAAAA//sABQACAnQAAAJ5AAcABgAA////9wAGAAAC -eQAAAoAACQAGAAD////3AAYAAAKAAAAChwAJAAYAAP////cABgAAAocAAAKOAAkABgAA////9wAG -AAACjgAAApUACQAGAAD////4AAYAAAKVAAACnAAIAAYAAP////cABgAAApwAAAKjAAkABgAA//// -+gAGAAACowAAAqoABgAGAAAAAP/6AAUAAgKqAAACrwAIAAYAAP////cABQAAAq8AAAK1AAkABgAA -////9wAFAAACtQAAArsACQAGAAD////3AAUAAAK7AAACwQAJAAYAAP////gABQAAAsEAAALHAAgA -BgAAAAD/9wAEAAACxwAAAssACQAGAAAAAP/3AAQAAALLAAACzwAJAAYAAAAA//cABAAAAs8AAALT 
-AAkABgAAAAD/+AAEAAAC0wAAAtcACAAGAAD////6AAUAAALXAAAC3QAGAAYAAP////cABgAAAt0A -AALkAAkABgAAAAD/9wAFAAAC5AAAAukACQAGAAAAAP/3AAUAAALpAAAC7gAJAAYAAAAA//cABQAA -Au4AAALzAAkABgAAAAD/9wAFAAAC8wAAAvgACQAGAAAAAP/4AAUAAAL4AAAC/QAIAAYAAAAA//oA -Bf//Av0AAAMCAAUABgAA////+gAGAAADAgAAAwkABgAGAAD////3AAYAAAMJAAADEAAJAAYAAP// -//cABgAAAxAAAAMXAAkABgAA////9wAGAAADFwAAAx4ACQAGAAD////4AAYAAAAAAAoABwASAAYA -AP////cABgAAAAcACgAOABMABgAA////+gAFAAAADgAKABQAEAAGAAD////6AAYAAAAUAAoAGwAQ -AAYAAAAA//gABgAAABsACgAhABIABgAAAAD/+AAGAAAAIQAKACcAEgAGAAAAAP/4AAYAAAAnAAoA -LQASAAYAAAAA//gABgAAAC0ACgAzABIABgAAAAD/+QAGAAAAMwAKADkAEQAGAAAAAP/3AAYAAAA5 -AAoAPwATAAYAAP////sABQAAAD8ACgBFAA8ABgAAAAD/+wAFAAIARQAKAEoAEQAGAAAAAP/4AAUA -AABKAAoATwASAAYAAAAA//gABQAAAE8ACgBUABIABgAAAAD/+AAFAAAAVAAKAFkAEgAGAAAAAP/5 -AAUAAABZAAoAXgARAAYAAAAA//gABgAAAF4ACgBkABIABgAAAAD/+AAGAAAAZAAKAGoAEgAGAAAA -AP/4AAYAAABqAAoAcAASAAYAAAAA//kABgAAAHAACgB2ABEABgAAAAD/+AAFAAAAdgAKAHsAEgAG -AAD////4AAYAAAB7AAoAggASAAYAAAAA//gABQAAAIIACgCHABIABgAAAAD/+AAFAAAAhwAKAIwA -EgAGAAAAAP/4AAUAAACMAAoAkQASAAYAAAAA//gABQAAAJEACgCWABIABgAAAAD/+QAFAAAAlgAK -AJsAEQAGAAAAAP/6AAX//wCbAAoAoAAPAAYAAAAA//oABQABAKAACgClABEABgAA////+AAGAAAA -pQAKAKwAEgAGAAD////4AAYAAACsAAoAswASAAYAAP////gABgAAALMACgC6ABIABgAA////+QAG -AAAAugAKAMEAEQAGAAD////4AAYAAgDBAAoAyAAUAAYAAP////kABQACAMgACgDOABMABgAA//// -+QAGAAIAzgAKANUAEw== -""" - ) - ), - Image.open( - BytesIO( - base64.b64decode( - b""" -iVBORw0KGgoAAAANSUhEUgAAAx4AAAAUAQAAAAArMtZoAAAEwElEQVR4nABlAJr/AHVE4czCI/4u -Mc4b7vuds/xzjz5/3/7u/n9vMe7vnfH/9++vPn/xyf5zhxzjt8GHw8+2d83u8x27199/nxuQ6Od9 -M43/5z2I+9n9ZtmDBwMQECDRQw/eQIQohJXxpBCNVE6QCCAAAAD//wBlAJr/AgALyj1t/wINwq0g -LeNZUworuN1cjTPIzrTX6ofHWeo3v336qPzfEwRmBnHTtf95/fglZK5N0PDgfRTslpGBvz7LFc4F -IUXBWQGjQ5MGCx34EDFPwXiY4YbYxavpnhHFrk14CDAAAAD//wBlAJr/AgKqRooH2gAgPeggvUAA -Bu2WfgPoAwzRAABAAAAAAACQgLz/3Uv4Gv+gX7BJgDeeGP6AAAD1NMDzKHD7ANWr3loYbxsAD791 -NAADfcoIDyP44K/jv4Y63/Z+t98Ovt+ub4T48LAAAAD//wBlAJr/AuplMlADJAAAAGuAphWpqhMx 
-in0A/fRvAYBABPgBwBUgABBQ/sYAyv9g0bCHgOLoGAAAAAAAREAAwI7nr0ArYpow7aX8//9LaP/9 -SjdavWA8ePHeBIKB//81/83ndznOaXx379wAAAD//wBlAJr/AqDxW+D3AABAAbUh/QMnbQag/gAY -AYDAAACgtgD/gOqAAAB5IA/8AAAk+n9w0AAA8AAAmFRJuPo27ciC0cD5oeW4E7KA/wD3ECMAn2tt -y8PgwH8AfAxFzC0JzeAMtratAsC/ffwAAAD//wBlAJr/BGKAyCAA4AAAAvgeYTAwHd1kmQF5chkG -ABoMIHcL5xVpTfQbUqzlAAAErwAQBgAAEOClA5D9il08AEh/tUzdCBsXkbgACED+woQg8Si9VeqY -lODCn7lmF6NhnAEYgAAA/NMIAAAAAAD//2JgjLZgVGBg5Pv/Tvpc8hwGBjYGJADjHDrAwPzAjv/H -/Wf3PzCwtzcwHmBgYGcwbZz8wHaCAQMDOwMDQ8MCBgYOC3W7mp+f0w+wHOYxO3OG+e376hsMZjk3 -AAAAAP//YmCMY2A4wMAIN5e5gQETPD6AZisDAwMDgzSDAAPjByiHcQMDAwMDg1nOze1lByRu5/47 -c4859311AYNZzg0AAAAA//9iYGDBYihOIIMuwIjGL39/fwffA8b//xv/P2BPtzzHwCBjUQAAAAD/ -/yLFBrIBAAAA//9i1HhcwdhizX7u8NZNzyLbvT97bfrMf/QHI8evOwcSqGUJAAAA//9iYBB81iSw -pEE170Qrg5MIYydHqwdDQRMrAwcVrQAAAAD//2J4x7j9AAMDn8Q/BgYLBoaiAwwMjPdvMDBYM1Tv -oJodAAAAAP//Yqo/83+dxePWlxl3npsel9lvLfPcqlE9725C+acfVLMEAAAA//9i+s9gwCoaaGMR -evta/58PTEWzr21hufPjA8N+qlnBwAAAAAD//2JiWLci5v1+HmFXDqcnULE/MxgYGBj+f6CaJQAA -AAD//2Ji2FrkY3iYpYC5qDeGgeEMAwPDvwQBBoYvcTwOVLMEAAAA//9isDBgkP///0EOg9z35v// -Gc/eeW7BwPj5+QGZhANUswMAAAD//2JgqGBgYGBgqEMXlvhMPUsAAAAA//8iYDd1AAAAAP//AwDR -w7IkEbzhVQAAAABJRU5ErkJggg== -""" - ) - ) - ), - ) - return f - - -def load_default(size: float | None = None) -> FreeTypeFont | ImageFont: - """If FreeType support is available, load a version of Aileron Regular, - https://dotcolon.net/fonts/aileron, with a more limited character set. - - Otherwise, load a "better than nothing" font. - - .. versionadded:: 1.1.4 - - :param size: The font size of Aileron Regular. - - .. versionadded:: 10.1.0 - - :return: A font object. 
- """ - if isinstance(core, ModuleType) or size is not None: - return truetype( - BytesIO( - base64.b64decode( - b""" -AAEAAAAPAIAAAwBwRkZUTYwDlUAAADFoAAAAHEdERUYAqADnAAAo8AAAACRHUE9ThhmITwAAKfgAA -AduR1NVQnHxefoAACkUAAAA4k9TLzJovoHLAAABeAAAAGBjbWFw5lFQMQAAA6gAAAGqZ2FzcP//AA -MAACjoAAAACGdseWYmRXoPAAAGQAAAHfhoZWFkE18ayQAAAPwAAAA2aGhlYQboArEAAAE0AAAAJGh -tdHjjERZ8AAAB2AAAAdBsb2NhuOexrgAABVQAAADqbWF4cAC7AEYAAAFYAAAAIG5hbWUr+h5lAAAk -OAAAA6Jwb3N0D3oPTQAAJ9wAAAEKAAEAAAABGhxJDqIhXw889QALA+gAAAAA0Bqf2QAAAADhCh2h/ -2r/LgOxAyAAAAAIAAIAAAAAAAAAAQAAA8r/GgAAA7j/av9qA7EAAQAAAAAAAAAAAAAAAAAAAHQAAQ -AAAHQAQwAFAAAAAAACAAAAAQABAAAAQAAAAAAAAAADAfoBkAAFAAgCigJYAAAASwKKAlgAAAFeADI -BPgAAAAAFAAAAAAAAAAAAAAcAAAAAAAAAAAAAAABVS1dOAEAAIPsCAwL/GgDIA8oA5iAAAJMAAAAA -AhICsgAAACAAAwH0AAAAAAAAAU0AAADYAAAA8gA5AVMAVgJEAEYCRAA1AuQAKQKOAEAAsAArATsAZ -AE7AB4CMABVAkQAUADc/+EBEgAgANwAJQEv//sCRAApAkQAggJEADwCRAAtAkQAIQJEADkCRAArAk -QAMgJEACwCRAAxANwAJQDc/+ECRABnAkQAUAJEAEQB8wAjA1QANgJ/AB0CcwBkArsALwLFAGQCSwB -kAjcAZALGAC8C2gBkAQgAZAIgADcCYQBkAj8AZANiAGQCzgBkAuEALwJWAGQC3QAvAmsAZAJJADQC -ZAAiAqoAXgJuACADuAAaAnEAGQJFABMCTwAuATMAYgEv//sBJwAiAkQAUAH0ADIBLAApAhMAJAJjA -EoCEQAeAmcAHgIlAB4BIgAVAmcAHgJRAEoA7gA+AOn/8wIKAEoA9wBGA1cASgJRAEoCSgAeAmMASg -JnAB4BSgBKAcsAGAE5ABQCUABCAgIAAQMRAAEB4v/6AgEAAQHOABQBLwBAAPoAYAEvACECRABNA0Y -AJAItAHgBKgAcAkQAUAEsAHQAygAgAi0AOQD3ADYA9wAWAaEANgGhABYCbAAlAYMAeAGDADkA6/9q -AhsAFAIKABUB/QAVAAAAAwAAAAMAAAAcAAEAAAAAAKQAAwABAAAAHAAEAIgAAAAeABAAAwAOAH4Aq -QCrALEAtAC3ALsgGSAdICYgOiBEISL7Av//AAAAIACpAKsAsAC0ALcAuyAYIBwgJiA5IEQhIvsB// -//4/+5/7j/tP+y/7D/reBR4E/gR+A14CzfTwVxAAEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAEGAAABAAAAAAAAAAECAAAAAgAAAAAAAAAAAAAAAAAAAAEAAAMEBQYHCAkKCwwNDg8QERIT -FBUWFxgZGhscHR4fICEiIyQlJicoKSorLC0uLzAxMjM0NTY3ODk6Ozw9Pj9AQUJDREVGR0hJSktMT -U5PUFFSU1RVVldYWVpbXF1eX2BhAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAGQAAA -AAAAAAYnFmAAAAAABlAAAAAAAAAAAAAAAAAAAAAAAAAAAAY2htAAAAAAAAAABrbGlqAAAAAHAAbm9 
-ycwBnAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAmACYAJgAmAD4AUgCCAMoBCgFO -AVwBcgGIAaYBvAHKAdYB6AH2AgwCIAJKAogCpgLWAw4DIgNkA5wDugPUA+gD/AQQBEYEogS8BPoFJ -gVSBWoFgAWwBcoF1gX6BhQGJAZMBmgGiga0BuIHGgdUB2YHkAeiB8AH3AfyCAoIHAgqCDoITghcCG -oIogjSCPoJKglYCXwJwgnqCgIKKApACl4Klgq8CtwLDAs8C1YLjAuyC9oL7gwMDCYMSAxgDKAMrAz -qDQoNTA1mDYQNoA2uDcAN2g3oDfYODA4iDkoOXA5sDnoOnA7EDvwAAAAFAAAAAAH0ArwAAwAGAAkA -DAAPAAAxESERAxMhExcRASELARETAfT6qv6syKr+jgFUqsiqArz9RAGLAP/+1P8B/v3VAP8BLP4CA -P8AAgA5//IAuQKyAAMACwAANyMDMwIyFhQGIiY0oE4MZk84JCQ4JLQB/v3AJDgkJDgAAgBWAeUBPA -LfAAMABwAAEyMnMxcjJzOmRgpagkYKWgHl+vr6AAAAAAIARgAAAf4CsgAbAB8AAAEHMxUjByM3Iwc -jNyM1MzcjNTM3MwczNzMHMxUrAQczAZgdZXEvOi9bLzovWmYdZXEvOi9bLzovWp9bHlsBn4w429vb -2ziMONvb29s4jAAAAAMANf+mAg4DDAAfACYALAAAJRQGBxUjNS4BJzMeARcRLgE0Njc1MxUeARcjJ -icVHgEBFBYXNQ4BExU+ATU0Ag5xWDpgcgRcBz41Xl9oVTpVYwpcC1ttXP6cLTQuM5szOrVRZwlOTQ -ZqVzZECAEAGlukZAlOTQdrUG8O7iNlAQgxNhDlCDj+8/YGOjReAAAAAAUAKf/yArsCvAAHAAsAFQA -dACcAABIyFhQGIiY0EyMBMwQiBhUUFjI2NTQSMhYUBiImNDYiBhUUFjI2NTR5iFBQiFCVVwHAV/5c -OiMjOiPmiFBQiFCxOiMjOiMCvFaSVlaS/ZoCsjIzMC80NC8w/uNWklZWkhozMC80NC8wAAAAAgBA/ -/ICbgLAACIALgAAARUjEQYjIiY1NDY3LgE1NDYzMhcVJiMiBhUUFhcWOwE1MxUFFBYzMjc1IyIHDg -ECbmBcYYOOVkg7R4hsQjY4Q0RNRD4SLDxW/pJUXzksPCkUUk0BgUb+zBVUZ0BkDw5RO1huCkULQzp -COAMBcHDHRz0J/AIHRQAAAAEAKwHlAIUC3wADAAATIycze0YKWgHl+gAAAAABAGT/sAEXAwwACQAA -EzMGEBcjLgE0Nt06dXU6OUBAAwzG/jDGVePs4wAAAAEAHv+wANEDDAAJAAATMx4BFAYHIzYQHjo5Q -EA5OnUDDFXj7ONVxgHQAAAAAQBVAFIB2wHbAA4AAAE3FwcXBycHJzcnNxcnMwEtmxOfcTJjYzJxnx -ObCj4BKD07KYolmZkliik7PbMAAQBQAFUB9AIlAAsAAAEjFSM1IzUzNTMVMwH0tTq1tTq1AR/Kyjj -OzgAAAAAB/+H/iACMAGQABAAANwcjNzOMWlFOXVrS3AAAAQAgAP8A8gE3AAMAABMjNTPy0tIA/zgA -AQAl//IApQByAAcAADYyFhQGIiY0STgkJDgkciQ4JCQ4AAAAAf/7/+IBNALQAAMAABcjEzM5Pvs+H -gLuAAAAAAIAKf/yAhsCwAADAAcAABIgECA2IBAgKQHy/g5gATL+zgLA/TJEAkYAAAAAAQCCAAABlg -KyAAgAAAERIxEHNTc2MwGWVr6SIygCsv1OAldxW1sWAAEAPAAAAg4CwAAZAAA3IRUhNRM+ATU0JiM -iDwEjNz4BMzIWFRQGB7kBUv4x+kI2QTt+EAFWAQp8aGVtSl5GRjEA/0RVLzlLmAoKa3FsUkNxXQAA 
-AAEALf/yAhYCwAAqAAABHgEVFAYjIi8BMxceATMyNjU0KwE1MzI2NTQmIyIGDwEjNz4BMzIWFRQGA -YxBSZJo2RUBVgEHV0JBUaQREUBUQzc5TQcBVgEKfGhfcEMBbxJbQl1x0AoKRkZHPn9GSD80QUVCCg -pfbGBPOlgAAAACACEAAAIkArIACgAPAAAlIxUjNSE1ATMRMyMRBg8BAiRXVv6qAVZWV60dHLCurq4 -rAdn+QgFLMibzAAABADn/8gIZArIAHQAAATIWFRQGIyIvATMXFjMyNjU0JiMiByMTIRUhBzc2ATNv -d5Fl1RQBVgIad0VSTkVhL1IwAYj+vh8rMAHHgGdtgcUKCoFXTU5bYgGRRvAuHQAAAAACACv/8gITA -sAAFwAjAAABMhYVFAYjIhE0NjMyFh8BIycmIyIDNzYTMjY1NCYjIgYVFBYBLmp7imr0l3RZdAgBXA -IYZ5wKJzU6QVNJSz5SUAHSgWltiQFGxcNlVQoKdv7sPiz+ZF1LTmJbU0lhAAAAAQAyAAACGgKyAAY -AAAEVASMBITUCGv6oXAFL/oECsij9dgJsRgAAAAMALP/xAhgCwAAWACAALAAAAR4BFRQGIyImNTQ2 -Ny4BNTQ2MhYVFAYmIgYVFBYyNjU0AzI2NTQmIyIGFRQWAZQ5S5BmbIpPOjA7ecp5P2F8Q0J8RIVJS -0pLTEtOAW0TXTxpZ2ZqPF0SE1A3VWVlVTdQ/UU0N0RENzT9/ko+Ok1NOj1LAAIAMf/yAhkCwAAXAC -MAAAEyERQGIyImLwEzFxYzMhMHBiMiJjU0NhMyNjU0JiMiBhUUFgEl9Jd0WXQIAVwCGGecCic1SWp -7imo+UlBAQVNJAsD+usXDZVUKCnYBFD4sgWltif5kW1NJYV1LTmIAAAACACX/8gClAiAABwAPAAAS -MhYUBiImNBIyFhQGIiY0STgkJDgkJDgkJDgkAiAkOCQkOP52JDgkJDgAAAAC/+H/iAClAiAABwAMA -AASMhYUBiImNBMHIzczSTgkJDgkaFpSTl4CICQ4JCQ4/mba5gAAAQBnAB4B+AH0AAYAAAENARUlNS -UB+P6qAVb+bwGRAbCmpkbJRMkAAAIAUAC7AfQBuwADAAcAAAEhNSERITUhAfT+XAGk/lwBpAGDOP8 -AOAABAEQAHgHVAfQABgAAARUFNS0BNQHV/m8BVv6qAStEyUSmpkYAAAAAAgAj//IB1ALAABgAIAAA -ATIWFRQHDgEHIz4BNz4BNTQmIyIGByM+ARIyFhQGIiY0AQRibmktIAJWBSEqNig+NTlHBFoDezQ4J -CQ4JALAZ1BjaS03JS1DMD5LLDQ/SUVgcv2yJDgkJDgAAAAAAgA2/5gDFgKYADYAQgAAAQMGFRQzMj -Y1NCYjIg4CFRQWMzI2NxcGIyImNTQ+AjMyFhUUBiMiJwcGIyImNTQ2MzIfATcHNzYmIyIGFRQzMjY -Cej8EJjJJlnBAfGQ+oHtAhjUYg5OPx0h2k06Os3xRWQsVLjY5VHtdPBwJETcJDyUoOkZEJz8B0f74 -EQ8kZl6EkTFZjVOLlyknMVm1pmCiaTq4lX6CSCknTVRmmR8wPdYnQzxuSWVGAAIAHQAAAncCsgAHA -AoAACUjByMTMxMjATMDAcj+UVz4dO5d/sjPZPT0ArL9TgE6ATQAAAADAGQAAAJMArIAEAAbACcAAA -EeARUUBgcGKwERMzIXFhUUJRUzMjc2NTQnJiMTPgE1NCcmKwEVMzIBvkdHZkwiNt7LOSGq/oeFHBt -hahIlSTM+cB8Yj5UWAW8QT0VYYgwFArIEF5Fv1eMED2NfDAL93AU+N24PBP0AAAAAAQAv//ICjwLA -ABsAAAEyFh8BIycmIyIGFRQWMzI/ATMHDgEjIiY1NDYBdX+PCwFWAiKiaHx5ZaIiAlYBCpWBk6a0A 
-sCAagoKpqN/gaOmCgplhcicn8sAAAIAZAAAAp8CsgAMABkAAAEeARUUBgcGKwERMzITPgE1NCYnJi -sBETMyAY59lJp8IzXN0jUVWmdjWRs5d3I4Aq4QqJWUug8EArL9mQ+PeHGHDgX92gAAAAABAGQAAAI -vArIACwAAJRUhESEVIRUhFSEVAi/+NQHB/pUBTf6zRkYCskbwRvAAAAABAGQAAAIlArIACQAAExUh -FSERIxEhFboBQ/69VgHBAmzwRv7KArJGAAAAAAEAL//yAo8CwAAfAAABMxEjNQcGIyImNTQ2MzIWH -wEjJyYjIgYVFBYzMjY1IwGP90wfPnWTprSSf48LAVYCIqJofHllVG+hAU3+s3hARsicn8uAagoKpq -N/gaN1XAAAAAEAZAAAAowCsgALAAABESMRIREjETMRIRECjFb+hFZWAXwCsv1OAS7+0gKy/sQBPAA -AAAABAGQAAAC6ArIAAwAAMyMRM7pWVgKyAAABADf/8gHoArIAEwAAAREUBw4BIyImLwEzFxYzMjc2 -NREB6AIFcGpgbQIBVgIHfXQKAQKy/lYxIltob2EpKYyEFD0BpwAAAAABAGQAAAJ0ArIACwAACQEjA -wcVIxEzEQEzATsBJ3ntQlZWAVVlAWH+nwEnR+ACsv6RAW8AAQBkAAACLwKyAAUAACUVIREzEQIv/j -VWRkYCsv2UAAABAGQAAAMUArIAFAAAAREjETQ3BgcDIwMmJxYVESMRMxsBAxRWAiMxemx8NxsCVo7 -MywKy/U4BY7ZLco7+nAFmoFxLtP6dArL9lwJpAAAAAAEAZAAAAoACsgANAAAhIwEWFREjETMBJjUR -MwKAhP67A1aEAUUDVAJeeov+pwKy/aJ5jAFZAAAAAgAv//ICuwLAAAkAEwAAEiAWFRQGICY1NBIyN -jU0JiIGFRTbATSsrP7MrNrYenrYegLAxaKhxsahov47nIeIm5uIhwACAGQAAAJHArIADgAYAAABHg -EVFAYHBisBESMRMzITNjQnJisBETMyAZRUX2VOHzuAVtY7GlxcGDWIiDUCrgtnVlVpCgT+5gKy/rU -V1BUF/vgAAAACAC//zAK9AsAAEgAcAAAlFhcHJiMiBwYjIiY1NDYgFhUUJRQWMjY1NCYiBgI9PUMx -UDcfKh8omqysATSs/dR62Hp62HpICTg7NgkHxqGixcWitbWHnJyHiJubAAIAZAAAAlgCsgAXACMAA -CUWFyMmJyYnJisBESMRMzIXHgEVFAYHFiUzMjc+ATU0JyYrAQIqDCJfGQwNWhAhglbiOx9QXEY1Tv -6bhDATMj1lGSyMtYgtOXR0BwH+1wKyBApbU0BSESRAAgVAOGoQBAABADT/8gIoAsAAJQAAATIWFyM -uASMiBhUUFhceARUUBiMiJiczHgEzMjY1NCYnLgE1NDYBOmd2ClwGS0E6SUNRdW+HZnKKC1wPWkQ9 -Uk1cZGuEAsBwXUJHNjQ3OhIbZVZZbm5kREo+NT5DFRdYUFdrAAAAAAEAIgAAAmQCsgAHAAABIxEjE -SM1IQJk9lb2AkICbP2UAmxGAAEAXv/yAmQCsgAXAAABERQHDgEiJicmNREzERQXHgEyNjc2NRECZA -IIgfCBCAJWAgZYmlgGAgKy/k0qFFxzc1wUKgGz/lUrEkRQUEQSKwGrAAAAAAEAIAAAAnoCsgAGAAA -hIwMzGwEzAYJ07l3N1FwCsv2PAnEAAAEAGgAAA7ECsgAMAAABAyMLASMDMxsBMxsBA7HAcZyicrZi -kaB0nJkCsv1OAlP9rQKy/ZsCW/2kAmYAAAEAGQAAAm8CsgALAAAhCwEjEwMzGwEzAxMCCsrEY/bkY -re+Y/D6AST+3AFcAVb+5gEa/q3+oQAAAQATAAACUQKyAAgAAAERIxEDMxsBMwFdVvRjwLphARD+8A 
-EQAaL+sQFPAAABAC4AAAI5ArIACQAAJRUhNQEhNSEVAQI5/fUBof57Aen+YUZGQgIqRkX92QAAAAA -BAGL/sAEFAwwABwAAARUjETMVIxEBBWlpowMMOP0UOANcAAAB//v/4gE0AtAAAwAABSMDMwE0Pvs+ -HgLuAAAAAQAi/7AAxQMMAAcAABcjNTMRIzUzxaNpaaNQOALsOAABAFAA1wH0AmgABgAAJQsBIxMzE -wGwjY1GsESw1wFZ/qcBkf5vAAAAAQAy/6oBwv/iAAMAAAUhNSEBwv5wAZBWOAAAAAEAKQJEALYCsg -ADAAATIycztjhVUAJEbgAAAAACACT/8gHQAiAAHQAlAAAhJwcGIyImNTQ2OwE1NCcmIyIHIz4BMzI -XFh0BFBcnMjY9ASYVFAF6CR0wVUtgkJoiAgdgaQlaBm1Zrg4DCuQ9R+5MOSFQR1tbDiwUUXBUXowf -J8c9SjRORzYSgVwAAAAAAgBK//ICRQLfABEAHgAAATIWFRQGIyImLwEVIxEzETc2EzI2NTQmIyIGH -QEUFgFUcYCVbiNJEyNWVigySElcU01JXmECIJd4i5QTEDRJAt/+3jkq/hRuZV55ZWsdX14AAQAe// -IB9wIgABgAAAEyFhcjJiMiBhUUFjMyNjczDgEjIiY1NDYBF152DFocbEJXU0A1Rw1aE3pbaoKQAiB -oWH5qZm1tPDlaXYuLgZcAAAACAB7/8gIZAt8AEQAeAAABESM1BwYjIiY1NDYzMhYfAREDMjY9ATQm -IyIGFRQWAhlWKDJacYCVbiNJEyOnSV5hQUlcUwLf/SFVOSqXeIuUExA0ARb9VWVrHV9ebmVeeQACA -B7/8gH9AiAAFQAbAAABFAchHgEzMjY3Mw4BIyImNTQ2MzIWJyIGByEmAf0C/oAGUkA1SwlaD4FXbI -WObmt45UBVBwEqDQEYFhNjWD84W16Oh3+akU9aU60AAAEAFQAAARoC8gAWAAATBh0BMxUjESMRIzU -zNTQ3PgEzMhcVJqcDbW1WOTkDB0k8Hx5oAngVITRC/jQBzEIsJRs5PwVHEwAAAAIAHv8uAhkCIAAi -AC8AAAERFAcOASMiLwEzFx4BMzI2NzY9AQcGIyImNTQ2MzIWHwE1AzI2PQE0JiMiBhUUFgIZAQSEd -NwRAVcBBU5DTlUDASgyWnGAlW4jSRMjp0leYUFJXFMCEv5wSh1zeq8KCTI8VU0ZIQk5Kpd4i5QTED -RJ/iJlax1fXm5lXnkAAQBKAAACCgLkABcAAAEWFREjETQnLgEHDgEdASMRMxE3NjMyFgIIAlYCBDs -6RVRWViE5UVViAYUbQP7WASQxGzI7AQJyf+kC5P7TPSxUAAACAD4AAACsAsAABwALAAASMhYUBiIm -NBMjETNeLiAgLiBiVlYCwCAuICAu/WACEgAC//P/LgCnAsAABwAVAAASMhYUBiImNBcRFAcGIyInN -RY3NjURWS4gIC4gYgMLcRwNSgYCAsAgLiAgLo79wCUbZAJGBzMOHgJEAAAAAQBKAAACCALfAAsAAC -EnBxUjETMREzMHEwGTwTJWVvdu9/rgN6kC3/4oAQv6/ugAAQBG//wA3gLfAA8AABMRFBceATcVBiM -iJicmNRGcAQIcIxkkKi4CAQLf/bkhERoSBD4EJC8SNAJKAAAAAQBKAAADEAIgACQAAAEWFREjETQn -JiMiFREjETQnJiMiFREjETMVNzYzMhYXNzYzMhYDCwVWBAxedFYEDF50VlYiJko7ThAvJkpEVAGfI -jn+vAEcQyRZ1v76ARxDJFnW/voCEk08HzYtRB9HAAAAAAEASgAAAgoCIAAWAAABFhURIxE0JyYjIg -YdASMRMxU3NjMyFgIIAlYCCXBEVVZWITlRVWIBhRtA/tYBJDEbbHR/6QISWz0sVAAAAAACAB7/8gI 
-sAiAABwARAAASIBYUBiAmNBIyNjU0JiIGFRSlAQCHh/8Ah7ieWlqeWgIgn/Cfn/D+s3ZfYHV1YF8A -AgBK/zwCRQIgABEAHgAAATIWFRQGIyImLwERIxEzFTc2EzI2NTQmIyIGHQEUFgFUcYCVbiNJEyNWV -igySElcU01JXmECIJd4i5QTEDT+8wLWVTkq/hRuZV55ZWsdX14AAgAe/zwCGQIgABEAHgAAAREjEQ -cGIyImNTQ2MzIWHwE1AzI2PQE0JiMiBhUUFgIZVigyWnGAlW4jSRMjp0leYUFJXFMCEv0qARk5Kpd -4i5QTEDRJ/iJlax1fXm5lXnkAAQBKAAABPgIeAA0AAAEyFxUmBhURIxEzFTc2ARoWDkdXVlYwIwIe -B0EFVlf+0gISU0cYAAEAGP/yAa0CIAAjAAATMhYXIyYjIgYVFBYXHgEVFAYjIiYnMxYzMjY1NCYnL -gE1NDbkV2MJWhNdKy04PF1XbVhWbgxaE2ktOjlEUllkAiBaS2MrJCUoEBlPQkhOVFZoKCUmLhIWSE -BIUwAAAAEAFP/4ARQCiQAXAAATERQXHgE3FQYjIiYnJjURIzUzNTMVMxWxAQMmMx8qMjMEAUdHVmM -BzP7PGw4mFgY/BSwxDjQBNUJ7e0IAAAABAEL/8gICAhIAFwAAAREjNQcGIyImJyY1ETMRFBceATMy -Nj0BAgJWITlRT2EKBVYEBkA1RFECEv3uWj4qTToiOQE+/tIlJC43c4DpAAAAAAEAAQAAAfwCEgAGA -AABAyMDMxsBAfzJaclfop8CEv3uAhL+LQHTAAABAAEAAAMLAhIADAAAAQMjCwEjAzMbATMbAQMLqW -Z2dmapY3t0a3Z7AhL97gG+/kICEv5AAcD+QwG9AAAB//oAAAHWAhIACwAAARMjJwcjEwMzFzczARq -8ZIuKY763ZoWFYwEO/vLV1QEMAQbNzQAAAQAB/y4B+wISABEAAAEDDgEjIic1FjMyNj8BAzMbAQH7 -2iFZQB8NDRIpNhQH02GenQIS/cFVUAJGASozEwIt/i4B0gABABQAAAGxAg4ACQAAJRUhNQEhNSEVA -QGx/mMBNP7iAYL+zkREQgGIREX+ewAAAAABAED/sAEOAwwALAAAASMiBhUUFxYVFAYHHgEVFAcGFR -QWOwEVIyImNTQ3NjU0JzU2NTQnJjU0NjsBAQ4MKiMLDS4pKS4NCyMqDAtERAwLUlILDERECwLUGBk -WTlsgKzUFBTcrIFtOFhkYOC87GFVMIkUIOAhFIkxVGDsvAAAAAAEAYP84AJoDIAADAAAXIxEzmjo6 -yAPoAAEAIf+wAO8DDAAsAAATFQYVFBcWFRQGKwE1MzI2NTQnJjU0NjcuATU0NzY1NCYrATUzMhYVF -AcGFRTvUgsMREQLDCojCw0uKSkuDQsjKgwLREQMCwF6OAhFIkxVGDsvOBgZFk5bICs1BQU3KyBbTh -YZGDgvOxhVTCJFAAABAE0A3wH2AWQAEwAAATMUIyImJyYjIhUjNDMyFhcWMzIBvjhuGywtQR0xOG4 -bLC1BHTEBZIURGCNMhREYIwAAAwAk/94DIgLoAAcAEQApAAAAIBYQBiAmECQgBhUUFiA2NTQlMhYX -IyYjIgYUFjMyNjczDgEjIiY1NDYBAQFE3d3+vN0CB/7wubkBELn+xVBnD1wSWDo+QTcqOQZcEmZWX -HN2Aujg/rbg4AFKpr+Mjb6+jYxbWEldV5ZZNShLVn5na34AAgB4AFIB9AGeAAUACwAAAQcXIyc3Mw -cXIyc3AUqJiUmJifOJiUmJiQGepqampqampqYAAAIAHAHSAQ4CwAAHAA8AABIyFhQGIiY0NiIGFBY -yNjRgakREakSTNCEhNCECwEJqQkJqCiM4IyM4AAAAAAIAUAAAAfQCCwALAA8AAAEzFSMVIzUjNTM1 
-MxMhNSEBP7W1OrW1OrX+XAGkAVs4tLQ4sP31OAAAAQB0AkQBAQKyAAMAABMjNzOsOD1QAkRuAAAAA -AEAIADsAKoBdgAHAAASMhYUBiImNEg6KCg6KAF2KDooKDoAAAIAOQBSAbUBngAFAAsAACUHIzcnMw -UHIzcnMwELiUmJiUkBM4lJiYlJ+KampqampqYAAAABADYB5QDhAt8ABAAAEzczByM2Xk1OXQHv8Po -AAQAWAeUAwQLfAAQAABMHIzczwV5NTl0C1fD6AAIANgHlAYsC3wAEAAkAABM3MwcjPwEzByM2Xk1O -XapeTU5dAe/w+grw+gAAAgAWAeUBawLfAAQACQAAEwcjNzMXByM3M8FeTU5dql5NTl0C1fD6CvD6A -AADACX/8gI1AHIABwAPABcAADYyFhQGIiY0NjIWFAYiJjQ2MhYUBiImNEk4JCQ4JOw4JCQ4JOw4JC -Q4JHIkOCQkOCQkOCQkOCQkOCQkOAAAAAEAeABSAUoBngAFAAABBxcjJzcBSomJSYmJAZ6mpqamAAA -AAAEAOQBSAQsBngAFAAAlByM3JzMBC4lJiYlJ+KampgAAAf9qAAABgQKyAAMAACsBATM/VwHAVwKy -AAAAAAIAFAHIAdwClAAHABQAABMVIxUjNSM1BRUjNwcjJxcjNTMXN9pKMkoByDICKzQqATJLKysCl -CmjoykBy46KiY3Lm5sAAQAVAAABvALyABgAAAERIxEjESMRIzUzNTQ3NjMyFxUmBgcGHQEBvFbCVj -k5AxHHHx5iVgcDAg798gHM/jQBzEIOJRuWBUcIJDAVIRYAAAABABX//AHkAvIAJQAAJR4BNxUGIyI -mJyY1ESYjIgcGHQEzFSMRIxEjNTM1NDc2MzIXERQBowIcIxkkKi4CAR4nXgwDbW1WLy8DEbNdOmYa -EQQ/BCQvEjQCFQZWFSEWQv40AcxCDiUblhP9uSEAAAAAAAAWAQ4AAQAAAAAAAAATACgAAQAAAAAAA -QAHAEwAAQAAAAAAAgAHAGQAAQAAAAAAAwAaAKIAAQAAAAAABAAHAM0AAQAAAAAABQA8AU8AAQAAAA -AABgAPAawAAQAAAAAACAALAdQAAQAAAAAACQALAfgAAQAAAAAACwAXAjQAAQAAAAAADAAXAnwAAwA -BBAkAAAAmAAAAAwABBAkAAQAOADwAAwABBAkAAgAOAFQAAwABBAkAAwA0AGwAAwABBAkABAAOAL0A -AwABBAkABQB4ANUAAwABBAkABgAeAYwAAwABBAkACAAWAbwAAwABBAkACQAWAeAAAwABBAkACwAuA -gQAAwABBAkADAAuAkwATgBvACAAUgBpAGcAaAB0AHMAIABSAGUAcwBlAHIAdgBlAGQALgAATm8gUm -lnaHRzIFJlc2VydmVkLgAAQQBpAGwAZQByAG8AbgAAQWlsZXJvbgAAUgBlAGcAdQBsAGEAcgAAUmV -ndWxhcgAAMQAuADEAMAAyADsAVQBLAFcATgA7AEEAaQBsAGUAcgBvAG4ALQBSAGUAZwB1AGwAYQBy -AAAxLjEwMjtVS1dOO0FpbGVyb24tUmVndWxhcgAAQQBpAGwAZQByAG8AbgAAQWlsZXJvbgAAVgBlA -HIAcwBpAG8AbgAgADEALgAxADAAMgA7AFAAUwAgADAAMAAxAC4AMQAwADIAOwBoAG8AdABjAG8Abg -B2ACAAMQAuADAALgA3ADAAOwBtAGEAawBlAG8AdABmAC4AbABpAGIAMgAuADUALgA1ADgAMwAyADk -AAFZlcnNpb24gMS4xMDI7UFMgMDAxLjEwMjtob3Rjb252IDEuMC43MDttYWtlb3RmLmxpYjIuNS41 -ODMyOQAAQQBpAGwAZQByAG8AbgAtAFIAZQBnAHUAbABhAHIAAEFpbGVyb24tUmVndWxhcgAAUwBvA 
-HIAYQAgAFMAYQBnAGEAbgBvAABTb3JhIFNhZ2FubwAAUwBvAHIAYQAgAFMAYQBnAGEAbgBvAABTb3 -JhIFNhZ2FubwAAaAB0AHQAcAA6AC8ALwB3AHcAdwAuAGQAbwB0AGMAbwBsAG8AbgAuAG4AZQB0AAB -odHRwOi8vd3d3LmRvdGNvbG9uLm5ldAAAaAB0AHQAcAA6AC8ALwB3AHcAdwAuAGQAbwB0AGMAbwBs -AG8AbgAuAG4AZQB0AABodHRwOi8vd3d3LmRvdGNvbG9uLm5ldAAAAAACAAAAAAAA/4MAMgAAAAAAA -AAAAAAAAAAAAAAAAAAAAHQAAAABAAIAAwAEAAUABgAHAAgACQAKAAsADAANAA4ADwAQABEAEgATAB -QAFQAWABcAGAAZABoAGwAcAB0AHgAfACAAIQAiACMAJAAlACYAJwAoACkAKgArACwALQAuAC8AMAA -xADIAMwA0ADUANgA3ADgAOQA6ADsAPAA9AD4APwBAAEEAQgBDAEQARQBGAEcASABJAEoASwBMAE0A -TgBPAFAAUQBSAFMAVABVAFYAVwBYAFkAWgBbAFwAXQBeAF8AYABhAIsAqQCDAJMAjQDDAKoAtgC3A -LQAtQCrAL4AvwC8AIwAwADBAAAAAAAB//8AAgABAAAADAAAABwAAAACAAIAAwBxAAEAcgBzAAIABA -AAAAIAAAABAAAACgBMAGYAAkRGTFQADmxhdG4AGgAEAAAAAP//AAEAAAAWAANDQVQgAB5NT0wgABZ -ST00gABYAAP//AAEAAAAA//8AAgAAAAEAAmxpZ2EADmxvY2wAFAAAAAEAAQAAAAEAAAACAAYAEAAG -AAAAAgASADQABAAAAAEATAADAAAAAgAQABYAAQAcAAAAAQABAE8AAQABAGcAAQABAE8AAwAAAAIAE -AAWAAEAHAAAAAEAAQAvAAEAAQBnAAEAAQAvAAEAGgABAAgAAgAGAAwAcwACAE8AcgACAEwAAQABAE -kAAAABAAAACgBGAGAAAkRGTFQADmxhdG4AHAAEAAAAAP//AAIAAAABABYAA0NBVCAAFk1PTCAAFlJ -PTSAAFgAA//8AAgAAAAEAAmNwc3AADmtlcm4AFAAAAAEAAAAAAAEAAQACAAYADgABAAAAAQASAAIA -AAACAB4ANgABAAoABQAFAAoAAgABACQAPQAAAAEAEgAEAAAAAQAMAAEAOP/nAAEAAQAkAAIGigAEA -AAFJAXKABoAGQAA//gAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAD/sv+4/+z/7v/MAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAD/xAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA/9T/6AAAAAD/8QAA -ABD/vQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/7gAAAAAAAAAAAAAAAAAA//MAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABIAAAAAAAAAAP/5AAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP/gAAD/4AAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA//L/9AAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAA/+gAAAAAAAkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP/zAAAAAA 
-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP/mAAAAAAAAAAAAAAAAAAD -/4gAA//AAAAAA//YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/+AAAAAAAAP/OAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/zv/qAAAAAP/0AAAACAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAP/ZAAD/egAA/1kAAAAA/5D/rgAAAAAAAAAAAA -AAAAAAAAAAAAAAAAD/9AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAD/8AAA/7b/8P+wAAD/8P/E/98AAAAA/8P/+P/0//oAAAAAAAAAAAAA//gA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA/+AAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/w//C/9MAAP/SAAD/9wAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAD/yAAA/+kAAAAA//QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/9wAAAAD//QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAP/2AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAP/cAAAAAAAAAAAAAAAA/7YAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAP/8AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAD/6AAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAQAkAFAAEAAAAAQACwAAABcA -BgAAAAAAAAAIAA4AAAAAAAsAEgAAAAAAAAATABkAAwANAAAAAQAJAAAAAAAAAAAAAAAAAAAAGAAAA -AAABwAAAAAAAAAAAAAAFQAFAAAAAAAYABgAAAAUAAAACgAAAAwAAgAPABEAFgAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFAAEAEQBdAAYAAAAAAAAAAAAAAAAAAAAAAAA -AAAAAAAAAAAAAAAAAAAAAAAAAAQAAAAcAAAAAAAAABwAAAAAACAAAAAAAAAAAAAcAAAAHAAAAEwAJ -ABUADgAPAAAACwAQAAAAAAAAAAAAAAAAAAUAGAACAAIAAgAAAAIAGAAXAAAAGAAAABYAFgACABYAA -gAWAAAAEQADAAoAFAAMAA0ABAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAASAAAAEgAGAAEAHgAkAC -YAJwApACoALQAuAC8AMgAzADcAOAA5ADoAPAA9AEUASABOAE8AUgBTAFUAVwBZAFoAWwBcAF0AcwA -AAAAAAQAAAADa3tfFAAAAANAan9kAAAAA4QodoQ== -""" - ) - ), - 10 if size is None else size, - layout_engine=Layout.BASIC, - ) - return load_default_imagefont() diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageGrab.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageGrab.py deleted file mode 100644 index 1eb45073..00000000 --- 
a/pptx-env/lib/python3.12/site-packages/PIL/ImageGrab.py +++ /dev/null @@ -1,196 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# screen grabber -# -# History: -# 2001-04-26 fl created -# 2001-09-17 fl use builtin driver, if present -# 2002-11-19 fl added grabclipboard support -# -# Copyright (c) 2001-2002 by Secret Labs AB -# Copyright (c) 2001-2002 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io -import os -import shutil -import subprocess -import sys -import tempfile - -from . import Image - -TYPE_CHECKING = False -if TYPE_CHECKING: - from . import ImageWin - - -def grab( - bbox: tuple[int, int, int, int] | None = None, - include_layered_windows: bool = False, - all_screens: bool = False, - xdisplay: str | None = None, - window: int | ImageWin.HWND | None = None, -) -> Image.Image: - im: Image.Image - if xdisplay is None: - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - args = ["screencapture"] - if bbox: - left, top, right, bottom = bbox - args += ["-R", f"{left},{top},{right-left},{bottom-top}"] - subprocess.call(args + ["-x", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_resized = im.resize((right - left, bottom - top)) - im.close() - return im_resized - return im - elif sys.platform == "win32": - if window is not None: - all_screens = -1 - offset, size, data = Image.core.grabscreen_win32( - include_layered_windows, - all_screens, - int(window) if window is not None else 0, - ) - im = Image.frombytes( - "RGB", - size, - data, - # RGB, 32-bit line padding, origin lower left corner - "raw", - "BGR", - (size[0] * 3 + 3) & -4, - -1, - ) - if bbox: - x0, y0 = offset - left, top, right, bottom = bbox - im = im.crop((left - x0, top - y0, right - x0, bottom - y0)) - return im - # Cast to Optional[str] needed for Windows and macOS. 
- display_name: str | None = xdisplay - try: - if not Image.core.HAVE_XCB: - msg = "Pillow was built without XCB support" - raise OSError(msg) - size, data = Image.core.grabscreen_x11(display_name) - except OSError: - if display_name is None and sys.platform not in ("darwin", "win32"): - if shutil.which("gnome-screenshot"): - args = ["gnome-screenshot", "-f"] - elif shutil.which("grim"): - args = ["grim"] - elif shutil.which("spectacle"): - args = ["spectacle", "-n", "-b", "-f", "-o"] - else: - raise - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - subprocess.call(args + [filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_cropped = im.crop(bbox) - im.close() - return im_cropped - return im - else: - raise - else: - im = Image.frombytes("RGB", size, data, "raw", "BGRX", size[0] * 4, 1) - if bbox: - im = im.crop(bbox) - return im - - -def grabclipboard() -> Image.Image | list[str] | None: - if sys.platform == "darwin": - p = subprocess.run( - ["osascript", "-e", "get the clipboard as Β«class PNGfΒ»"], - capture_output=True, - ) - if p.returncode != 0: - return None - - import binascii - - data = io.BytesIO(binascii.unhexlify(p.stdout[11:-3])) - return Image.open(data) - elif sys.platform == "win32": - fmt, data = Image.core.grabclipboard_win32() - if fmt == "file": # CF_HDROP - import struct - - o = struct.unpack_from("I", data)[0] - if data[16] == 0: - files = data[o:].decode("mbcs").split("\0") - else: - files = data[o:].decode("utf-16le").split("\0") - return files[: files.index("")] - if isinstance(data, bytes): - data = io.BytesIO(data) - if fmt == "png": - from . import PngImagePlugin - - return PngImagePlugin.PngImageFile(data) - elif fmt == "DIB": - from . 
import BmpImagePlugin - - return BmpImagePlugin.DibImageFile(data) - return None - else: - if os.getenv("WAYLAND_DISPLAY"): - session_type = "wayland" - elif os.getenv("DISPLAY"): - session_type = "x11" - else: # Session type check failed - session_type = None - - if shutil.which("wl-paste") and session_type in ("wayland", None): - args = ["wl-paste", "-t", "image"] - elif shutil.which("xclip") and session_type in ("x11", None): - args = ["xclip", "-selection", "clipboard", "-t", "image/png", "-o"] - else: - msg = "wl-paste or xclip is required for ImageGrab.grabclipboard() on Linux" - raise NotImplementedError(msg) - - p = subprocess.run(args, capture_output=True) - if p.returncode != 0: - err = p.stderr - for silent_error in [ - # wl-paste, when the clipboard is empty - b"Nothing is copied", - # Ubuntu/Debian wl-paste, when the clipboard is empty - b"No selection", - # Ubuntu/Debian wl-paste, when an image isn't available - b"No suitable type of content copied", - # wl-paste or Ubuntu/Debian xclip, when an image isn't available - b" not available", - # xclip, when an image isn't available - b"cannot convert ", - # xclip, when the clipboard isn't initialized - b"xclip: Error: There is no owner for the ", - ]: - if silent_error in err: - return None - msg = f"{args[0]} error" - if err: - msg += f": {err.strip().decode()}" - raise ChildProcessError(msg) - - data = io.BytesIO(p.stdout) - im = Image.open(data) - im.load() - return im diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageMath.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageMath.py deleted file mode 100644 index dfdc50c0..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageMath.py +++ /dev/null @@ -1,314 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# a simple math add-on for the Python Imaging Library -# -# History: -# 1999-02-15 fl Original PIL Plus release -# 2005-05-05 fl Simplified and cleaned up for PIL 1.1.6 -# 2005-09-12 fl Fixed int() and float() for Python 2.4.1 -# 
-# Copyright (c) 1999-2005 by Secret Labs AB -# Copyright (c) 2005 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import builtins - -from . import Image, _imagingmath - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from types import CodeType - from typing import Any - - -class _Operand: - """Wraps an image operand, providing standard operators""" - - def __init__(self, im: Image.Image): - self.im = im - - def __fixup(self, im1: _Operand | float) -> Image.Image: - # convert image to suitable mode - if isinstance(im1, _Operand): - # argument was an image. - if im1.im.mode in ("1", "L"): - return im1.im.convert("I") - elif im1.im.mode in ("I", "F"): - return im1.im - else: - msg = f"unsupported mode: {im1.im.mode}" - raise ValueError(msg) - else: - # argument was a constant - if isinstance(im1, (int, float)) and self.im.mode in ("1", "L", "I"): - return Image.new("I", self.im.size, im1) - else: - return Image.new("F", self.im.size, im1) - - def apply( - self, - op: str, - im1: _Operand | float, - im2: _Operand | float | None = None, - mode: str | None = None, - ) -> _Operand: - im_1 = self.__fixup(im1) - if im2 is None: - # unary operation - out = Image.new(mode or im_1.mode, im_1.size, None) - try: - op = getattr(_imagingmath, f"{op}_{im_1.mode}") - except AttributeError as e: - msg = f"bad operand type for '{op}'" - raise TypeError(msg) from e - _imagingmath.unop(op, out.getim(), im_1.getim()) - else: - # binary operation - im_2 = self.__fixup(im2) - if im_1.mode != im_2.mode: - # convert both arguments to floating point - if im_1.mode != "F": - im_1 = im_1.convert("F") - if im_2.mode != "F": - im_2 = im_2.convert("F") - if im_1.size != im_2.size: - # crop both arguments to a common size - size = ( - min(im_1.size[0], im_2.size[0]), - min(im_1.size[1], im_2.size[1]), - ) - if im_1.size != size: - im_1 = im_1.crop((0, 0) + size) - if im_2.size != 
size: - im_2 = im_2.crop((0, 0) + size) - out = Image.new(mode or im_1.mode, im_1.size, None) - try: - op = getattr(_imagingmath, f"{op}_{im_1.mode}") - except AttributeError as e: - msg = f"bad operand type for '{op}'" - raise TypeError(msg) from e - _imagingmath.binop(op, out.getim(), im_1.getim(), im_2.getim()) - return _Operand(out) - - # unary operators - def __bool__(self) -> bool: - # an image is "true" if it contains at least one non-zero pixel - return self.im.getbbox() is not None - - def __abs__(self) -> _Operand: - return self.apply("abs", self) - - def __pos__(self) -> _Operand: - return self - - def __neg__(self) -> _Operand: - return self.apply("neg", self) - - # binary operators - def __add__(self, other: _Operand | float) -> _Operand: - return self.apply("add", self, other) - - def __radd__(self, other: _Operand | float) -> _Operand: - return self.apply("add", other, self) - - def __sub__(self, other: _Operand | float) -> _Operand: - return self.apply("sub", self, other) - - def __rsub__(self, other: _Operand | float) -> _Operand: - return self.apply("sub", other, self) - - def __mul__(self, other: _Operand | float) -> _Operand: - return self.apply("mul", self, other) - - def __rmul__(self, other: _Operand | float) -> _Operand: - return self.apply("mul", other, self) - - def __truediv__(self, other: _Operand | float) -> _Operand: - return self.apply("div", self, other) - - def __rtruediv__(self, other: _Operand | float) -> _Operand: - return self.apply("div", other, self) - - def __mod__(self, other: _Operand | float) -> _Operand: - return self.apply("mod", self, other) - - def __rmod__(self, other: _Operand | float) -> _Operand: - return self.apply("mod", other, self) - - def __pow__(self, other: _Operand | float) -> _Operand: - return self.apply("pow", self, other) - - def __rpow__(self, other: _Operand | float) -> _Operand: - return self.apply("pow", other, self) - - # bitwise - def __invert__(self) -> _Operand: - return self.apply("invert", 
self) - - def __and__(self, other: _Operand | float) -> _Operand: - return self.apply("and", self, other) - - def __rand__(self, other: _Operand | float) -> _Operand: - return self.apply("and", other, self) - - def __or__(self, other: _Operand | float) -> _Operand: - return self.apply("or", self, other) - - def __ror__(self, other: _Operand | float) -> _Operand: - return self.apply("or", other, self) - - def __xor__(self, other: _Operand | float) -> _Operand: - return self.apply("xor", self, other) - - def __rxor__(self, other: _Operand | float) -> _Operand: - return self.apply("xor", other, self) - - def __lshift__(self, other: _Operand | float) -> _Operand: - return self.apply("lshift", self, other) - - def __rshift__(self, other: _Operand | float) -> _Operand: - return self.apply("rshift", self, other) - - # logical - def __eq__(self, other: _Operand | float) -> _Operand: # type: ignore[override] - return self.apply("eq", self, other) - - def __ne__(self, other: _Operand | float) -> _Operand: # type: ignore[override] - return self.apply("ne", self, other) - - def __lt__(self, other: _Operand | float) -> _Operand: - return self.apply("lt", self, other) - - def __le__(self, other: _Operand | float) -> _Operand: - return self.apply("le", self, other) - - def __gt__(self, other: _Operand | float) -> _Operand: - return self.apply("gt", self, other) - - def __ge__(self, other: _Operand | float) -> _Operand: - return self.apply("ge", self, other) - - -# conversions -def imagemath_int(self: _Operand) -> _Operand: - return _Operand(self.im.convert("I")) - - -def imagemath_float(self: _Operand) -> _Operand: - return _Operand(self.im.convert("F")) - - -# logical -def imagemath_equal(self: _Operand, other: _Operand | float | None) -> _Operand: - return self.apply("eq", self, other, mode="I") - - -def imagemath_notequal(self: _Operand, other: _Operand | float | None) -> _Operand: - return self.apply("ne", self, other, mode="I") - - -def imagemath_min(self: _Operand, other: 
_Operand | float | None) -> _Operand: - return self.apply("min", self, other) - - -def imagemath_max(self: _Operand, other: _Operand | float | None) -> _Operand: - return self.apply("max", self, other) - - -def imagemath_convert(self: _Operand, mode: str) -> _Operand: - return _Operand(self.im.convert(mode)) - - -ops = { - "int": imagemath_int, - "float": imagemath_float, - "equal": imagemath_equal, - "notequal": imagemath_notequal, - "min": imagemath_min, - "max": imagemath_max, - "convert": imagemath_convert, -} - - -def lambda_eval(expression: Callable[[dict[str, Any]], Any], **kw: Any) -> Any: - """ - Returns the result of an image function. - - :py:mod:`~PIL.ImageMath` only supports single-layer images. To process multi-band - images, use the :py:meth:`~PIL.Image.Image.split` method or - :py:func:`~PIL.Image.merge` function. - - :param expression: A function that receives a dictionary. - :param **kw: Values to add to the function's dictionary. - :return: The expression result. This is usually an image object, but can - also be an integer, a floating point value, or a pixel tuple, - depending on the expression. - """ - - args: dict[str, Any] = ops.copy() - args.update(kw) - for k, v in args.items(): - if isinstance(v, Image.Image): - args[k] = _Operand(v) - - out = expression(args) - try: - return out.im - except AttributeError: - return out - - -def unsafe_eval(expression: str, **kw: Any) -> Any: - """ - Evaluates an image expression. This uses Python's ``eval()`` function to process - the expression string, and carries the security risks of doing so. It is not - recommended to process expressions without considering this. - :py:meth:`~lambda_eval` is a more secure alternative. - - :py:mod:`~PIL.ImageMath` only supports single-layer images. To process multi-band - images, use the :py:meth:`~PIL.Image.Image.split` method or - :py:func:`~PIL.Image.merge` function. - - :param expression: A string containing a Python-style expression. 
- :param **kw: Values to add to the evaluation context. - :return: The evaluated expression. This is usually an image object, but can - also be an integer, a floating point value, or a pixel tuple, - depending on the expression. - """ - - # build execution namespace - args: dict[str, Any] = ops.copy() - for k in kw: - if "__" in k or hasattr(builtins, k): - msg = f"'{k}' not allowed" - raise ValueError(msg) - - args.update(kw) - for k, v in args.items(): - if isinstance(v, Image.Image): - args[k] = _Operand(v) - - compiled_code = compile(expression, "<string>", "eval") - - def scan(code: CodeType) -> None: - for const in code.co_consts: - if type(const) is type(compiled_code): - scan(const) - - for name in code.co_names: - if name not in args and name != "abs": - msg = f"'{name}' not allowed" - raise ValueError(msg) - - scan(compiled_code) - out = builtins.eval(expression, {"__builtins": {"abs": abs}}, args) - try: - return out.im - except AttributeError: - return out diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageMode.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageMode.py deleted file mode 100644 index b7c6c863..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageMode.py +++ /dev/null @@ -1,85 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard mode descriptors -# -# History: -# 2006-03-20 fl Added -# -# Copyright (c) 2006 by Secret Labs AB. -# Copyright (c) 2006 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import sys -from functools import lru_cache -from typing import NamedTuple - - -class ModeDescriptor(NamedTuple): - """Wrapper for mode strings.""" - - mode: str - bands: tuple[str, ...]
- basemode: str - basetype: str - typestr: str - - def __str__(self) -> str: - return self.mode - - -@lru_cache -def getmode(mode: str) -> ModeDescriptor: - """Gets a mode descriptor for the given mode.""" - endian = "<" if sys.byteorder == "little" else ">" - - modes = { - # core modes - # Bits need to be extended to bytes - "1": ("L", "L", ("1",), "|b1"), - "L": ("L", "L", ("L",), "|u1"), - "I": ("L", "I", ("I",), f"{endian}i4"), - "F": ("L", "F", ("F",), f"{endian}f4"), - "P": ("P", "L", ("P",), "|u1"), - "RGB": ("RGB", "L", ("R", "G", "B"), "|u1"), - "RGBX": ("RGB", "L", ("R", "G", "B", "X"), "|u1"), - "RGBA": ("RGB", "L", ("R", "G", "B", "A"), "|u1"), - "CMYK": ("RGB", "L", ("C", "M", "Y", "K"), "|u1"), - "YCbCr": ("RGB", "L", ("Y", "Cb", "Cr"), "|u1"), - # UNDONE - unsigned |u1i1i1 - "LAB": ("RGB", "L", ("L", "A", "B"), "|u1"), - "HSV": ("RGB", "L", ("H", "S", "V"), "|u1"), - # extra experimental modes - "RGBa": ("RGB", "L", ("R", "G", "B", "a"), "|u1"), - "LA": ("L", "L", ("L", "A"), "|u1"), - "La": ("L", "L", ("L", "a"), "|u1"), - "PA": ("RGB", "L", ("P", "A"), "|u1"), - } - if mode in modes: - base_mode, base_type, bands, type_str = modes[mode] - return ModeDescriptor(mode, bands, base_mode, base_type, type_str) - - mapping_modes = { - # I;16 == I;16L, and I;32 == I;32L - "I;16": "<u2", - "I;16S": "<i2", - "I;16B": ">u2", - "I;16BS": ">i2", - "I;16N": f"{endian}u2", - "I;16NS": f"{endian}i2", - "I;32": "<u4", - "I;32B": ">u4", - "I;32L": "<u4", - "I;32S": "<i4", - "I;32BS": ">i4", - "I;32LS": "<i4", -from __future__ import annotations - -import re - -from . import Image, _imagingmorph - -LUT_SIZE = 1 << 9 - -# fmt: off -ROTATION_MATRIX = [ - 6, 3, 0, - 7, 4, 1, - 8, 5, 2, -] -MIRROR_MATRIX = [ - 2, 1, 0, - 5, 4, 3, - 8, 7, 6, -] -# fmt: on - - -class LutBuilder: - """A class for building a MorphLut from a descriptive language - - The input patterns is a list of a strings sequences like these:: - - 4:(... - .1. - 111)->1 - - (whitespaces including linebreaks are ignored).
The option 4 - describes a series of symmetry operations (in this case a - 4-rotation), the pattern is described by: - - - . or X - Ignore - - 1 - Pixel is on - - 0 - Pixel is off - - The result of the operation is described after "->" string. - - The default is to return the current pixel value, which is - returned if no other match is found. - - Operations: - - - 4 - 4 way rotation - - N - Negate - - 1 - Dummy op for no other operation (an op must always be given) - - M - Mirroring - - Example:: - - lb = LutBuilder(patterns = ["4:(... .1. 111)->1"]) - lut = lb.build_lut() - - """ - - def __init__( - self, patterns: list[str] | None = None, op_name: str | None = None - ) -> None: - if patterns is not None: - self.patterns = patterns - else: - self.patterns = [] - self.lut: bytearray | None = None - if op_name is not None: - known_patterns = { - "corner": ["1:(... ... ...)->0", "4:(00. 01. ...)->1"], - "dilation4": ["4:(... .0. .1.)->1"], - "dilation8": ["4:(... .0. .1.)->1", "4:(... .0. ..1)->1"], - "erosion4": ["4:(... .1. .0.)->0"], - "erosion8": ["4:(... .1. .0.)->0", "4:(... .1. ..0)->0"], - "edge": [ - "1:(... ... ...)->0", - "4:(.0. .1. ...)->1", - "4:(01. .1. ...)->1", - ], - } - if op_name not in known_patterns: - msg = f"Unknown pattern {op_name}!" - raise Exception(msg) - - self.patterns = known_patterns[op_name] - - def add_patterns(self, patterns: list[str]) -> None: - self.patterns += patterns - - def build_default_lut(self) -> None: - symbols = [0, 1] - m = 1 << 4 # pos of current pixel - self.lut = bytearray(symbols[(i & m) > 0] for i in range(LUT_SIZE)) - - def get_lut(self) -> bytearray | None: - return self.lut - - def _string_permute(self, pattern: str, permutation: list[int]) -> str: - """string_permute takes a pattern and a permutation and returns the - string permuted according to the permutation list. 
- """ - assert len(permutation) == 9 - return "".join(pattern[p] for p in permutation) - - def _pattern_permute( - self, basic_pattern: str, options: str, basic_result: int - ) -> list[tuple[str, int]]: - """pattern_permute takes a basic pattern and its result and clones - the pattern according to the modifications described in the $options - parameter. It returns a list of all cloned patterns.""" - patterns = [(basic_pattern, basic_result)] - - # rotations - if "4" in options: - res = patterns[-1][1] - for i in range(4): - patterns.append( - (self._string_permute(patterns[-1][0], ROTATION_MATRIX), res) - ) - # mirror - if "M" in options: - n = len(patterns) - for pattern, res in patterns[:n]: - patterns.append((self._string_permute(pattern, MIRROR_MATRIX), res)) - - # negate - if "N" in options: - n = len(patterns) - for pattern, res in patterns[:n]: - # Swap 0 and 1 - pattern = pattern.replace("0", "Z").replace("1", "0").replace("Z", "1") - res = 1 - int(res) - patterns.append((pattern, res)) - - return patterns - - def build_lut(self) -> bytearray: - """Compile all patterns into a morphology lut. - - TBD :Build based on (file) morphlut:modify_lut - """ - self.build_default_lut() - assert self.lut is not None - patterns = [] - - # Parse and create symmetries of the patterns strings - for p in self.patterns: - m = re.search(r"(\w):?\s*\((.+?)\)\s*->\s*(\d)", p.replace("\n", "")) - if not m: - msg = 'Syntax error in pattern "' + p + '"' - raise Exception(msg) - options = m.group(1) - pattern = m.group(2) - result = int(m.group(3)) - - # Get rid of spaces - pattern = pattern.replace(" ", "").replace("\n", "") - - patterns += self._pattern_permute(pattern, options, result) - - # compile the patterns into regular expressions for speed - compiled_patterns = [] - for pattern in patterns: - p = pattern[0].replace(".", "X").replace("X", "[01]") - compiled_patterns.append((re.compile(p), pattern[1])) - - # Step through table and find patterns that match. 
-        # Note that all the patterns are searched. The last one
-        # caught overrides
-        for i in range(LUT_SIZE):
-            # Build the bit pattern
-            bitpattern = bin(i)[2:]
-            bitpattern = ("0" * (9 - len(bitpattern)) + bitpattern)[::-1]
-
-            for pattern, r in compiled_patterns:
-                if pattern.match(bitpattern):
-                    self.lut[i] = [0, 1][r]
-
-        return self.lut
-
-
-class MorphOp:
-    """A class for binary morphological operators"""
-
-    def __init__(
-        self,
-        lut: bytearray | None = None,
-        op_name: str | None = None,
-        patterns: list[str] | None = None,
-    ) -> None:
-        """Create a binary morphological operator"""
-        self.lut = lut
-        if op_name is not None:
-            self.lut = LutBuilder(op_name=op_name).build_lut()
-        elif patterns is not None:
-            self.lut = LutBuilder(patterns=patterns).build_lut()
-
-    def apply(self, image: Image.Image) -> tuple[int, Image.Image]:
-        """Run a single morphological operation on an image
-
-        Returns a tuple of the number of changed pixels and the
-        morphed image"""
-        if self.lut is None:
-            msg = "No operator loaded"
-            raise Exception(msg)
-
-        if image.mode != "L":
-            msg = "Image mode must be L"
-            raise ValueError(msg)
-        outimage = Image.new(image.mode, image.size, None)
-        count = _imagingmorph.apply(bytes(self.lut), image.getim(), outimage.getim())
-        return count, outimage
-
-    def match(self, image: Image.Image) -> list[tuple[int, int]]:
-        """Get a list of coordinates matching the morphological operation on
-        an image.
-
-        Returns a list of tuples of (x,y) coordinates
-        of all matching pixels. See :ref:`coordinate-system`."""
-        if self.lut is None:
-            msg = "No operator loaded"
-            raise Exception(msg)
-
-        if image.mode != "L":
-            msg = "Image mode must be L"
-            raise ValueError(msg)
-        return _imagingmorph.match(bytes(self.lut), image.getim())
-
-    def get_on_pixels(self, image: Image.Image) -> list[tuple[int, int]]:
-        """Get a list of all turned on pixels in a binary image
-
-        Returns a list of tuples of (x,y) coordinates
-        of all matching pixels. See :ref:`coordinate-system`."""
-
-        if image.mode != "L":
-            msg = "Image mode must be L"
-            raise ValueError(msg)
-        return _imagingmorph.get_on_pixels(image.getim())
-
-    def load_lut(self, filename: str) -> None:
-        """Load an operator from an mrl file"""
-        with open(filename, "rb") as f:
-            self.lut = bytearray(f.read())
-
-        if len(self.lut) != LUT_SIZE:
-            self.lut = None
-            msg = "Wrong size operator file!"
-            raise Exception(msg)
-
-    def save_lut(self, filename: str) -> None:
-        """Save an operator to an mrl file"""
-        if self.lut is None:
-            msg = "No operator loaded"
-            raise Exception(msg)
-        with open(filename, "wb") as f:
-            f.write(self.lut)
-
-    def set_lut(self, lut: bytearray | None) -> None:
-        """Set the lut from an external source"""
-        self.lut = lut
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageOps.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageOps.py
deleted file mode 100644
index 42b10bd7..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/ImageOps.py
+++ /dev/null
@@ -1,746 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# standard image operations
-#
-# History:
-# 2001-10-20 fl Created
-# 2001-10-23 fl Added autocontrast operator
-# 2001-12-18 fl Added Kevin's fit operator
-# 2004-03-14 fl Fixed potential division by zero in equalize
-# 2005-05-05 fl Fixed equalize for low number of values
-#
-# Copyright (c) 2001-2004 by Secret Labs AB
-# Copyright (c) 2001-2004 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-from __future__ import annotations
-
-import functools
-import operator
-import re
-from collections.abc import Sequence
-from typing import Literal, Protocol, cast, overload
-
-from . import ExifTags, Image, ImagePalette
-
-#
-# helpers
-
-
-def _border(border: int | tuple[int, ...]) -> tuple[int, int, int, int]:
-    if isinstance(border, tuple):
-        if len(border) == 2:
-            left, top = right, bottom = border
-        elif len(border) == 4:
-            left, top, right, bottom = border
-    else:
-        left = top = right = bottom = border
-    return left, top, right, bottom
-
-
-def _color(color: str | int | tuple[int, ...], mode: str) -> int | tuple[int, ...]:
-    if isinstance(color, str):
-        from . import ImageColor
-
-        color = ImageColor.getcolor(color, mode)
-    return color
-
-
-def _lut(image: Image.Image, lut: list[int]) -> Image.Image:
-    if image.mode == "P":
-        # FIXME: apply to lookup table, not image data
-        msg = "mode P support coming soon"
-        raise NotImplementedError(msg)
-    elif image.mode in ("L", "RGB"):
-        if image.mode == "RGB" and len(lut) == 256:
-            lut = lut + lut + lut
-        return image.point(lut)
-    else:
-        msg = f"not supported for mode {image.mode}"
-        raise OSError(msg)
-
-
-#
-# actions
-
-
-def autocontrast(
-    image: Image.Image,
-    cutoff: float | tuple[float, float] = 0,
-    ignore: int | Sequence[int] | None = None,
-    mask: Image.Image | None = None,
-    preserve_tone: bool = False,
-) -> Image.Image:
-    """
-    Maximize (normalize) image contrast. This function calculates a
-    histogram of the input image (or mask region), removes ``cutoff`` percent of the
-    lightest and darkest pixels from the histogram, and remaps the image
-    so that the darkest pixel becomes black (0), and the lightest
-    becomes white (255).
-
-    :param image: The image to process.
-    :param cutoff: The percent to cut off from the histogram on the low and
-                   high ends. Either a tuple of (low, high), or a single
-                   number for both.
-    :param ignore: The background pixel value (use None for no background).
-    :param mask: Histogram used in contrast operation is computed using pixels
-                 within the mask. If no mask is given the entire image is used
-                 for histogram computation.
-    :param preserve_tone: Preserve image tone in Photoshop-like style autocontrast.
-
-                          .. versionadded:: 8.2.0
-
-    :return: An image.
-    """
-    if preserve_tone:
-        histogram = image.convert("L").histogram(mask)
-    else:
-        histogram = image.histogram(mask)
-
-    lut = []
-    for layer in range(0, len(histogram), 256):
-        h = histogram[layer : layer + 256]
-        if ignore is not None:
-            # get rid of outliers
-            if isinstance(ignore, int):
-                h[ignore] = 0
-            else:
-                for ix in ignore:
-                    h[ix] = 0
-        if cutoff:
-            # cut off pixels from both ends of the histogram
-            if not isinstance(cutoff, tuple):
-                cutoff = (cutoff, cutoff)
-            # get number of pixels
-            n = 0
-            for ix in range(256):
-                n = n + h[ix]
-            # remove cutoff% pixels from the low end
-            cut = int(n * cutoff[0] // 100)
-            for lo in range(256):
-                if cut > h[lo]:
-                    cut = cut - h[lo]
-                    h[lo] = 0
-                else:
-                    h[lo] -= cut
-                    cut = 0
-                if cut <= 0:
-                    break
-            # remove cutoff% samples from the high end
-            cut = int(n * cutoff[1] // 100)
-            for hi in range(255, -1, -1):
-                if cut > h[hi]:
-                    cut = cut - h[hi]
-                    h[hi] = 0
-                else:
-                    h[hi] -= cut
-                    cut = 0
-                if cut <= 0:
-                    break
-        # find lowest/highest samples after preprocessing
-        for lo in range(256):
-            if h[lo]:
-                break
-        for hi in range(255, -1, -1):
-            if h[hi]:
-                break
-        if hi <= lo:
-            # don't bother
-            lut.extend(list(range(256)))
-        else:
-            scale = 255.0 / (hi - lo)
-            offset = -lo * scale
-            for ix in range(256):
-                ix = int(ix * scale + offset)
-                if ix < 0:
-                    ix = 0
-                elif ix > 255:
-                    ix = 255
-                lut.append(ix)
-    return _lut(image, lut)
-
-
-def colorize(
-    image: Image.Image,
-    black: str | tuple[int, ...],
-    white: str | tuple[int, ...],
-    mid: str | int | tuple[int, ...] | None = None,
-    blackpoint: int = 0,
-    whitepoint: int = 255,
-    midpoint: int = 127,
-) -> Image.Image:
-    """
-    Colorize grayscale image.
-    This function calculates a color wedge which maps all black pixels in
-    the source image to the first color and all white pixels to the
-    second color. If ``mid`` is specified, it uses three-color mapping.
-    The ``black`` and ``white`` arguments should be RGB tuples or color names;
-    optionally you can use three-color mapping by also specifying ``mid``.
-    Mapping positions for any of the colors can be specified
-    (e.g. ``blackpoint``), where these parameters are the integer
-    value corresponding to where the corresponding color should be mapped.
-    These parameters must have logical order, such that
-    ``blackpoint <= midpoint <= whitepoint`` (if ``mid`` is specified).
-
-    :param image: The image to colorize.
-    :param black: The color to use for black input pixels.
-    :param white: The color to use for white input pixels.
-    :param mid: The color to use for midtone input pixels.
-    :param blackpoint: an int value [0, 255] for the black mapping.
-    :param whitepoint: an int value [0, 255] for the white mapping.
-    :param midpoint: an int value [0, 255] for the midtone mapping.
-    :return: An image.
-    """
-
-    # Initial asserts
-    assert image.mode == "L"
-    if mid is None:
-        assert 0 <= blackpoint <= whitepoint <= 255
-    else:
-        assert 0 <= blackpoint <= midpoint <= whitepoint <= 255
-
-    # Define colors from arguments
-    rgb_black = cast(Sequence[int], _color(black, "RGB"))
-    rgb_white = cast(Sequence[int], _color(white, "RGB"))
-    rgb_mid = cast(Sequence[int], _color(mid, "RGB")) if mid is not None else None
-
-    # Empty lists for the mapping
-    red = []
-    green = []
-    blue = []
-
-    # Create the low-end values
-    for i in range(blackpoint):
-        red.append(rgb_black[0])
-        green.append(rgb_black[1])
-        blue.append(rgb_black[2])
-
-    # Create the mapping (2-color)
-    if rgb_mid is None:
-        range_map = range(whitepoint - blackpoint)
-
-        for i in range_map:
-            red.append(
-                rgb_black[0] + i * (rgb_white[0] - rgb_black[0]) // len(range_map)
-            )
-            green.append(
-                rgb_black[1] + i * (rgb_white[1] - rgb_black[1]) // len(range_map)
-            )
-            blue.append(
-                rgb_black[2] + i * (rgb_white[2] - rgb_black[2]) // len(range_map)
-            )
-
-    # Create the mapping (3-color)
-    else:
-        range_map1 = range(midpoint - blackpoint)
-        range_map2 = range(whitepoint - midpoint)
-
-        for i in range_map1:
-            red.append(
-                rgb_black[0] + i * (rgb_mid[0] - rgb_black[0]) // len(range_map1)
-            )
-            green.append(
-                rgb_black[1] + i * (rgb_mid[1] - rgb_black[1]) // len(range_map1)
-            )
-            blue.append(
-                rgb_black[2] + i * (rgb_mid[2] - rgb_black[2]) // len(range_map1)
-            )
-        for i in range_map2:
-            red.append(rgb_mid[0] + i * (rgb_white[0] - rgb_mid[0]) // len(range_map2))
-            green.append(
-                rgb_mid[1] + i * (rgb_white[1] - rgb_mid[1]) // len(range_map2)
-            )
-            blue.append(rgb_mid[2] + i * (rgb_white[2] - rgb_mid[2]) // len(range_map2))
-
-    # Create the high-end values
-    for i in range(256 - whitepoint):
-        red.append(rgb_white[0])
-        green.append(rgb_white[1])
-        blue.append(rgb_white[2])
-
-    # Return converted image
-    image = image.convert("RGB")
-    return _lut(image, red + green + blue)
-
-
-def contain(
-    image: Image.Image, size: tuple[int, int], method: int = Image.Resampling.BICUBIC
-) -> Image.Image:
-    """
-    Returns a resized version of the image, set to the maximum width and height
-    within the requested size, while maintaining the original aspect ratio.
-
-    :param image: The image to resize.
-    :param size: The requested output size in pixels, given as a
-                 (width, height) tuple.
-    :param method: Resampling method to use. Default is
-                   :py:attr:`~PIL.Image.Resampling.BICUBIC`.
-                   See :ref:`concept-filters`.
-    :return: An image.
-    """
-
-    im_ratio = image.width / image.height
-    dest_ratio = size[0] / size[1]
-
-    if im_ratio != dest_ratio:
-        if im_ratio > dest_ratio:
-            new_height = round(image.height / image.width * size[0])
-            if new_height != size[1]:
-                size = (size[0], new_height)
-        else:
-            new_width = round(image.width / image.height * size[1])
-            if new_width != size[0]:
-                size = (new_width, size[1])
-    return image.resize(size, resample=method)
-
-
-def cover(
-    image: Image.Image, size: tuple[int, int], method: int = Image.Resampling.BICUBIC
-) -> Image.Image:
-    """
-    Returns a resized version of the image, so that the requested size is
-    covered, while maintaining the original aspect ratio.
-
-    :param image: The image to resize.
-    :param size: The requested output size in pixels, given as a
-                 (width, height) tuple.
-    :param method: Resampling method to use. Default is
-                   :py:attr:`~PIL.Image.Resampling.BICUBIC`.
-                   See :ref:`concept-filters`.
-    :return: An image.
-    """
-
-    im_ratio = image.width / image.height
-    dest_ratio = size[0] / size[1]
-
-    if im_ratio != dest_ratio:
-        if im_ratio < dest_ratio:
-            new_height = round(image.height / image.width * size[0])
-            if new_height != size[1]:
-                size = (size[0], new_height)
-        else:
-            new_width = round(image.width / image.height * size[1])
-            if new_width != size[0]:
-                size = (new_width, size[1])
-    return image.resize(size, resample=method)
-
-
-def pad(
-    image: Image.Image,
-    size: tuple[int, int],
-    method: int = Image.Resampling.BICUBIC,
-    color: str | int | tuple[int, ...] | None = None,
-    centering: tuple[float, float] = (0.5, 0.5),
-) -> Image.Image:
-    """
-    Returns a resized and padded version of the image, expanded to fill the
-    requested aspect ratio and size.
-
-    :param image: The image to resize and crop.
-    :param size: The requested output size in pixels, given as a
-                 (width, height) tuple.
-    :param method: Resampling method to use. Default is
-                   :py:attr:`~PIL.Image.Resampling.BICUBIC`.
-                   See :ref:`concept-filters`.
-    :param color: The background color of the padded image.
-    :param centering: Control the position of the original image within the
-                      padded version.
-
-                          (0.5, 0.5) will keep the image centered
-                          (0, 0) will keep the image aligned to the top left
-                          (1, 1) will keep the image aligned to the bottom
-                          right
-    :return: An image.
-    """
-
-    resized = contain(image, size, method)
-    if resized.size == size:
-        out = resized
-    else:
-        out = Image.new(image.mode, size, color)
-        if resized.palette:
-            palette = resized.getpalette()
-            if palette is not None:
-                out.putpalette(palette)
-        if resized.width != size[0]:
-            x = round((size[0] - resized.width) * max(0, min(centering[0], 1)))
-            out.paste(resized, (x, 0))
-        else:
-            y = round((size[1] - resized.height) * max(0, min(centering[1], 1)))
-            out.paste(resized, (0, y))
-    return out
-
-
-def crop(image: Image.Image, border: int = 0) -> Image.Image:
-    """
-    Remove border from image. The same amount of pixels are removed
-    from all four sides. This function works on all image modes.
-
-    .. seealso:: :py:meth:`~PIL.Image.Image.crop`
-
-    :param image: The image to crop.
-    :param border: The number of pixels to remove.
-    :return: An image.
-    """
-    left, top, right, bottom = _border(border)
-    return image.crop((left, top, image.size[0] - right, image.size[1] - bottom))
-
-
-def scale(
-    image: Image.Image, factor: float, resample: int = Image.Resampling.BICUBIC
-) -> Image.Image:
-    """
-    Returns a rescaled image by a specific factor given in parameter.
-    A factor greater than 1 expands the image, between 0 and 1 contracts the
-    image.
-
-    :param image: The image to rescale.
-    :param factor: The expansion factor, as a float.
-    :param resample: Resampling method to use. Default is
-                     :py:attr:`~PIL.Image.Resampling.BICUBIC`.
-                     See :ref:`concept-filters`.
-    :returns: An :py:class:`~PIL.Image.Image` object.
-    """
-    if factor == 1:
-        return image.copy()
-    elif factor <= 0:
-        msg = "the factor must be greater than 0"
-        raise ValueError(msg)
-    else:
-        size = (round(factor * image.width), round(factor * image.height))
-        return image.resize(size, resample)
-
-
-class SupportsGetMesh(Protocol):
-    """
-    An object that supports the ``getmesh`` method, taking an image as an
-    argument, and returning a list of tuples. Each tuple contains two tuples,
-    the source box as a tuple of 4 integers, and a tuple of 8 integers for the
-    final quadrilateral, in order of top left, bottom left, bottom right, top
-    right.
-    """
-
-    def getmesh(
-        self, image: Image.Image
-    ) -> list[
-        tuple[tuple[int, int, int, int], tuple[int, int, int, int, int, int, int, int]]
-    ]: ...
-
-
-def deform(
-    image: Image.Image,
-    deformer: SupportsGetMesh,
-    resample: int = Image.Resampling.BILINEAR,
-) -> Image.Image:
-    """
-    Deform the image.
-
-    :param image: The image to deform.
-    :param deformer: A deformer object. Any object that implements a
-                     ``getmesh`` method can be used.
-    :param resample: An optional resampling filter. Same values possible as
-                     in the PIL.Image.transform function.
-    :return: An image.
-    """
-    return image.transform(
-        image.size, Image.Transform.MESH, deformer.getmesh(image), resample
-    )
-
-
-def equalize(image: Image.Image, mask: Image.Image | None = None) -> Image.Image:
-    """
-    Equalize the image histogram. This function applies a non-linear
-    mapping to the input image, in order to create a uniform
-    distribution of grayscale values in the output image.
-
-    :param image: The image to equalize.
-    :param mask: An optional mask. If given, only the pixels selected by
-                 the mask are included in the analysis.
-    :return: An image.
-    """
-    if image.mode == "P":
-        image = image.convert("RGB")
-    h = image.histogram(mask)
-    lut = []
-    for b in range(0, len(h), 256):
-        histo = [_f for _f in h[b : b + 256] if _f]
-        if len(histo) <= 1:
-            lut.extend(list(range(256)))
-        else:
-            step = (functools.reduce(operator.add, histo) - histo[-1]) // 255
-            if not step:
-                lut.extend(list(range(256)))
-            else:
-                n = step // 2
-                for i in range(256):
-                    lut.append(n // step)
-                    n = n + h[i + b]
-    return _lut(image, lut)
-
-
-def expand(
-    image: Image.Image,
-    border: int | tuple[int, ...] = 0,
-    fill: str | int | tuple[int, ...] = 0,
-) -> Image.Image:
-    """
-    Add border to the image
-
-    :param image: The image to expand.
-    :param border: Border width, in pixels.
-    :param fill: Pixel fill value (a color value). Default is 0 (black).
-    :return: An image.
-    """
-    left, top, right, bottom = _border(border)
-    width = left + image.size[0] + right
-    height = top + image.size[1] + bottom
-    color = _color(fill, image.mode)
-    if image.palette:
-        mode = image.palette.mode
-        palette = ImagePalette.ImagePalette(mode, image.getpalette(mode))
-        if isinstance(color, tuple) and (len(color) == 3 or len(color) == 4):
-            color = palette.getcolor(color)
-    else:
-        palette = None
-    out = Image.new(image.mode, (width, height), color)
-    if palette:
-        out.putpalette(palette.palette, mode)
-    out.paste(image, (left, top))
-    return out
-
-
-def fit(
-    image: Image.Image,
-    size: tuple[int, int],
-    method: int = Image.Resampling.BICUBIC,
-    bleed: float = 0.0,
-    centering: tuple[float, float] = (0.5, 0.5),
-) -> Image.Image:
-    """
-    Returns a resized and cropped version of the image, cropped to the
-    requested aspect ratio and size.
-
-    This function was contributed by Kevin Cazabon.
-
-    :param image: The image to resize and crop.
-    :param size: The requested output size in pixels, given as a
-                 (width, height) tuple.
-    :param method: Resampling method to use. Default is
-                   :py:attr:`~PIL.Image.Resampling.BICUBIC`.
-                   See :ref:`concept-filters`.
-    :param bleed: Remove a border around the outside of the image from all
-                  four edges. The value is a decimal percentage (use 0.01 for
-                  one percent). The default value is 0 (no border).
-                  Cannot be greater than or equal to 0.5.
-    :param centering: Control the cropping position. Use (0.5, 0.5) for
-                      center cropping (e.g. if cropping the width, take 50% off
-                      of the left side, and therefore 50% off the right side).
-                      (0.0, 0.0) will crop from the top left corner (i.e. if
-                      cropping the width, take all of the crop off of the right
-                      side, and if cropping the height, take all of it off the
-                      bottom). (1.0, 0.0) will crop from the bottom left
-                      corner, etc. (i.e. if cropping the width, take all of the
-                      crop off the left side, and if cropping the height take
-                      none from the top, and therefore all off the bottom).
-    :return: An image.
-    """
-
-    # by Kevin Cazabon, Feb 17/2000
-    # kevin@cazabon.com
-    # https://www.cazabon.com
-
-    centering_x, centering_y = centering
-
-    if not 0.0 <= centering_x <= 1.0:
-        centering_x = 0.5
-    if not 0.0 <= centering_y <= 1.0:
-        centering_y = 0.5
-
-    if not 0.0 <= bleed < 0.5:
-        bleed = 0.0
-
-    # calculate the area to use for resizing and cropping, subtracting
-    # the 'bleed' around the edges
-
-    # number of pixels to trim off on Top and Bottom, Left and Right
-    bleed_pixels = (bleed * image.size[0], bleed * image.size[1])
-
-    live_size = (
-        image.size[0] - bleed_pixels[0] * 2,
-        image.size[1] - bleed_pixels[1] * 2,
-    )
-
-    # calculate the aspect ratio of the live_size
-    live_size_ratio = live_size[0] / live_size[1]
-
-    # calculate the aspect ratio of the output image
-    output_ratio = size[0] / size[1]
-
-    # figure out if the sides or top/bottom will be cropped off
-    if live_size_ratio == output_ratio:
-        # live_size is already the needed ratio
-        crop_width = live_size[0]
-        crop_height = live_size[1]
-    elif live_size_ratio >= output_ratio:
-        # live_size is wider than what's needed, crop the sides
-        crop_width = output_ratio * live_size[1]
-        crop_height = live_size[1]
-    else:
-        # live_size is taller than what's needed, crop the top and bottom
-        crop_width = live_size[0]
-        crop_height = live_size[0] / output_ratio
-
-    # make the crop
-    crop_left = bleed_pixels[0] + (live_size[0] - crop_width) * centering_x
-    crop_top = bleed_pixels[1] + (live_size[1] - crop_height) * centering_y
-
-    crop = (crop_left, crop_top, crop_left + crop_width, crop_top + crop_height)
-
-    # resize the image and return it
-    return image.resize(size, method, box=crop)
-
-
-def flip(image: Image.Image) -> Image.Image:
-    """
-    Flip the image vertically (top to bottom).
-
-    :param image: The image to flip.
-    :return: An image.
-    """
-    return image.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
-
-
-def grayscale(image: Image.Image) -> Image.Image:
-    """
-    Convert the image to grayscale.
-
-    :param image: The image to convert.
-    :return: An image.
-    """
-    return image.convert("L")
-
-
-def invert(image: Image.Image) -> Image.Image:
-    """
-    Invert (negate) the image.
-
-    :param image: The image to invert.
-    :return: An image.
-    """
-    lut = list(range(255, -1, -1))
-    return image.point(lut) if image.mode == "1" else _lut(image, lut)
-
-
-def mirror(image: Image.Image) -> Image.Image:
-    """
-    Flip image horizontally (left to right).
-
-    :param image: The image to mirror.
-    :return: An image.
-    """
-    return image.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
-
-
-def posterize(image: Image.Image, bits: int) -> Image.Image:
-    """
-    Reduce the number of bits for each color channel.
-
-    :param image: The image to posterize.
-    :param bits: The number of bits to keep for each channel (1-8).
-    :return: An image.
-    """
-    mask = ~(2 ** (8 - bits) - 1)
-    lut = [i & mask for i in range(256)]
-    return _lut(image, lut)
-
-
-def solarize(image: Image.Image, threshold: int = 128) -> Image.Image:
-    """
-    Invert all pixel values above a threshold.
-
-    :param image: The image to solarize.
-    :param threshold: All pixels above this grayscale level are inverted.
-    :return: An image.
-    """
-    lut = []
-    for i in range(256):
-        if i < threshold:
-            lut.append(i)
-        else:
-            lut.append(255 - i)
-    return _lut(image, lut)
-
-
-@overload
-def exif_transpose(image: Image.Image, *, in_place: Literal[True]) -> None: ...
-
-
-@overload
-def exif_transpose(
-    image: Image.Image, *, in_place: Literal[False] = False
-) -> Image.Image: ...
-
-
-def exif_transpose(image: Image.Image, *, in_place: bool = False) -> Image.Image | None:
-    """
-    If an image has an EXIF Orientation tag, other than 1, transpose the image
-    accordingly, and remove the orientation data.
-
-    :param image: The image to transpose.
-    :param in_place: Boolean. Keyword-only argument.
-        If ``True``, the original image is modified in-place, and ``None`` is returned.
-        If ``False`` (default), a new :py:class:`~PIL.Image.Image` object is returned
-        with the transposition applied. If there is no transposition, a copy of the
-        image will be returned.
-    """
-    image.load()
-    image_exif = image.getexif()
-    orientation = image_exif.get(ExifTags.Base.Orientation, 1)
-    method = {
-        2: Image.Transpose.FLIP_LEFT_RIGHT,
-        3: Image.Transpose.ROTATE_180,
-        4: Image.Transpose.FLIP_TOP_BOTTOM,
-        5: Image.Transpose.TRANSPOSE,
-        6: Image.Transpose.ROTATE_270,
-        7: Image.Transpose.TRANSVERSE,
-        8: Image.Transpose.ROTATE_90,
-    }.get(orientation)
-    if method is not None:
-        if in_place:
-            image.im = image.im.transpose(method)
-            image._size = image.im.size
-        else:
-            transposed_image = image.transpose(method)
-        exif_image = image if in_place else transposed_image
-
-        exif = exif_image.getexif()
-        if ExifTags.Base.Orientation in exif:
-            del exif[ExifTags.Base.Orientation]
-            if "exif" in exif_image.info:
-                exif_image.info["exif"] = exif.tobytes()
-            elif "Raw profile type exif" in exif_image.info:
-                exif_image.info["Raw profile type exif"] = exif.tobytes().hex()
-            for key in ("XML:com.adobe.xmp", "xmp"):
-                if key in exif_image.info:
-                    for pattern in (
-                        r'tiff:Orientation="([0-9])"',
-                        r"<tiff:Orientation>([0-9])</tiff:Orientation>",
-                    ):
-                        value = exif_image.info[key]
-                        if isinstance(value, str):
-                            value = re.sub(pattern, "", value)
-                        elif isinstance(value, tuple):
-                            value = tuple(
-                                re.sub(pattern.encode(), b"", v) for v in value
-                            )
-                        else:
-                            value = re.sub(pattern.encode(), b"", value)
-                        exif_image.info[key] = value
-        if not in_place:
-            return transposed_image
-    elif not in_place:
-        return image.copy()
-    return None
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImagePalette.py b/pptx-env/lib/python3.12/site-packages/PIL/ImagePalette.py
deleted file mode 100644
index 10369711..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/ImagePalette.py
+++ /dev/null
@@ -1,286 +0,0 @@
-#
-# The Python Imaging Library.
-# $Id$
-#
-# image palette object
-#
-# History:
-# 1996-03-11 fl Rewritten.
-# 1997-01-03 fl Up and running.
-# 1997-08-23 fl Added load hack
-# 2001-04-16 fl Fixed randint shadow bug in random()
-#
-# Copyright (c) 1997-2001 by Secret Labs AB
-# Copyright (c) 1996-1997 by Fredrik Lundh
-#
-# See the README file for information on usage and redistribution.
-#
-from __future__ import annotations
-
-import array
-from collections.abc import Sequence
-from typing import IO
-
-from . import GimpGradientFile, GimpPaletteFile, ImageColor, PaletteFile
-
-TYPE_CHECKING = False
-if TYPE_CHECKING:
-    from . import Image
-
-
-class ImagePalette:
-    """
-    Color palette for palette mapped images
-
-    :param mode: The mode to use for the palette. See:
-        :ref:`concept-modes`. Defaults to "RGB"
-    :param palette: An optional palette. If given, it must be a bytearray,
-        an array or a list of ints between 0-255. The list must consist of
-        all channels for one color followed by the next color (e.g. RGBRGBRGB).
-        Defaults to an empty palette.
-    """
-
-    def __init__(
-        self,
-        mode: str = "RGB",
-        palette: Sequence[int] | bytes | bytearray | None = None,
-    ) -> None:
-        self.mode = mode
-        self.rawmode: str | None = None  # if set, palette contains raw data
-        self.palette = palette or bytearray()
-        self.dirty: int | None = None
-
-    @property
-    def palette(self) -> Sequence[int] | bytes | bytearray:
-        return self._palette
-
-    @palette.setter
-    def palette(self, palette: Sequence[int] | bytes | bytearray) -> None:
-        self._colors: dict[tuple[int, ...], int] | None = None
-        self._palette = palette
-
-    @property
-    def colors(self) -> dict[tuple[int, ...], int]:
-        if self._colors is None:
-            mode_len = len(self.mode)
-            self._colors = {}
-            for i in range(0, len(self.palette), mode_len):
-                color = tuple(self.palette[i : i + mode_len])
-                if color in self._colors:
-                    continue
-                self._colors[color] = i // mode_len
-        return self._colors
-
-    @colors.setter
-    def colors(self, colors: dict[tuple[int, ...], int]) -> None:
-        self._colors = colors
-
-    def copy(self) -> ImagePalette:
-        new = ImagePalette()
-
-        new.mode = self.mode
-        new.rawmode = self.rawmode
-        if self.palette is not None:
-            new.palette = self.palette[:]
-        new.dirty = self.dirty
-
-        return new
-
-    def getdata(self) -> tuple[str, Sequence[int] | bytes | bytearray]:
-        """
-        Get palette contents in format suitable for the low-level
-        ``im.putpalette`` primitive.
-
-        .. warning:: This method is experimental.
-        """
-        if self.rawmode:
-            return self.rawmode, self.palette
-        return self.mode, self.tobytes()
-
-    def tobytes(self) -> bytes:
-        """Convert palette to bytes.
-
-        .. warning:: This method is experimental.
-        """
-        if self.rawmode:
-            msg = "palette contains raw palette data"
-            raise ValueError(msg)
-        if isinstance(self.palette, bytes):
-            return self.palette
-        arr = array.array("B", self.palette)
-        return arr.tobytes()
-
-    # Declare tostring as an alias for tobytes
-    tostring = tobytes
-
-    def _new_color_index(
-        self, image: Image.Image | None = None, e: Exception | None = None
-    ) -> int:
-        if not isinstance(self.palette, bytearray):
-            self._palette = bytearray(self.palette)
-        index = len(self.palette) // 3
-        special_colors: tuple[int | tuple[int, ...] | None, ...] = ()
-        if image:
-            special_colors = (
-                image.info.get("background"),
-                image.info.get("transparency"),
-            )
-            while index in special_colors:
-                index += 1
-        if index >= 256:
-            if image:
-                # Search for an unused index
-                for i, count in reversed(list(enumerate(image.histogram()))):
-                    if count == 0 and i not in special_colors:
-                        index = i
-                        break
-            if index >= 256:
-                msg = "cannot allocate more than 256 colors"
-                raise ValueError(msg) from e
-        return index
-
-    def getcolor(
-        self,
-        color: tuple[int, ...],
-        image: Image.Image | None = None,
-    ) -> int:
-        """Given an rgb tuple, allocate palette entry.
-
-        .. warning:: This method is experimental.
-        """
-        if self.rawmode:
-            msg = "palette contains raw palette data"
-            raise ValueError(msg)
-        if isinstance(color, tuple):
-            if self.mode == "RGB":
-                if len(color) == 4:
-                    if color[3] != 255:
-                        msg = "cannot add non-opaque RGBA color to RGB palette"
-                        raise ValueError(msg)
-                    color = color[:3]
-            elif self.mode == "RGBA":
-                if len(color) == 3:
-                    color += (255,)
-            try:
-                return self.colors[color]
-            except KeyError as e:
-                # allocate new color slot
-                index = self._new_color_index(image, e)
-                assert isinstance(self._palette, bytearray)
-                self.colors[color] = index
-                if index * 3 < len(self.palette):
-                    self._palette = (
-                        self._palette[: index * 3]
-                        + bytes(color)
-                        + self._palette[index * 3 + 3 :]
-                    )
-                else:
-                    self._palette += bytes(color)
-                self.dirty = 1
-                return index
-        else:
-            msg = f"unknown color specifier: {repr(color)}"  # type: ignore[unreachable]
-            raise ValueError(msg)
-
-    def save(self, fp: str | IO[str]) -> None:
-        """Save palette to text file.
-
-        .. warning:: This method is experimental.
-        """
-        if self.rawmode:
-            msg = "palette contains raw palette data"
-            raise ValueError(msg)
-        if isinstance(fp, str):
-            fp = open(fp, "w")
-        fp.write("# Palette\n")
-        fp.write(f"# Mode: {self.mode}\n")
-        for i in range(256):
-            fp.write(f"{i}")
-            for j in range(i * len(self.mode), (i + 1) * len(self.mode)):
-                try:
-                    fp.write(f" {self.palette[j]}")
-                except IndexError:
-                    fp.write(" 0")
-            fp.write("\n")
-        fp.close()
-
-
-# --------------------------------------------------------------------
-# Internal
-
-
-def raw(rawmode: str, data: Sequence[int] | bytes | bytearray) -> ImagePalette:
-    palette = ImagePalette()
-    palette.rawmode = rawmode
-    palette.palette = data
-    palette.dirty = 1
-    return palette
-
-
-# --------------------------------------------------------------------
-# Factories
-
-
-def make_linear_lut(black: int, white: float) -> list[int]:
-    if black == 0:
-        return [int(white * i // 255) for i in range(256)]
-
-    msg = "unavailable when black is non-zero"
-    raise NotImplementedError(msg)  # FIXME
-
-
-def make_gamma_lut(exp: float) -> list[int]:
-    return [int(((i / 255.0) ** exp) * 255.0 + 0.5) for i in range(256)]
-
-
-def negative(mode: str = "RGB") -> ImagePalette:
-    palette = list(range(256 * len(mode)))
-    palette.reverse()
-    return ImagePalette(mode, [i // len(mode) for i in palette])
-
-
-def random(mode: str = "RGB") -> ImagePalette:
-    from random import randint
-
-    palette = [randint(0, 255) for _ in range(256 * len(mode))]
-    return ImagePalette(mode, palette)
-
-
-def sepia(white: str = "#fff0c0") -> ImagePalette:
-    bands = [make_linear_lut(0, band) for band in ImageColor.getrgb(white)]
-    return ImagePalette("RGB", [bands[i % 3][i // 3] for i in range(256 * 3)])
-
-
-def wedge(mode: str = "RGB") -> ImagePalette:
-    palette = list(range(256 * len(mode)))
-    return ImagePalette(mode, [i // len(mode) for i in palette])
-
-
-def load(filename: str) -> tuple[bytes, str]:
-    # FIXME: supports GIMP gradients only
-
-    with open(filename, "rb") as fp:
paletteHandlers: list[ - type[ - GimpPaletteFile.GimpPaletteFile - | GimpGradientFile.GimpGradientFile - | PaletteFile.PaletteFile - ] - ] = [ - GimpPaletteFile.GimpPaletteFile, - GimpGradientFile.GimpGradientFile, - PaletteFile.PaletteFile, - ] - for paletteHandler in paletteHandlers: - try: - fp.seek(0) - lut = paletteHandler(fp).getpalette() - if lut: - break - except (SyntaxError, ValueError): - pass - else: - msg = "cannot load palette" - raise OSError(msg) - - return lut # data, rawmode diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImagePath.py b/pptx-env/lib/python3.12/site-packages/PIL/ImagePath.py deleted file mode 100644 index 77e8a609..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImagePath.py +++ /dev/null @@ -1,20 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# path interface -# -# History: -# 1996-11-04 fl Created -# 2002-04-14 fl Added documentation stub class -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image - -Path = Image.core.path diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageQt.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageQt.py deleted file mode 100644 index af4d0742..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageQt.py +++ /dev/null @@ -1,219 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# a simple Qt image interface. -# -# history: -# 2006-06-03 fl: created -# 2006-06-04 fl: inherit from QImage instead of wrapping it -# 2006-06-05 fl: removed toimage helper; move string support to ImageQt -# 2013-11-13 fl: add support for Qt5 (aurelien.ballier@cyclonit.com) -# -# Copyright (c) 2006 by Secret Labs AB -# Copyright (c) 2006 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import sys -from io import BytesIO - -from . 
import Image -from ._util import is_path - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from typing import Any - - from . import ImageFile - - QBuffer: type - -qt_version: str | None -qt_versions = [ - ["6", "PyQt6"], - ["side6", "PySide6"], -] - -# If a version has already been imported, attempt it first -qt_versions.sort(key=lambda version: version[1] in sys.modules, reverse=True) -for version, qt_module in qt_versions: - try: - qRgba: Callable[[int, int, int, int], int] - if qt_module == "PyQt6": - from PyQt6.QtCore import QBuffer, QByteArray, QIODevice - from PyQt6.QtGui import QImage, QPixmap, qRgba - elif qt_module == "PySide6": - from PySide6.QtCore import ( # type: ignore[assignment] - QBuffer, - QByteArray, - QIODevice, - ) - from PySide6.QtGui import QImage, QPixmap, qRgba # type: ignore[assignment] - except (ImportError, RuntimeError): - continue - qt_is_installed = True - qt_version = version - break -else: - qt_is_installed = False - qt_version = None - - -def rgb(r: int, g: int, b: int, a: int = 255) -> int: - """(Internal) Turns an RGB color into a Qt compatible color integer.""" - # use qRgb to pack the colors, and then turn the resulting long - # into a negative integer with the same bitpattern. 
- return qRgba(r, g, b, a) & 0xFFFFFFFF - - -def fromqimage(im: QImage | QPixmap) -> ImageFile.ImageFile: - """ - :param im: QImage or PIL ImageQt object - """ - buffer = QBuffer() - qt_openmode: object - if qt_version == "6": - try: - qt_openmode = getattr(QIODevice, "OpenModeFlag") - except AttributeError: - qt_openmode = getattr(QIODevice, "OpenMode") - else: - qt_openmode = QIODevice - buffer.open(getattr(qt_openmode, "ReadWrite")) - # preserve alpha channel with png - # otherwise ppm is more friendly with Image.open - if im.hasAlphaChannel(): - im.save(buffer, "png") - else: - im.save(buffer, "ppm") - - b = BytesIO() - b.write(buffer.data()) - buffer.close() - b.seek(0) - - return Image.open(b) - - -def fromqpixmap(im: QPixmap) -> ImageFile.ImageFile: - return fromqimage(im) - - -def align8to32(bytes: bytes, width: int, mode: str) -> bytes: - """ - converts each scanline of data from 8 bit to 32 bit aligned - """ - - bits_per_pixel = {"1": 1, "L": 8, "P": 8, "I;16": 16}[mode] - - # calculate bytes per line and the extra padding if needed - bits_per_line = bits_per_pixel * width - full_bytes_per_line, remaining_bits_per_line = divmod(bits_per_line, 8) - bytes_per_line = full_bytes_per_line + (1 if remaining_bits_per_line else 0) - - extra_padding = -bytes_per_line % 4 - - # already 32 bit aligned by luck - if not extra_padding: - return bytes - - new_data = [ - bytes[i * bytes_per_line : (i + 1) * bytes_per_line] + b"\x00" * extra_padding - for i in range(len(bytes) // bytes_per_line) - ] - - return b"".join(new_data) - - -def _toqclass_helper(im: Image.Image | str | QByteArray) -> dict[str, Any]: - data = None - colortable = None - exclusive_fp = False - - # handle filename, if given instead of image name - if hasattr(im, "toUtf8"): - # FIXME - is this really the best way to do this? 
- im = str(im.toUtf8(), "utf-8") - if is_path(im): - im = Image.open(im) - exclusive_fp = True - assert isinstance(im, Image.Image) - - qt_format = getattr(QImage, "Format") if qt_version == "6" else QImage - if im.mode == "1": - format = getattr(qt_format, "Format_Mono") - elif im.mode == "L": - format = getattr(qt_format, "Format_Indexed8") - colortable = [rgb(i, i, i) for i in range(256)] - elif im.mode == "P": - format = getattr(qt_format, "Format_Indexed8") - palette = im.getpalette() - assert palette is not None - colortable = [rgb(*palette[i : i + 3]) for i in range(0, len(palette), 3)] - elif im.mode == "RGB": - # Populate the 4th channel with 255 - im = im.convert("RGBA") - - data = im.tobytes("raw", "BGRA") - format = getattr(qt_format, "Format_RGB32") - elif im.mode == "RGBA": - data = im.tobytes("raw", "BGRA") - format = getattr(qt_format, "Format_ARGB32") - elif im.mode == "I;16": - im = im.point(lambda i: i * 256) - - format = getattr(qt_format, "Format_Grayscale16") - else: - if exclusive_fp: - im.close() - msg = f"unsupported image mode {repr(im.mode)}" - raise ValueError(msg) - - size = im.size - __data = data or align8to32(im.tobytes(), size[0], im.mode) - if exclusive_fp: - im.close() - return {"data": __data, "size": size, "format": format, "colortable": colortable} - - -if qt_is_installed: - - class ImageQt(QImage): - def __init__(self, im: Image.Image | str | QByteArray) -> None: - """ - An PIL image wrapper for Qt. This is a subclass of PyQt's QImage - class. - - :param im: A PIL Image object, or a file name (given either as - Python string or a PyQt string object). - """ - im_data = _toqclass_helper(im) - # must keep a reference, or Qt will crash! - # All QImage constructors that take data operate on an existing - # buffer, so this buffer has to hang on for the life of the image. 
- # Fixes https://github.com/python-pillow/Pillow/issues/1370 - self.__data = im_data["data"] - super().__init__( - self.__data, - im_data["size"][0], - im_data["size"][1], - im_data["format"], - ) - if im_data["colortable"]: - self.setColorTable(im_data["colortable"]) - - -def toqimage(im: Image.Image | str | QByteArray) -> ImageQt: - return ImageQt(im) - - -def toqpixmap(im: Image.Image | str | QByteArray) -> QPixmap: - qimage = toqimage(im) - pixmap = getattr(QPixmap, "fromImage")(qimage) - if qt_version == "6": - pixmap.detach() - return pixmap diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageSequence.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageSequence.py deleted file mode 100644 index 361be489..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageSequence.py +++ /dev/null @@ -1,88 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# sequence support classes -# -# history: -# 1997-02-20 fl Created -# -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## -from __future__ import annotations - -from . import Image - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - - -class Iterator: - """ - This class implements an iterator object that can be used to loop - over an image sequence. - - You can use the ``[]`` operator to access elements by index. This operator - will raise an :py:exc:`IndexError` if you try to access a nonexistent - frame. - - :param im: An image object. 
- """ - - def __init__(self, im: Image.Image) -> None: - if not hasattr(im, "seek"): - msg = "im must have seek method" - raise AttributeError(msg) - self.im = im - self.position = getattr(self.im, "_min_frame", 0) - - def __getitem__(self, ix: int) -> Image.Image: - try: - self.im.seek(ix) - return self.im - except EOFError as e: - msg = "end of sequence" - raise IndexError(msg) from e - - def __iter__(self) -> Iterator: - return self - - def __next__(self) -> Image.Image: - try: - self.im.seek(self.position) - self.position += 1 - return self.im - except EOFError as e: - msg = "end of sequence" - raise StopIteration(msg) from e - - -def all_frames( - im: Image.Image | list[Image.Image], - func: Callable[[Image.Image], Image.Image] | None = None, -) -> list[Image.Image]: - """ - Applies a given function to all frames in an image or a list of images. - The frames are returned as a list of separate images. - - :param im: An image, or a list of images. - :param func: The function to apply to all of the image frames. - :returns: A list of images. - """ - if not isinstance(im, list): - im = [im] - - ims = [] - for imSequence in im: - current = imSequence.tell() - - ims += [im_frame.copy() for im_frame in Iterator(imSequence)] - - imSequence.seek(current) - return [func(im) for im in ims] if func else ims diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageShow.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageShow.py deleted file mode 100644 index 7705608e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageShow.py +++ /dev/null @@ -1,362 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# im.show() drivers -# -# History: -# 2008-04-06 fl Created -# -# Copyright (c) Secret Labs AB 2008. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import abc -import os -import shutil -import subprocess -import sys -from shlex import quote -from typing import Any - -from . 
import Image - -_viewers = [] - - -def register(viewer: type[Viewer] | Viewer, order: int = 1) -> None: - """ - The :py:func:`register` function is used to register additional viewers:: - - from PIL import ImageShow - ImageShow.register(MyViewer()) # MyViewer will be used as a last resort - ImageShow.register(MySecondViewer(), 0) # MySecondViewer will be prioritised - ImageShow.register(ImageShow.XVViewer(), 0) # XVViewer will be prioritised - - :param viewer: The viewer to be registered. - :param order: - Zero or a negative integer to prepend this viewer to the list, - a positive integer to append it. - """ - if isinstance(viewer, type) and issubclass(viewer, Viewer): - viewer = viewer() - if order > 0: - _viewers.append(viewer) - else: - _viewers.insert(0, viewer) - - -def show(image: Image.Image, title: str | None = None, **options: Any) -> bool: - r""" - Display a given image. - - :param image: An image object. - :param title: Optional title. Not all viewers can display the title. - :param \**options: Additional viewer options. - :returns: ``True`` if a suitable viewer was found, ``False`` otherwise. - """ - for viewer in _viewers: - if viewer.show(image, title=title, **options): - return True - return False - - -class Viewer: - """Base class for viewers.""" - - # main api - - def show(self, image: Image.Image, **options: Any) -> int: - """ - The main function for displaying an image. - Converts the given image to the target format and displays it. 
- """ - - if not ( - image.mode in ("1", "RGBA") - or (self.format == "PNG" and image.mode in ("I;16", "LA")) - ): - base = Image.getmodebase(image.mode) - if image.mode != base: - image = image.convert(base) - - return self.show_image(image, **options) - - # hook methods - - format: str | None = None - """The format to convert the image into.""" - options: dict[str, Any] = {} - """Additional options used to convert the image.""" - - def get_format(self, image: Image.Image) -> str | None: - """Return format name, or ``None`` to save as PGM/PPM.""" - return self.format - - def get_command(self, file: str, **options: Any) -> str: - """ - Returns the command used to display the file. - Not implemented in the base class. - """ - msg = "unavailable in base viewer" - raise NotImplementedError(msg) - - def save_image(self, image: Image.Image) -> str: - """Save to temporary file and return filename.""" - return image._dump(format=self.get_format(image), **self.options) - - def show_image(self, image: Image.Image, **options: Any) -> int: - """Display the given image.""" - return self.show_file(self.save_image(image), **options) - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - os.system(self.get_command(path, **options)) # nosec - return 1 - - -# -------------------------------------------------------------------- - - -class WindowsViewer(Viewer): - """The default viewer on Windows is the default system application for PNG files.""" - - format = "PNG" - options = {"compress_level": 1, "save_all": True} - - def get_command(self, file: str, **options: Any) -> str: - return ( - f'start "Pillow" /WAIT "{file}" ' - "&& ping -n 4 127.0.0.1 >NUL " - f'&& del /f "{file}"' - ) - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. 
- """ - if not os.path.exists(path): - raise FileNotFoundError - subprocess.Popen( - self.get_command(path, **options), - shell=True, - creationflags=getattr(subprocess, "CREATE_NO_WINDOW"), - ) # nosec - return 1 - - -if sys.platform == "win32": - register(WindowsViewer) - - -class MacViewer(Viewer): - """The default viewer on macOS using ``Preview.app``.""" - - format = "PNG" - options = {"compress_level": 1, "save_all": True} - - def get_command(self, file: str, **options: Any) -> str: - # on darwin open returns immediately resulting in the temp - # file removal while app is opening - command = "open -a Preview.app" - command = f"({command} {quote(file)}; sleep 20; rm -f {quote(file)})&" - return command - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - subprocess.call(["open", "-a", "Preview.app", path]) - - pyinstaller = getattr(sys, "frozen", False) and hasattr(sys, "_MEIPASS") - executable = (not pyinstaller and sys.executable) or shutil.which("python3") - if executable: - subprocess.Popen( - [ - executable, - "-c", - "import os, sys, time; time.sleep(20); os.remove(sys.argv[1])", - path, - ] - ) - return 1 - - -if sys.platform == "darwin": - register(MacViewer) - - -class UnixViewer(abc.ABC, Viewer): - format = "PNG" - options = {"compress_level": 1, "save_all": True} - - @abc.abstractmethod - def get_command_ex(self, file: str, **options: Any) -> tuple[str, str]: - pass - - def get_command(self, file: str, **options: Any) -> str: - command = self.get_command_ex(file, **options)[0] - return f"{command} {quote(file)}" - - -class XDGViewer(UnixViewer): - """ - The freedesktop.org ``xdg-open`` command. - """ - - def get_command_ex(self, file: str, **options: Any) -> tuple[str, str]: - command = executable = "xdg-open" - return command, executable - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. 
- """ - if not os.path.exists(path): - raise FileNotFoundError - subprocess.Popen(["xdg-open", path]) - return 1 - - -class DisplayViewer(UnixViewer): - """ - The ImageMagick ``display`` command. - This viewer supports the ``title`` parameter. - """ - - def get_command_ex( - self, file: str, title: str | None = None, **options: Any - ) -> tuple[str, str]: - command = executable = "display" - if title: - command += f" -title {quote(title)}" - return command, executable - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - args = ["display"] - title = options.get("title") - if title: - args += ["-title", title] - args.append(path) - - subprocess.Popen(args) - return 1 - - -class GmDisplayViewer(UnixViewer): - """The GraphicsMagick ``gm display`` command.""" - - def get_command_ex(self, file: str, **options: Any) -> tuple[str, str]: - executable = "gm" - command = "gm display" - return command, executable - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - subprocess.Popen(["gm", "display", path]) - return 1 - - -class EogViewer(UnixViewer): - """The GNOME Image Viewer ``eog`` command.""" - - def get_command_ex(self, file: str, **options: Any) -> tuple[str, str]: - executable = "eog" - command = "eog -n" - return command, executable - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - subprocess.Popen(["eog", "-n", path]) - return 1 - - -class XVViewer(UnixViewer): - """ - The X Viewer ``xv`` command. - This viewer supports the ``title`` parameter. - """ - - def get_command_ex( - self, file: str, title: str | None = None, **options: Any - ) -> tuple[str, str]: - # note: xv is pretty outdated. most modern systems have - # imagemagick's display command instead. 
- command = executable = "xv" - if title: - command += f" -name {quote(title)}" - return command, executable - - def show_file(self, path: str, **options: Any) -> int: - """ - Display given file. - """ - if not os.path.exists(path): - raise FileNotFoundError - args = ["xv"] - title = options.get("title") - if title: - args += ["-name", title] - args.append(path) - - subprocess.Popen(args) - return 1 - - -if sys.platform not in ("win32", "darwin"): # unixoids - if shutil.which("xdg-open"): - register(XDGViewer) - if shutil.which("display"): - register(DisplayViewer) - if shutil.which("gm"): - register(GmDisplayViewer) - if shutil.which("eog"): - register(EogViewer) - if shutil.which("xv"): - register(XVViewer) - - -class IPythonViewer(Viewer): - """The viewer for IPython frontends.""" - - def show_image(self, image: Image.Image, **options: Any) -> int: - ipython_display(image) - return 1 - - -try: - from IPython.display import display as ipython_display -except ImportError: - pass -else: - register(IPythonViewer) - - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 ImageShow.py imagefile [title]") - sys.exit() - - with Image.open(sys.argv[1]) as im: - print(show(im, *sys.argv[2:])) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageStat.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageStat.py deleted file mode 100644 index 3a1044ba..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageStat.py +++ /dev/null @@ -1,167 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# global image statistics -# -# History: -# 1996-04-05 fl Created -# 1997-05-21 fl Added mask; added rms, var, stddev attributes -# 1997-08-05 fl Added median -# 1998-07-05 hk Fixed integer overflow error -# -# Notes: -# This class shows how to implement delayed evaluation of attributes. -# To get a certain value, simply access the corresponding attribute. -# The __getattr__ dispatcher takes care of the rest. -# -# Copyright (c) Secret Labs AB 1997. 
-# Copyright (c) Fredrik Lundh 1996-97. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import math -from functools import cached_property - -from . import Image - - -class Stat: - def __init__( - self, image_or_list: Image.Image | list[int], mask: Image.Image | None = None - ) -> None: - """ - Calculate statistics for the given image. If a mask is included, - only the regions covered by that mask are included in the - statistics. You can also pass in a previously calculated histogram. - - :param image: A PIL image, or a precalculated histogram. - - .. note:: - - For a PIL image, calculations rely on the - :py:meth:`~PIL.Image.Image.histogram` method. The pixel counts are - grouped into 256 bins, even if the image has more than 8 bits per - channel. So ``I`` and ``F`` mode images have a maximum ``mean``, - ``median`` and ``rms`` of 255, and cannot have an ``extrema`` maximum - of more than 255. - - :param mask: An optional mask. - """ - if isinstance(image_or_list, Image.Image): - self.h = image_or_list.histogram(mask) - elif isinstance(image_or_list, list): - self.h = image_or_list - else: - msg = "first argument must be image or list" # type: ignore[unreachable] - raise TypeError(msg) - self.bands = list(range(len(self.h) // 256)) - - @cached_property - def extrema(self) -> list[tuple[int, int]]: - """ - Min/max values for each band in the image. - - .. note:: - This relies on the :py:meth:`~PIL.Image.Image.histogram` method, and - simply returns the low and high bins used. This is correct for - images with 8 bits per channel, but fails for other modes such as - ``I`` or ``F``. Instead, use :py:meth:`~PIL.Image.Image.getextrema` to - return per-band extrema for the image. This is more correct and - efficient because, for non-8-bit modes, the histogram method uses - :py:meth:`~PIL.Image.Image.getextrema` to determine the bins used. 
- """ - - def minmax(histogram: list[int]) -> tuple[int, int]: - res_min, res_max = 255, 0 - for i in range(256): - if histogram[i]: - res_min = i - break - for i in range(255, -1, -1): - if histogram[i]: - res_max = i - break - return res_min, res_max - - return [minmax(self.h[i:]) for i in range(0, len(self.h), 256)] - - @cached_property - def count(self) -> list[int]: - """Total number of pixels for each band in the image.""" - return [sum(self.h[i : i + 256]) for i in range(0, len(self.h), 256)] - - @cached_property - def sum(self) -> list[float]: - """Sum of all pixels for each band in the image.""" - - v = [] - for i in range(0, len(self.h), 256): - layer_sum = 0.0 - for j in range(256): - layer_sum += j * self.h[i + j] - v.append(layer_sum) - return v - - @cached_property - def sum2(self) -> list[float]: - """Squared sum of all pixels for each band in the image.""" - - v = [] - for i in range(0, len(self.h), 256): - sum2 = 0.0 - for j in range(256): - sum2 += (j**2) * float(self.h[i + j]) - v.append(sum2) - return v - - @cached_property - def mean(self) -> list[float]: - """Average (arithmetic mean) pixel level for each band in the image.""" - return [self.sum[i] / self.count[i] if self.count[i] else 0 for i in self.bands] - - @cached_property - def median(self) -> list[int]: - """Median pixel level for each band in the image.""" - - v = [] - for i in self.bands: - s = 0 - half = self.count[i] // 2 - b = i * 256 - for j in range(256): - s = s + self.h[b + j] - if s > half: - break - v.append(j) - return v - - @cached_property - def rms(self) -> list[float]: - """RMS (root-mean-square) for each band in the image.""" - return [ - math.sqrt(self.sum2[i] / self.count[i]) if self.count[i] else 0 - for i in self.bands - ] - - @cached_property - def var(self) -> list[float]: - """Variance for each band in the image.""" - return [ - ( - (self.sum2[i] - (self.sum[i] ** 2.0) / self.count[i]) / self.count[i] - if self.count[i] - else 0 - ) - for i in self.bands - ] - - 
@cached_property - def stddev(self) -> list[float]: - """Standard deviation for each band in the image.""" - return [math.sqrt(self.var[i]) for i in self.bands] - - -Global = Stat # compatibility diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageText.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageText.py deleted file mode 100644 index c74570e6..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageText.py +++ /dev/null @@ -1,318 +0,0 @@ -from __future__ import annotations - -from . import ImageFont -from ._typing import _Ink - - -class Text: - def __init__( - self, - text: str | bytes, - font: ( - ImageFont.ImageFont - | ImageFont.FreeTypeFont - | ImageFont.TransposedFont - | None - ) = None, - mode: str = "RGB", - spacing: float = 4, - direction: str | None = None, - features: list[str] | None = None, - language: str | None = None, - ) -> None: - """ - :param text: String to be drawn. - :param font: Either an :py:class:`~PIL.ImageFont.ImageFont` instance, - :py:class:`~PIL.ImageFont.FreeTypeFont` instance, - :py:class:`~PIL.ImageFont.TransposedFont` instance or ``None``. If - ``None``, the default font from :py:meth:`.ImageFont.load_default` - will be used. - :param mode: The image mode this will be used with. - :param spacing: The number of pixels between lines. - :param direction: Direction of the text. It can be ``"rtl"`` (right to left), - ``"ltr"`` (left to right) or ``"ttb"`` (top to bottom). - Requires libraqm. - :param features: A list of OpenType font features to be used during text - layout. This is usually used to turn on optional font features - that are not enabled by default, for example ``"dlig"`` or - ``"ss01"``, but can be also used to turn off default font - features, for example ``"-liga"`` to disable ligatures or - ``"-kern"`` to disable kerning. To get all supported - features, see `OpenType docs`_. - Requires libraqm. - :param language: Language of the text. 
Different languages may use - different glyph shapes or ligatures. This parameter tells - the font which language the text is in, and to apply the - correct substitutions as appropriate, if available. - It should be a `BCP 47 language code`_. - Requires libraqm. - """ - self.text = text - self.font = font or ImageFont.load_default() - - self.mode = mode - self.spacing = spacing - self.direction = direction - self.features = features - self.language = language - - self.embedded_color = False - - self.stroke_width: float = 0 - self.stroke_fill: _Ink | None = None - - def embed_color(self) -> None: - """ - Use embedded color glyphs (COLR, CBDT, SBIX). - """ - if self.mode not in ("RGB", "RGBA"): - msg = "Embedded color supported only in RGB and RGBA modes" - raise ValueError(msg) - self.embedded_color = True - - def stroke(self, width: float = 0, fill: _Ink | None = None) -> None: - """ - :param width: The width of the text stroke. - :param fill: Color to use for the text stroke when drawing. If not given, will - default to the ``fill`` parameter from - :py:meth:`.ImageDraw.ImageDraw.text`. - """ - self.stroke_width = width - self.stroke_fill = fill - - def _get_fontmode(self) -> str: - if self.mode in ("1", "P", "I", "F"): - return "1" - elif self.embedded_color: - return "RGBA" - else: - return "L" - - def get_length(self): - """ - Returns length (in pixels with 1/64 precision) of text. - - This is the amount by which following text should be offset. - Text bounding box may extend past the length in some fonts, - e.g. when using italics or accents. - - The result is returned as a float; it is a whole number if using basic layout. - - Note that the sum of two lengths may not equal the length of a concatenated - string due to kerning. If you need to adjust for kerning, include the following - character and subtract its length. 
- - For example, instead of:: - - hello = ImageText.Text("Hello", font).get_length() - world = ImageText.Text("World", font).get_length() - helloworld = ImageText.Text("HelloWorld", font).get_length() - assert hello + world == helloworld - - use:: - - hello = ( - ImageText.Text("HelloW", font).get_length() - - ImageText.Text("W", font).get_length() - ) # adjusted for kerning - world = ImageText.Text("World", font).get_length() - helloworld = ImageText.Text("HelloWorld", font).get_length() - assert hello + world == helloworld - - or disable kerning with (requires libraqm):: - - hello = ImageText.Text("Hello", font, features=["-kern"]).get_length() - world = ImageText.Text("World", font, features=["-kern"]).get_length() - helloworld = ImageText.Text( - "HelloWorld", font, features=["-kern"] - ).get_length() - assert hello + world == helloworld - - :return: Either width for horizontal text, or height for vertical text. - """ - split_character = "\n" if isinstance(self.text, str) else b"\n" - if split_character in self.text: - msg = "can't measure length of multiline text" - raise ValueError(msg) - return self.font.getlength( - self.text, - self._get_fontmode(), - self.direction, - self.features, - self.language, - ) - - def _split( - self, xy: tuple[float, float], anchor: str | None, align: str - ) -> list[tuple[tuple[float, float], str, str | bytes]]: - if anchor is None: - anchor = "lt" if self.direction == "ttb" else "la" - elif len(anchor) != 2: - msg = "anchor must be a 2 character string" - raise ValueError(msg) - - lines = ( - self.text.split("\n") - if isinstance(self.text, str) - else self.text.split(b"\n") - ) - if len(lines) == 1: - return [(xy, anchor, self.text)] - - if anchor[1] in "tb" and self.direction != "ttb": - msg = "anchor not supported for multiline text" - raise ValueError(msg) - - fontmode = self._get_fontmode() - line_spacing = ( - self.font.getbbox( - "A", - fontmode, - None, - self.features, - self.language, - self.stroke_width, - )[3] - + 
self.stroke_width - + self.spacing - ) - - top = xy[1] - parts = [] - if self.direction == "ttb": - left = xy[0] - for line in lines: - parts.append(((left, top), anchor, line)) - left += line_spacing - else: - widths = [] - max_width: float = 0 - for line in lines: - line_width = self.font.getlength( - line, fontmode, self.direction, self.features, self.language - ) - widths.append(line_width) - max_width = max(max_width, line_width) - - if anchor[1] == "m": - top -= (len(lines) - 1) * line_spacing / 2.0 - elif anchor[1] == "d": - top -= (len(lines) - 1) * line_spacing - - idx = -1 - for line in lines: - left = xy[0] - idx += 1 - width_difference = max_width - widths[idx] - - # align by align parameter - if align in ("left", "justify"): - pass - elif align == "center": - left += width_difference / 2.0 - elif align == "right": - left += width_difference - else: - msg = 'align must be "left", "center", "right" or "justify"' - raise ValueError(msg) - - if ( - align == "justify" - and width_difference != 0 - and idx != len(lines) - 1 - ): - words = ( - line.split(" ") if isinstance(line, str) else line.split(b" ") - ) - if len(words) > 1: - # align left by anchor - if anchor[0] == "m": - left -= max_width / 2.0 - elif anchor[0] == "r": - left -= max_width - - word_widths = [ - self.font.getlength( - word, - fontmode, - self.direction, - self.features, - self.language, - ) - for word in words - ] - word_anchor = "l" + anchor[1] - width_difference = max_width - sum(word_widths) - i = 0 - for word in words: - parts.append(((left, top), word_anchor, word)) - left += word_widths[i] + width_difference / (len(words) - 1) - i += 1 - top += line_spacing - continue - - # align left by anchor - if anchor[0] == "m": - left -= width_difference / 2.0 - elif anchor[0] == "r": - left -= width_difference - parts.append(((left, top), anchor, line)) - top += line_spacing - - return parts - - def get_bbox( - self, - xy: tuple[float, float] = (0, 0), - anchor: str | None = None, - align: 
str = "left", - ) -> tuple[float, float, float, float]: - """ - Returns bounding box (in pixels) of text. - - Use :py:meth:`get_length` to get the offset of following text with 1/64 pixel - precision. The bounding box includes extra margins for some fonts, e.g. italics - or accents. - - :param xy: The anchor coordinates of the text. - :param anchor: The text anchor alignment. Determines the relative location of - the anchor to the text. The default alignment is top left, - specifically ``la`` for horizontal text and ``lt`` for - vertical text. See :ref:`text-anchors` for details. - :param align: For multiline text, ``"left"``, ``"center"``, ``"right"`` or - ``"justify"`` determines the relative alignment of lines. Use the - ``anchor`` parameter to specify the alignment to ``xy``. - - :return: ``(left, top, right, bottom)`` bounding box - """ - bbox: tuple[float, float, float, float] | None = None - fontmode = self._get_fontmode() - for xy, anchor, line in self._split(xy, anchor, align): - bbox_line = self.font.getbbox( - line, - fontmode, - self.direction, - self.features, - self.language, - self.stroke_width, - anchor, - ) - bbox_line = ( - bbox_line[0] + xy[0], - bbox_line[1] + xy[1], - bbox_line[2] + xy[0], - bbox_line[3] + xy[1], - ) - if bbox is None: - bbox = bbox_line - else: - bbox = ( - min(bbox[0], bbox_line[0]), - min(bbox[1], bbox_line[1]), - max(bbox[2], bbox_line[2]), - max(bbox[3], bbox_line[3]), - ) - - if bbox is None: - return xy[0], xy[1], xy[0], xy[1] - return bbox diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageTk.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageTk.py deleted file mode 100644 index 3a4cb81e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageTk.py +++ /dev/null @@ -1,266 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# a Tk display interface -# -# History: -# 96-04-08 fl Created -# 96-09-06 fl Added getimage method -# 96-11-01 fl Rewritten, removed image attribute and crop method -# 97-05-09 fl Use PyImagingPaste method instead of image type -# 97-05-12 fl Minor tweaks to match the IFUNC95 interface -# 97-05-17 fl Support the "pilbitmap" booster patch -# 97-06-05 fl Added file= and data= argument to image constructors -# 98-03-09 fl Added width and height methods to Image classes -# 98-07-02 fl Use default mode for "P" images without palette attribute -# 98-07-02 fl Explicitly destroy Tkinter image objects -# 99-07-24 fl Support multiple Tk interpreters (from Greg Couch) -# 99-07-26 fl Automatically hook into Tkinter (if possible) -# 99-08-15 fl Hook uses _imagingtk instead of _imaging -# -# Copyright (c) 1997-1999 by Secret Labs AB -# Copyright (c) 1996-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import tkinter -from io import BytesIO -from typing import Any - -from . import Image, ImageFile - -TYPE_CHECKING = False -if TYPE_CHECKING: - from ._typing import CapsuleType - -# -------------------------------------------------------------------- -# Check for Tkinter interface hooks - - -def _get_image_from_kw(kw: dict[str, Any]) -> ImageFile.ImageFile | None: - source = None - if "file" in kw: - source = kw.pop("file") - elif "data" in kw: - source = BytesIO(kw.pop("data")) - if not source: - return None - return Image.open(source) - - -def _pyimagingtkcall( - command: str, photo: PhotoImage | tkinter.PhotoImage, ptr: CapsuleType -) -> None: - tk = photo.tk - try: - tk.call(command, photo, repr(ptr)) - except tkinter.TclError: - # activate Tkinter hook - # may raise an error if it cannot attach to Tkinter - from . 
import _imagingtk - - _imagingtk.tkinit(tk.interpaddr()) - tk.call(command, photo, repr(ptr)) - - -# -------------------------------------------------------------------- -# PhotoImage - - -class PhotoImage: - """ - A Tkinter-compatible photo image. This can be used - everywhere Tkinter expects an image object. If the image is an RGBA - image, pixels having alpha 0 are treated as transparent. - - The constructor takes either a PIL image, or a mode and a size. - Alternatively, you can use the ``file`` or ``data`` options to initialize - the photo image object. - - :param image: Either a PIL image, or a mode string. If a mode string is - used, a size must also be given. - :param size: If the first argument is a mode string, this defines the size - of the image. - :keyword file: A filename to load the image from (using - ``Image.open(file)``). - :keyword data: An 8-bit string containing image data (as loaded from an - image file). - """ - - def __init__( - self, - image: Image.Image | str | None = None, - size: tuple[int, int] | None = None, - **kw: Any, - ) -> None: - # Tk compatibility: file or data - if image is None: - image = _get_image_from_kw(kw) - - if image is None: - msg = "Image is required" - raise ValueError(msg) - elif isinstance(image, str): - mode = image - image = None - - if size is None: - msg = "If first argument is mode, size is required" - raise ValueError(msg) - else: - # got an image instead of a mode - mode = image.mode - if mode == "P": - # palette mapped data - image.apply_transparency() - image.load() - mode = image.palette.mode if image.palette else "RGB" - size = image.size - kw["width"], kw["height"] = size - - if mode not in ["1", "L", "RGB", "RGBA"]: - mode = Image.getmodebase(mode) - - self.__mode = mode - self.__size = size - self.__photo = tkinter.PhotoImage(**kw) - self.tk = self.__photo.tk - if image: - self.paste(image) - - def __del__(self) -> None: - try: - name = self.__photo.name - except AttributeError: - return - 
self.__photo.name = None - try: - self.__photo.tk.call("image", "delete", name) - except Exception: - pass # ignore internal errors - - def __str__(self) -> str: - """ - Get the Tkinter photo image identifier. This method is automatically - called by Tkinter whenever a PhotoImage object is passed to a Tkinter - method. - - :return: A Tkinter photo image identifier (a string). - """ - return str(self.__photo) - - def width(self) -> int: - """ - Get the width of the image. - - :return: The width, in pixels. - """ - return self.__size[0] - - def height(self) -> int: - """ - Get the height of the image. - - :return: The height, in pixels. - """ - return self.__size[1] - - def paste(self, im: Image.Image) -> None: - """ - Paste a PIL image into the photo image. Note that this can - be very slow if the photo image is displayed. - - :param im: A PIL image. The size must match the target region. If the - mode does not match, the image is converted to the mode of - the bitmap image. - """ - # convert to blittable - ptr = im.getim() - image = im.im - if not image.isblock() or im.mode != self.__mode: - block = Image.core.new_block(self.__mode, im.size) - image.convert2(block, image) # convert directly between buffers - ptr = block.ptr - - _pyimagingtkcall("PyImagingPhoto", self.__photo, ptr) - - -# -------------------------------------------------------------------- -# BitmapImage - - -class BitmapImage: - """ - A Tkinter-compatible bitmap image. This can be used everywhere Tkinter - expects an image object. - - The given image must have mode "1". Pixels having value 0 are treated as - transparent. Options, if any, are passed on to Tkinter. The most commonly - used option is ``foreground``, which is used to specify the color for the - non-transparent parts. See the Tkinter documentation for information on - how to specify colours. - - :param image: A PIL image. 
- """ - - def __init__(self, image: Image.Image | None = None, **kw: Any) -> None: - # Tk compatibility: file or data - if image is None: - image = _get_image_from_kw(kw) - - if image is None: - msg = "Image is required" - raise ValueError(msg) - self.__mode = image.mode - self.__size = image.size - - self.__photo = tkinter.BitmapImage(data=image.tobitmap(), **kw) - - def __del__(self) -> None: - try: - name = self.__photo.name - except AttributeError: - return - self.__photo.name = None - try: - self.__photo.tk.call("image", "delete", name) - except Exception: - pass # ignore internal errors - - def width(self) -> int: - """ - Get the width of the image. - - :return: The width, in pixels. - """ - return self.__size[0] - - def height(self) -> int: - """ - Get the height of the image. - - :return: The height, in pixels. - """ - return self.__size[1] - - def __str__(self) -> str: - """ - Get the Tkinter bitmap image identifier. This method is automatically - called by Tkinter whenever a BitmapImage object is passed to a Tkinter - method. - - :return: A Tkinter bitmap image identifier (a string). - """ - return str(self.__photo) - - -def getimage(photo: PhotoImage) -> Image.Image: - """Copies the contents of a PhotoImage to a PIL image memory.""" - im = Image.new("RGBA", (photo.width(), photo.height())) - - _pyimagingtkcall("PyImagingPhotoGet", photo, im.getim()) - - return im diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageTransform.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageTransform.py deleted file mode 100644 index fb144ff3..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageTransform.py +++ /dev/null @@ -1,136 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# transform wrappers -# -# History: -# 2002-04-08 fl Created -# -# Copyright (c) 2002 by Secret Labs AB -# Copyright (c) 2002 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -from collections.abc import Sequence -from typing import Any - -from . import Image - - -class Transform(Image.ImageTransformHandler): - """Base class for other transforms defined in :py:mod:`~PIL.ImageTransform`.""" - - method: Image.Transform - - def __init__(self, data: Sequence[Any]) -> None: - self.data = data - - def getdata(self) -> tuple[Image.Transform, Sequence[int]]: - return self.method, self.data - - def transform( - self, - size: tuple[int, int], - image: Image.Image, - **options: Any, - ) -> Image.Image: - """Perform the transform. Called from :py:meth:`.Image.transform`.""" - # can be overridden - method, data = self.getdata() - return image.transform(size, method, data, **options) - - -class AffineTransform(Transform): - """ - Define an affine image transform. - - This function takes a 6-tuple (a, b, c, d, e, f) which contain the first - two rows from the inverse of an affine transform matrix. For each pixel - (x, y) in the output image, the new value is taken from a position (a x + - b y + c, d x + e y + f) in the input image, rounded to nearest pixel. - - This function can be used to scale, translate, rotate, and shear the - original image. - - See :py:meth:`.Image.transform` - - :param matrix: A 6-tuple (a, b, c, d, e, f) containing the first two rows - from the inverse of an affine transform matrix. - """ - - method = Image.Transform.AFFINE - - -class PerspectiveTransform(Transform): - """ - Define a perspective image transform. - - This function takes an 8-tuple (a, b, c, d, e, f, g, h). For each pixel - (x, y) in the output image, the new value is taken from a position - ((a x + b y + c) / (g x + h y + 1), (d x + e y + f) / (g x + h y + 1)) in - the input image, rounded to nearest pixel. - - This function can be used to scale, translate, rotate, and shear the - original image. - - See :py:meth:`.Image.transform` - - :param matrix: An 8-tuple (a, b, c, d, e, f, g, h). 
- """ - - method = Image.Transform.PERSPECTIVE - - -class ExtentTransform(Transform): - """ - Define a transform to extract a subregion from an image. - - Maps a rectangle (defined by two corners) from the image to a rectangle of - the given size. The resulting image will contain data sampled from between - the corners, such that (x0, y0) in the input image will end up at (0,0) in - the output image, and (x1, y1) at size. - - This method can be used to crop, stretch, shrink, or mirror an arbitrary - rectangle in the current image. It is slightly slower than crop, but about - as fast as a corresponding resize operation. - - See :py:meth:`.Image.transform` - - :param bbox: A 4-tuple (x0, y0, x1, y1) which specifies two points in the - input image's coordinate system. See :ref:`coordinate-system`. - """ - - method = Image.Transform.EXTENT - - -class QuadTransform(Transform): - """ - Define a quad image transform. - - Maps a quadrilateral (a region defined by four corners) from the image to a - rectangle of the given size. - - See :py:meth:`.Image.transform` - - :param xy: An 8-tuple (x0, y0, x1, y1, x2, y2, x3, y3) which contain the - upper left, lower left, lower right, and upper right corner of the - source quadrilateral. - """ - - method = Image.Transform.QUAD - - -class MeshTransform(Transform): - """ - Define a mesh image transform. A mesh transform consists of one or more - individual quad transforms. - - See :py:meth:`.Image.transform` - - :param data: A list of (bbox, quad) tuples. - """ - - method = Image.Transform.MESH diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImageWin.py b/pptx-env/lib/python3.12/site-packages/PIL/ImageWin.py deleted file mode 100644 index 98c28f29..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImageWin.py +++ /dev/null @@ -1,247 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# a Windows DIB display interface -# -# History: -# 1996-05-20 fl Created -# 1996-09-20 fl Fixed subregion exposure -# 1997-09-21 fl Added draw primitive (for tzPrint) -# 2003-05-21 fl Added experimental Window/ImageWindow classes -# 2003-09-05 fl Added fromstring/tostring methods -# -# Copyright (c) Secret Labs AB 1997-2003. -# Copyright (c) Fredrik Lundh 1996-2003. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image - - -class HDC: - """ - Wraps an HDC integer. The resulting object can be passed to the - :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose` - methods. - """ - - def __init__(self, dc: int) -> None: - self.dc = dc - - def __int__(self) -> int: - return self.dc - - -class HWND: - """ - Wraps an HWND integer. The resulting object can be passed to the - :py:meth:`~PIL.ImageWin.Dib.draw` and :py:meth:`~PIL.ImageWin.Dib.expose` - methods, instead of a DC. - """ - - def __init__(self, wnd: int) -> None: - self.wnd = wnd - - def __int__(self) -> int: - return self.wnd - - -class Dib: - """ - A Windows bitmap with the given mode and size. The mode can be one of "1", - "L", "P", or "RGB". - - If the display requires a palette, this constructor creates a suitable - palette and associates it with the image. For an "L" image, 128 graylevels - are allocated. For an "RGB" image, a 6x6x6 colour cube is used, together - with 20 graylevels. - - To make sure that palettes work properly under Windows, you must call the - ``palette`` method upon certain events from Windows. - - :param image: Either a PIL image, or a mode string. If a mode string is - used, a size must also be given. The mode can be one of "1", - "L", "P", or "RGB". - :param size: If the first argument is a mode string, this - defines the size of the image. 
- """ - - def __init__( - self, image: Image.Image | str, size: tuple[int, int] | None = None - ) -> None: - if isinstance(image, str): - mode = image - image = "" - if size is None: - msg = "If first argument is mode, size is required" - raise ValueError(msg) - else: - mode = image.mode - size = image.size - if mode not in ["1", "L", "P", "RGB"]: - mode = Image.getmodebase(mode) - self.image = Image.core.display(mode, size) - self.mode = mode - self.size = size - if image: - assert not isinstance(image, str) - self.paste(image) - - def expose(self, handle: int | HDC | HWND) -> None: - """ - Copy the bitmap contents to a device context. - - :param handle: Device context (HDC), cast to a Python integer, or an - HDC or HWND instance. In PythonWin, you can use - ``CDC.GetHandleAttrib()`` to get a suitable handle. - """ - handle_int = int(handle) - if isinstance(handle, HWND): - dc = self.image.getdc(handle_int) - try: - self.image.expose(dc) - finally: - self.image.releasedc(handle_int, dc) - else: - self.image.expose(handle_int) - - def draw( - self, - handle: int | HDC | HWND, - dst: tuple[int, int, int, int], - src: tuple[int, int, int, int] | None = None, - ) -> None: - """ - Same as expose, but allows you to specify where to draw the image, and - what part of it to draw. - - The destination and source areas are given as 4-tuple rectangles. If - the source is omitted, the entire image is copied. If the source and - the destination have different sizes, the image is resized as - necessary. - """ - if src is None: - src = (0, 0) + self.size - handle_int = int(handle) - if isinstance(handle, HWND): - dc = self.image.getdc(handle_int) - try: - self.image.draw(dc, dst, src) - finally: - self.image.releasedc(handle_int, dc) - else: - self.image.draw(handle_int, dst, src) - - def query_palette(self, handle: int | HDC | HWND) -> int: - """ - Installs the palette associated with the image in the given device - context. 
- - This method should be called upon **QUERYNEWPALETTE** and - **PALETTECHANGED** events from Windows. If this method returns a - non-zero value, one or more display palette entries were changed, and - the image should be redrawn. - - :param handle: Device context (HDC), cast to a Python integer, or an - HDC or HWND instance. - :return: The number of entries that were changed (if one or more entries, - this indicates that the image should be redrawn). - """ - handle_int = int(handle) - if isinstance(handle, HWND): - handle = self.image.getdc(handle_int) - try: - result = self.image.query_palette(handle) - finally: - self.image.releasedc(handle, handle) - else: - result = self.image.query_palette(handle_int) - return result - - def paste( - self, im: Image.Image, box: tuple[int, int, int, int] | None = None - ) -> None: - """ - Paste a PIL image into the bitmap image. - - :param im: A PIL image. The size must match the target region. - If the mode does not match, the image is converted to the - mode of the bitmap image. - :param box: A 4-tuple defining the left, upper, right, and - lower pixel coordinate. See :ref:`coordinate-system`. If - None is given instead of a tuple, all of the image is - assumed. - """ - im.load() - if self.mode != im.mode: - im = im.convert(self.mode) - if box: - self.image.paste(im.im, box) - else: - self.image.paste(im.im) - - def frombytes(self, buffer: bytes) -> None: - """ - Load display memory contents from byte data. - - :param buffer: A buffer containing display data (usually - data returned from :py:func:`~PIL.ImageWin.Dib.tobytes`) - """ - self.image.frombytes(buffer) - - def tobytes(self) -> bytes: - """ - Copy display memory contents to bytes object. - - :return: A bytes object containing display data. 
- """ - return self.image.tobytes() - - -class Window: - """Create a Window with the given title size.""" - - def __init__( - self, title: str = "PIL", width: int | None = None, height: int | None = None - ) -> None: - self.hwnd = Image.core.createwindow( - title, self.__dispatcher, width or 0, height or 0 - ) - - def __dispatcher(self, action: str, *args: int) -> None: - getattr(self, f"ui_handle_{action}")(*args) - - def ui_handle_clear(self, dc: int, x0: int, y0: int, x1: int, y1: int) -> None: - pass - - def ui_handle_damage(self, x0: int, y0: int, x1: int, y1: int) -> None: - pass - - def ui_handle_destroy(self) -> None: - pass - - def ui_handle_repair(self, dc: int, x0: int, y0: int, x1: int, y1: int) -> None: - pass - - def ui_handle_resize(self, width: int, height: int) -> None: - pass - - def mainloop(self) -> None: - Image.core.eventloop() - - -class ImageWindow(Window): - """Create an image window which displays the given image.""" - - def __init__(self, image: Image.Image | Dib, title: str = "PIL") -> None: - if not isinstance(image, Dib): - image = Dib(image) - self.image = image - width, height = image.size - super().__init__(title, width=width, height=height) - - def ui_handle_repair(self, dc: int, x0: int, y0: int, x1: int, y1: int) -> None: - self.image.draw(dc, (x0, y0, x1, y1)) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/ImtImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/ImtImagePlugin.py deleted file mode 100644 index c4eccee3..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/ImtImagePlugin.py +++ /dev/null @@ -1,103 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# IM Tools support for PIL -# -# history: -# 1996-05-27 fl Created (read 8-bit images only) -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.2) -# -# Copyright (c) Secret Labs AB 1997-2001. -# Copyright (c) Fredrik Lundh 1996-2001. -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import re - -from . import Image, ImageFile - -# -# -------------------------------------------------------------------- - -field = re.compile(rb"([a-z]*) ([^ \r\n]*)") - - -## -# Image plugin for IM Tools images. - - -class ImtImageFile(ImageFile.ImageFile): - format = "IMT" - format_description = "IM Tools" - - def _open(self) -> None: - # Quick rejection: if there's not a LF among the first - # 100 bytes, this is (probably) not a text header. - - assert self.fp is not None - - buffer = self.fp.read(100) - if b"\n" not in buffer: - msg = "not an IM file" - raise SyntaxError(msg) - - xsize = ysize = 0 - - while True: - if buffer: - s = buffer[:1] - buffer = buffer[1:] - else: - s = self.fp.read(1) - if not s: - break - - if s == b"\x0c": - # image data begins - self.tile = [ - ImageFile._Tile( - "raw", - (0, 0) + self.size, - self.fp.tell() - len(buffer), - self.mode, - ) - ] - - break - - else: - # read key/value pair - if b"\n" not in buffer: - buffer += self.fp.read(100) - lines = buffer.split(b"\n") - s += lines.pop(0) - buffer = b"\n".join(lines) - if len(s) == 1 or len(s) > 100: - break - if s[0] == ord(b"*"): - continue # comment - - m = field.match(s) - if not m: - break - k, v = m.group(1, 2) - if k == b"width": - xsize = int(v) - self._size = xsize, ysize - elif k == b"height": - ysize = int(v) - self._size = xsize, ysize - elif k == b"pixel" and v == b"n8": - self._mode = "L" - - -# -# -------------------------------------------------------------------- - -Image.register_open(ImtImageFile.format, ImtImageFile) - -# -# no extension registered (".im" is simply too common) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/IptcImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/IptcImagePlugin.py deleted file mode 100644 index c28f4dcc..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/IptcImagePlugin.py +++ /dev/null @@ -1,229 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# IPTC/NAA file handling -# -# history: -# 1995-10-01 fl Created -# 1998-03-09 fl Cleaned up and added to PIL -# 2002-06-18 fl Added getiptcinfo helper -# -# Copyright (c) Secret Labs AB 1997-2002. -# Copyright (c) Fredrik Lundh 1995. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from io import BytesIO -from typing import cast - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import i32be as i32 - -COMPRESSION = {1: "raw", 5: "jpeg"} - - -# -# Helpers - - -def _i(c: bytes) -> int: - return i32((b"\0\0\0\0" + c)[-4:]) - - -## -# Image plugin for IPTC/NAA datastreams. To read IPTC/NAA fields -# from TIFF and JPEG files, use the getiptcinfo function. - - -class IptcImageFile(ImageFile.ImageFile): - format = "IPTC" - format_description = "IPTC/NAA" - - def getint(self, key: tuple[int, int]) -> int: - return _i(self.info[key]) - - def field(self) -> tuple[tuple[int, int] | None, int]: - # - # get a IPTC field header - s = self.fp.read(5) - if not s.strip(b"\x00"): - return None, 0 - - tag = s[1], s[2] - - # syntax - if s[0] != 0x1C or tag[0] not in [1, 2, 3, 4, 5, 6, 7, 8, 9, 240]: - msg = "invalid IPTC/NAA file" - raise SyntaxError(msg) - - # field size - size = s[3] - if size > 132: - msg = "illegal field length in IPTC/NAA file" - raise OSError(msg) - elif size == 128: - size = 0 - elif size > 128: - size = _i(self.fp.read(size - 128)) - else: - size = i16(s, 3) - - return tag, size - - def _open(self) -> None: - # load descriptive fields - while True: - offset = self.fp.tell() - tag, size = self.field() - if not tag or tag == (8, 10): - break - if size: - tagdata = self.fp.read(size) - else: - tagdata = None - if tag in self.info: - if isinstance(self.info[tag], list): - self.info[tag].append(tagdata) - else: - self.info[tag] = [self.info[tag], tagdata] - else: - self.info[tag] = tagdata - - # mode - layers = self.info[(3, 60)][0] - component = 
self.info[(3, 60)][1] - if layers == 1 and not component: - self._mode = "L" - band = None - else: - if layers == 3 and component: - self._mode = "RGB" - elif layers == 4 and component: - self._mode = "CMYK" - if (3, 65) in self.info: - band = self.info[(3, 65)][0] - 1 - else: - band = 0 - - # size - self._size = self.getint((3, 20)), self.getint((3, 30)) - - # compression - try: - compression = COMPRESSION[self.getint((3, 120))] - except KeyError as e: - msg = "Unknown IPTC image compression" - raise OSError(msg) from e - - # tile - if tag == (8, 10): - self.tile = [ - ImageFile._Tile("iptc", (0, 0) + self.size, offset, (compression, band)) - ] - - def load(self) -> Image.core.PixelAccess | None: - if self.tile: - args = self.tile[0].args - assert isinstance(args, tuple) - compression, band = args - - self.fp.seek(self.tile[0].offset) - - # Copy image data to temporary file - o = BytesIO() - if compression == "raw": - # To simplify access to the extracted file, - # prepend a PPM header - o.write(b"P5\n%d %d\n255\n" % self.size) - while True: - type, size = self.field() - if type != (8, 10): - break - while size > 0: - s = self.fp.read(min(size, 8192)) - if not s: - break - o.write(s) - size -= len(s) - - with Image.open(o) as _im: - if band is not None: - bands = [Image.new("L", _im.size)] * Image.getmodebands(self.mode) - bands[band] = _im - _im = Image.merge(self.mode, bands) - else: - _im.load() - self.im = _im.im - self.tile = [] - return ImageFile.ImageFile.load(self) - - -Image.register_open(IptcImageFile.format, IptcImageFile) - -Image.register_extension(IptcImageFile.format, ".iim") - - -def getiptcinfo( - im: ImageFile.ImageFile, -) -> dict[tuple[int, int], bytes | list[bytes]] | None: - """ - Get IPTC information from TIFF, JPEG, or IPTC file. - - :param im: An image containing IPTC data. - :returns: A dictionary containing IPTC information, or None if - no IPTC information block was found. - """ - from . 
import JpegImagePlugin, TiffImagePlugin - - data = None - - info: dict[tuple[int, int], bytes | list[bytes]] = {} - if isinstance(im, IptcImageFile): - # return info dictionary right away - for k, v in im.info.items(): - if isinstance(k, tuple): - info[k] = v - return info - - elif isinstance(im, JpegImagePlugin.JpegImageFile): - # extract the IPTC/NAA resource - photoshop = im.info.get("photoshop") - if photoshop: - data = photoshop.get(0x0404) - - elif isinstance(im, TiffImagePlugin.TiffImageFile): - # get raw data from the IPTC/NAA tag (PhotoShop tags the data - # as 4-byte integers, so we cannot use the get method...) - try: - data = im.tag_v2._tagdata[TiffImagePlugin.IPTC_NAA_CHUNK] - except KeyError: - pass - - if data is None: - return None # no properties - - # create an IptcImagePlugin object without initializing it - class FakeImage: - pass - - fake_im = FakeImage() - fake_im.__class__ = IptcImageFile # type: ignore[assignment] - iptc_im = cast(IptcImageFile, fake_im) - - # parse the IPTC information chunk - iptc_im.info = {} - iptc_im.fp = BytesIO(data) - - try: - iptc_im._open() - except (IndexError, KeyError): - pass # expected failure - - for k, v in iptc_im.info.items(): - if isinstance(k, tuple): - info[k] = v - return info diff --git a/pptx-env/lib/python3.12/site-packages/PIL/Jpeg2KImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/Jpeg2KImagePlugin.py deleted file mode 100644 index 4c85dd4e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/Jpeg2KImagePlugin.py +++ /dev/null @@ -1,446 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# JPEG2000 file handling -# -# History: -# 2014-03-12 ajh Created -# 2021-06-30 rogermb Extract dpi information from the 'resc' header box -# -# Copyright (c) 2014 Coriolis Systems Limited -# Copyright (c) 2014 Alastair Houghton -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import io -import os -import struct -from typing import cast - -from . import Image, ImageFile, ImagePalette, _binary - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from typing import IO - - -class BoxReader: - """ - A small helper class to read fields stored in JPEG2000 header boxes - and to easily step into and read sub-boxes. - """ - - def __init__(self, fp: IO[bytes], length: int = -1) -> None: - self.fp = fp - self.has_length = length >= 0 - self.length = length - self.remaining_in_box = -1 - - def _can_read(self, num_bytes: int) -> bool: - if self.has_length and self.fp.tell() + num_bytes > self.length: - # Outside box: ensure we don't read past the known file length - return False - if self.remaining_in_box >= 0: - # Inside box contents: ensure read does not go past box boundaries - return num_bytes <= self.remaining_in_box - else: - return True # No length known, just read - - def _read_bytes(self, num_bytes: int) -> bytes: - if not self._can_read(num_bytes): - msg = "Not enough data in header" - raise SyntaxError(msg) - - data = self.fp.read(num_bytes) - if len(data) < num_bytes: - msg = f"Expected to read {num_bytes} bytes but only got {len(data)}." 
- raise OSError(msg) - - if self.remaining_in_box > 0: - self.remaining_in_box -= num_bytes - return data - - def read_fields(self, field_format: str) -> tuple[int | bytes, ...]: - size = struct.calcsize(field_format) - data = self._read_bytes(size) - return struct.unpack(field_format, data) - - def read_boxes(self) -> BoxReader: - size = self.remaining_in_box - data = self._read_bytes(size) - return BoxReader(io.BytesIO(data), size) - - def has_next_box(self) -> bool: - if self.has_length: - return self.fp.tell() + self.remaining_in_box < self.length - else: - return True - - def next_box_type(self) -> bytes: - # Skip the rest of the box if it has not been read - if self.remaining_in_box > 0: - self.fp.seek(self.remaining_in_box, os.SEEK_CUR) - self.remaining_in_box = -1 - - # Read the length and type of the next box - lbox, tbox = cast(tuple[int, bytes], self.read_fields(">I4s")) - if lbox == 1: - lbox = cast(int, self.read_fields(">Q")[0]) - hlen = 16 - else: - hlen = 8 - - if lbox < hlen or not self._can_read(lbox - hlen): - msg = "Invalid header length" - raise SyntaxError(msg) - - self.remaining_in_box = lbox - hlen - return tbox - - -def _parse_codestream(fp: IO[bytes]) -> tuple[tuple[int, int], str]: - """Parse the JPEG 2000 codestream to extract the size and component - count from the SIZ marker segment, returning a PIL (size, mode) tuple.""" - - hdr = fp.read(2) - lsiz = _binary.i16be(hdr) - siz = hdr + fp.read(lsiz - 2) - lsiz, rsiz, xsiz, ysiz, xosiz, yosiz, _, _, _, _, csiz = struct.unpack_from( - ">HHIIIIIIIIH", siz - ) - - size = (xsiz - xosiz, ysiz - yosiz) - if csiz == 1: - ssiz = struct.unpack_from(">B", siz, 38) - if (ssiz[0] & 0x7F) + 1 > 8: - mode = "I;16" - else: - mode = "L" - elif csiz == 2: - mode = "LA" - elif csiz == 3: - mode = "RGB" - elif csiz == 4: - mode = "RGBA" - else: - msg = "unable to determine J2K image mode" - raise SyntaxError(msg) - - return size, mode - - -def _res_to_dpi(num: int, denom: int, exp: int) -> float | None: - 
"""Convert JPEG2000's (numerator, denominator, exponent-base-10) resolution, - calculated as (num / denom) * 10^exp and stored in dots per meter, - to floating-point dots per inch.""" - if denom == 0: - return None - return (254 * num * (10**exp)) / (10000 * denom) - - -def _parse_jp2_header( - fp: IO[bytes], -) -> tuple[ - tuple[int, int], - str, - str | None, - tuple[float, float] | None, - ImagePalette.ImagePalette | None, -]: - """Parse the JP2 header box to extract size, component count, - color space information, and optionally DPI information, - returning a (size, mode, mimetype, dpi) tuple.""" - - # Find the JP2 header box - reader = BoxReader(fp) - header = None - mimetype = None - while reader.has_next_box(): - tbox = reader.next_box_type() - - if tbox == b"jp2h": - header = reader.read_boxes() - break - elif tbox == b"ftyp": - if reader.read_fields(">4s")[0] == b"jpx ": - mimetype = "image/jpx" - assert header is not None - - size = None - mode = None - bpc = None - nc = None - dpi = None # 2-tuple of DPI info, or None - palette = None - - while header.has_next_box(): - tbox = header.next_box_type() - - if tbox == b"ihdr": - height, width, nc, bpc = header.read_fields(">IIHB") - assert isinstance(height, int) - assert isinstance(width, int) - assert isinstance(bpc, int) - size = (width, height) - if nc == 1 and (bpc & 0x7F) > 8: - mode = "I;16" - elif nc == 1: - mode = "L" - elif nc == 2: - mode = "LA" - elif nc == 3: - mode = "RGB" - elif nc == 4: - mode = "RGBA" - elif tbox == b"colr" and nc == 4: - meth, _, _, enumcs = header.read_fields(">BBBI") - if meth == 1 and enumcs == 12: - mode = "CMYK" - elif tbox == b"pclr" and mode in ("L", "LA"): - ne, npc = header.read_fields(">HB") - assert isinstance(ne, int) - assert isinstance(npc, int) - max_bitdepth = 0 - for bitdepth in header.read_fields(">" + ("B" * npc)): - assert isinstance(bitdepth, int) - if bitdepth > max_bitdepth: - max_bitdepth = bitdepth - if max_bitdepth <= 8: - palette = 
ImagePalette.ImagePalette("RGBA" if npc == 4 else "RGB") - for i in range(ne): - color: list[int] = [] - for value in header.read_fields(">" + ("B" * npc)): - assert isinstance(value, int) - color.append(value) - palette.getcolor(tuple(color)) - mode = "P" if mode == "L" else "PA" - elif tbox == b"res ": - res = header.read_boxes() - while res.has_next_box(): - tres = res.next_box_type() - if tres == b"resc": - vrcn, vrcd, hrcn, hrcd, vrce, hrce = res.read_fields(">HHHHBB") - assert isinstance(vrcn, int) - assert isinstance(vrcd, int) - assert isinstance(hrcn, int) - assert isinstance(hrcd, int) - assert isinstance(vrce, int) - assert isinstance(hrce, int) - hres = _res_to_dpi(hrcn, hrcd, hrce) - vres = _res_to_dpi(vrcn, vrcd, vrce) - if hres is not None and vres is not None: - dpi = (hres, vres) - break - - if size is None or mode is None: - msg = "Malformed JP2 header" - raise SyntaxError(msg) - - return size, mode, mimetype, dpi, palette - - -## -# Image plugin for JPEG2000 images. - - -class Jpeg2KImageFile(ImageFile.ImageFile): - format = "JPEG2000" - format_description = "JPEG 2000 (ISO 15444)" - - def _open(self) -> None: - sig = self.fp.read(4) - if sig == b"\xff\x4f\xff\x51": - self.codec = "j2k" - self._size, self._mode = _parse_codestream(self.fp) - self._parse_comment() - else: - sig = sig + self.fp.read(8) - - if sig == b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a": - self.codec = "jp2" - header = _parse_jp2_header(self.fp) - self._size, self._mode, self.custom_mimetype, dpi, self.palette = header - if dpi is not None: - self.info["dpi"] = dpi - if self.fp.read(12).endswith(b"jp2c\xff\x4f\xff\x51"): - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - self.fp.seek(length - 2, os.SEEK_CUR) - self._parse_comment() - else: - msg = "not a JPEG 2000 file" - raise SyntaxError(msg) - - self._reduce = 0 - self.layers = 0 - - fd = -1 - length = -1 - - try: - fd = self.fp.fileno() - length = os.fstat(fd).st_size - except Exception: - fd = -1 - try: - pos = 
self.fp.tell() - self.fp.seek(0, io.SEEK_END) - length = self.fp.tell() - self.fp.seek(pos) - except Exception: - length = -1 - - self.tile = [ - ImageFile._Tile( - "jpeg2k", - (0, 0) + self.size, - 0, - (self.codec, self._reduce, self.layers, fd, length), - ) - ] - - def _parse_comment(self) -> None: - while True: - marker = self.fp.read(2) - if not marker: - break - typ = marker[1] - if typ in (0x90, 0xD9): - # Start of tile or end of codestream - break - hdr = self.fp.read(2) - length = _binary.i16be(hdr) - if typ == 0x64: - # Comment - self.info["comment"] = self.fp.read(length - 2)[2:] - break - else: - self.fp.seek(length - 2, os.SEEK_CUR) - - @property # type: ignore[override] - def reduce( - self, - ) -> ( - Callable[[int | tuple[int, int], tuple[int, int, int, int] | None], Image.Image] - | int - ): - # https://github.com/python-pillow/Pillow/issues/4343 found that the - # new Image 'reduce' method was shadowed by this plugin's 'reduce' - # property. This attempts to allow for both scenarios - return self._reduce or super().reduce - - @reduce.setter - def reduce(self, value: int) -> None: - self._reduce = value - - def load(self) -> Image.core.PixelAccess | None: - if self.tile and self._reduce: - power = 1 << self._reduce - adjust = power >> 1 - self._size = ( - int((self.size[0] + adjust) / power), - int((self.size[1] + adjust) / power), - ) - - # Update the reduce and layers settings - t = self.tile[0] - assert isinstance(t[3], tuple) - t3 = (t[3][0], self._reduce, self.layers, t[3][3], t[3][4]) - self.tile = [ImageFile._Tile(t[0], (0, 0) + self.size, t[2], t3)] - - return ImageFile.ImageFile.load(self) - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith( - (b"\xff\x4f\xff\x51", b"\x00\x00\x00\x0cjP \x0d\x0a\x87\x0a") - ) - - -# ------------------------------------------------------------ -# Save support - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - # Get the keyword arguments - info = im.encoderinfo - 
- if isinstance(filename, str): - filename = filename.encode() - if filename.endswith(b".j2k") or info.get("no_jp2", False): - kind = "j2k" - else: - kind = "jp2" - - offset = info.get("offset", None) - tile_offset = info.get("tile_offset", None) - tile_size = info.get("tile_size", None) - quality_mode = info.get("quality_mode", "rates") - quality_layers = info.get("quality_layers", None) - if quality_layers is not None and not ( - isinstance(quality_layers, (list, tuple)) - and all( - isinstance(quality_layer, (int, float)) for quality_layer in quality_layers - ) - ): - msg = "quality_layers must be a sequence of numbers" - raise ValueError(msg) - - num_resolutions = info.get("num_resolutions", 0) - cblk_size = info.get("codeblock_size", None) - precinct_size = info.get("precinct_size", None) - irreversible = info.get("irreversible", False) - progression = info.get("progression", "LRCP") - cinema_mode = info.get("cinema_mode", "no") - mct = info.get("mct", 0) - signed = info.get("signed", False) - comment = info.get("comment") - if isinstance(comment, str): - comment = comment.encode() - plt = info.get("plt", False) - - fd = -1 - if hasattr(fp, "fileno"): - try: - fd = fp.fileno() - except Exception: - fd = -1 - - im.encoderconfig = ( - offset, - tile_offset, - tile_size, - quality_mode, - quality_layers, - num_resolutions, - cblk_size, - precinct_size, - irreversible, - progression, - cinema_mode, - mct, - signed, - fd, - comment, - plt, - ) - - ImageFile._save(im, fp, [ImageFile._Tile("jpeg2k", (0, 0) + im.size, 0, kind)]) - - -# ------------------------------------------------------------ -# Registry stuff - - -Image.register_open(Jpeg2KImageFile.format, Jpeg2KImageFile, _accept) -Image.register_save(Jpeg2KImageFile.format, _save) - -Image.register_extensions( - Jpeg2KImageFile.format, [".jp2", ".j2k", ".jpc", ".jpf", ".jpx", ".j2c"] -) - -Image.register_mime(Jpeg2KImageFile.format, "image/jp2") diff --git 
a/pptx-env/lib/python3.12/site-packages/PIL/JpegImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/JpegImagePlugin.py deleted file mode 100644 index 755ca648..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/JpegImagePlugin.py +++ /dev/null @@ -1,888 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# JPEG (JFIF) file handling -# -# See "Digital Compression and Coding of Continuous-Tone Still Images, -# Part 1, Requirements and Guidelines" (CCITT T.81 / ISO 10918-1) -# -# History: -# 1995-09-09 fl Created -# 1995-09-13 fl Added full parser -# 1996-03-25 fl Added hack to use the IJG command line utilities -# 1996-05-05 fl Workaround Photoshop 2.5 CMYK polarity bug -# 1996-05-28 fl Added draft support, JFIF version (0.1) -# 1996-12-30 fl Added encoder options, added progression property (0.2) -# 1997-08-27 fl Save mode 1 images as BW (0.3) -# 1998-07-12 fl Added YCbCr to draft and save methods (0.4) -# 1998-10-19 fl Don't hang on files using 16-bit DQT's (0.4.1) -# 2001-04-16 fl Extract DPI settings from JFIF files (0.4.2) -# 2002-07-01 fl Skip pad bytes before markers; identify Exif files (0.4.3) -# 2003-04-25 fl Added experimental EXIF decoder (0.5) -# 2003-06-06 fl Added experimental EXIF GPSinfo decoder -# 2003-09-13 fl Extract COM markers -# 2009-09-06 fl Added icc_profile support (from Florian Hoech) -# 2009-03-06 fl Changed CMYK handling; always use Adobe polarity (0.6) -# 2009-03-08 fl Added subsampling support (from Justin Huff). -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import array -import io -import math -import os -import struct -import subprocess -import sys -import tempfile -import warnings - -from . 
import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o16be as o16 -from .JpegPresets import presets - -TYPE_CHECKING = False -if TYPE_CHECKING: - from typing import IO, Any - - from .MpoImagePlugin import MpoImageFile - -# -# Parser - - -def Skip(self: JpegImageFile, marker: int) -> None: - n = i16(self.fp.read(2)) - 2 - ImageFile._safe_read(self.fp, n) - - -def APP(self: JpegImageFile, marker: int) -> None: - # - # Application marker. Store these in the APP dictionary. - # Also look for well-known application markers. - - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - app = f"APP{marker & 15}" - - self.app[app] = s # compatibility - self.applist.append((app, s)) - - if marker == 0xFFE0 and s.startswith(b"JFIF"): - # extract JFIF information - self.info["jfif"] = version = i16(s, 5) # version - self.info["jfif_version"] = divmod(version, 256) - # extract JFIF properties - try: - jfif_unit = s[7] - jfif_density = i16(s, 8), i16(s, 10) - except Exception: - pass - else: - if jfif_unit == 1: - self.info["dpi"] = jfif_density - elif jfif_unit == 2: # cm - # 1 dpcm = 2.54 dpi - self.info["dpi"] = tuple(d * 2.54 for d in jfif_density) - self.info["jfif_unit"] = jfif_unit - self.info["jfif_density"] = jfif_density - elif marker == 0xFFE1 and s.startswith(b"Exif\0\0"): - # extract EXIF information - if "exif" in self.info: - self.info["exif"] += s[6:] - else: - self.info["exif"] = s - self._exif_offset = self.fp.tell() - n + 6 - elif marker == 0xFFE1 and s.startswith(b"http://ns.adobe.com/xap/1.0/\x00"): - self.info["xmp"] = s.split(b"\x00", 1)[1] - elif marker == 0xFFE2 and s.startswith(b"FPXR\0"): - # extract FlashPix information (incomplete) - self.info["flashpix"] = s # FIXME: value will change - elif marker == 0xFFE2 and s.startswith(b"ICC_PROFILE\0"): - # Since an ICC profile can be larger than the maximum size of - # a JPEG marker (64K), we need 
provisions to split it into - # multiple markers. The format defined by the ICC specifies - # one or more APP2 markers containing the following data: - # Identifying string ASCII "ICC_PROFILE\0" (12 bytes) - # Marker sequence number 1, 2, etc (1 byte) - # Number of markers Total of APP2's used (1 byte) - # Profile data (remainder of APP2 data) - # Decoders should use the marker sequence numbers to - # reassemble the profile, rather than assuming that the APP2 - # markers appear in the correct sequence. - self.icclist.append(s) - elif marker == 0xFFED and s.startswith(b"Photoshop 3.0\x00"): - # parse the image resource block - offset = 14 - photoshop = self.info.setdefault("photoshop", {}) - while s[offset : offset + 4] == b"8BIM": - try: - offset += 4 - # resource code - code = i16(s, offset) - offset += 2 - # resource name (usually empty) - name_len = s[offset] - # name = s[offset+1:offset+1+name_len] - offset += 1 + name_len - offset += offset & 1 # align - # resource data block - size = i32(s, offset) - offset += 4 - data = s[offset : offset + size] - if code == 0x03ED: # ResolutionInfo - photoshop[code] = { - "XResolution": i32(data, 0) / 65536, - "DisplayedUnitsX": i16(data, 4), - "YResolution": i32(data, 8) / 65536, - "DisplayedUnitsY": i16(data, 12), - } - else: - photoshop[code] = data - offset += size - offset += offset & 1 # align - except struct.error: - break # insufficient data - - elif marker == 0xFFEE and s.startswith(b"Adobe"): - self.info["adobe"] = i16(s, 5) - # extract Adobe custom properties - try: - adobe_transform = s[11] - except IndexError: - pass - else: - self.info["adobe_transform"] = adobe_transform - elif marker == 0xFFE2 and s.startswith(b"MPF\0"): - # extract MPO information - self.info["mp"] = s[4:] - # offset is current location minus buffer size - # plus constant header size - self.info["mpoffset"] = self.fp.tell() - n + 4 - - -def COM(self: JpegImageFile, marker: int) -> None: - # - # Comment marker. 
Store these in the APP dictionary. - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - - self.info["comment"] = s - self.app["COM"] = s # compatibility - self.applist.append(("COM", s)) - - -def SOF(self: JpegImageFile, marker: int) -> None: - # - # Start of frame marker. Defines the size and mode of the - # image. JPEG is colour blind, so we use some simple - # heuristics to map the number of layers to an appropriate - # mode. Note that this could be made a bit brighter, by - # looking for JFIF and Adobe APP markers. - - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - self._size = i16(s, 3), i16(s, 1) - if self._im is not None and self.size != self.im.size: - self._im = None - - self.bits = s[0] - if self.bits != 8: - msg = f"cannot handle {self.bits}-bit layers" - raise SyntaxError(msg) - - self.layers = s[5] - if self.layers == 1: - self._mode = "L" - elif self.layers == 3: - self._mode = "RGB" - elif self.layers == 4: - self._mode = "CMYK" - else: - msg = f"cannot handle {self.layers}-layer images" - raise SyntaxError(msg) - - if marker in [0xFFC2, 0xFFC6, 0xFFCA, 0xFFCE]: - self.info["progressive"] = self.info["progression"] = 1 - - if self.icclist: - # fixup icc profile - self.icclist.sort() # sort by sequence number - if self.icclist[0][13] == len(self.icclist): - profile = [p[14:] for p in self.icclist] - icc_profile = b"".join(profile) - else: - icc_profile = None # wrong number of fragments - self.info["icc_profile"] = icc_profile - self.icclist = [] - - for i in range(6, len(s), 3): - t = s[i : i + 3] - # 4-tuples: id, vsamp, hsamp, qtable - self.layer.append((t[0], t[1] // 16, t[1] & 15, t[2])) - - -def DQT(self: JpegImageFile, marker: int) -> None: - # - # Define quantization table. Note that there might be more - # than one table in each marker. - - # FIXME: The quantization tables can be used to estimate the - # compression quality. 
- - n = i16(self.fp.read(2)) - 2 - s = ImageFile._safe_read(self.fp, n) - while len(s): - v = s[0] - precision = 1 if (v // 16 == 0) else 2 # in bytes - qt_length = 1 + precision * 64 - if len(s) < qt_length: - msg = "bad quantization table marker" - raise SyntaxError(msg) - data = array.array("B" if precision == 1 else "H", s[1:qt_length]) - if sys.byteorder == "little" and precision > 1: - data.byteswap() # the values are always big-endian - self.quantization[v & 15] = [data[i] for i in zigzag_index] - s = s[qt_length:] - - -# -# JPEG marker table - -MARKER = { - 0xFFC0: ("SOF0", "Baseline DCT", SOF), - 0xFFC1: ("SOF1", "Extended Sequential DCT", SOF), - 0xFFC2: ("SOF2", "Progressive DCT", SOF), - 0xFFC3: ("SOF3", "Spatial lossless", SOF), - 0xFFC4: ("DHT", "Define Huffman table", Skip), - 0xFFC5: ("SOF5", "Differential sequential DCT", SOF), - 0xFFC6: ("SOF6", "Differential progressive DCT", SOF), - 0xFFC7: ("SOF7", "Differential spatial", SOF), - 0xFFC8: ("JPG", "Extension", None), - 0xFFC9: ("SOF9", "Extended sequential DCT (AC)", SOF), - 0xFFCA: ("SOF10", "Progressive DCT (AC)", SOF), - 0xFFCB: ("SOF11", "Spatial lossless DCT (AC)", SOF), - 0xFFCC: ("DAC", "Define arithmetic coding conditioning", Skip), - 0xFFCD: ("SOF13", "Differential sequential DCT (AC)", SOF), - 0xFFCE: ("SOF14", "Differential progressive DCT (AC)", SOF), - 0xFFCF: ("SOF15", "Differential spatial (AC)", SOF), - 0xFFD0: ("RST0", "Restart 0", None), - 0xFFD1: ("RST1", "Restart 1", None), - 0xFFD2: ("RST2", "Restart 2", None), - 0xFFD3: ("RST3", "Restart 3", None), - 0xFFD4: ("RST4", "Restart 4", None), - 0xFFD5: ("RST5", "Restart 5", None), - 0xFFD6: ("RST6", "Restart 6", None), - 0xFFD7: ("RST7", "Restart 7", None), - 0xFFD8: ("SOI", "Start of image", None), - 0xFFD9: ("EOI", "End of image", None), - 0xFFDA: ("SOS", "Start of scan", Skip), - 0xFFDB: ("DQT", "Define quantization table", DQT), - 0xFFDC: ("DNL", "Define number of lines", Skip), - 0xFFDD: ("DRI", "Define restart interval", 
Skip), - 0xFFDE: ("DHP", "Define hierarchical progression", SOF), - 0xFFDF: ("EXP", "Expand reference component", Skip), - 0xFFE0: ("APP0", "Application segment 0", APP), - 0xFFE1: ("APP1", "Application segment 1", APP), - 0xFFE2: ("APP2", "Application segment 2", APP), - 0xFFE3: ("APP3", "Application segment 3", APP), - 0xFFE4: ("APP4", "Application segment 4", APP), - 0xFFE5: ("APP5", "Application segment 5", APP), - 0xFFE6: ("APP6", "Application segment 6", APP), - 0xFFE7: ("APP7", "Application segment 7", APP), - 0xFFE8: ("APP8", "Application segment 8", APP), - 0xFFE9: ("APP9", "Application segment 9", APP), - 0xFFEA: ("APP10", "Application segment 10", APP), - 0xFFEB: ("APP11", "Application segment 11", APP), - 0xFFEC: ("APP12", "Application segment 12", APP), - 0xFFED: ("APP13", "Application segment 13", APP), - 0xFFEE: ("APP14", "Application segment 14", APP), - 0xFFEF: ("APP15", "Application segment 15", APP), - 0xFFF0: ("JPG0", "Extension 0", None), - 0xFFF1: ("JPG1", "Extension 1", None), - 0xFFF2: ("JPG2", "Extension 2", None), - 0xFFF3: ("JPG3", "Extension 3", None), - 0xFFF4: ("JPG4", "Extension 4", None), - 0xFFF5: ("JPG5", "Extension 5", None), - 0xFFF6: ("JPG6", "Extension 6", None), - 0xFFF7: ("JPG7", "Extension 7", None), - 0xFFF8: ("JPG8", "Extension 8", None), - 0xFFF9: ("JPG9", "Extension 9", None), - 0xFFFA: ("JPG10", "Extension 10", None), - 0xFFFB: ("JPG11", "Extension 11", None), - 0xFFFC: ("JPG12", "Extension 12", None), - 0xFFFD: ("JPG13", "Extension 13", None), - 0xFFFE: ("COM", "Comment", COM), -} - - -def _accept(prefix: bytes) -> bool: - # Magic number was taken from https://en.wikipedia.org/wiki/JPEG - return prefix.startswith(b"\xff\xd8\xff") - - -## -# Image plugin for JPEG and JFIF images. 
- - -class JpegImageFile(ImageFile.ImageFile): - format = "JPEG" - format_description = "JPEG (ISO 10918)" - - def _open(self) -> None: - s = self.fp.read(3) - - if not _accept(s): - msg = "not a JPEG file" - raise SyntaxError(msg) - s = b"\xff" - - # Create attributes - self.bits = self.layers = 0 - self._exif_offset = 0 - - # JPEG specifics (internal) - self.layer: list[tuple[int, int, int, int]] = [] - self._huffman_dc: dict[Any, Any] = {} - self._huffman_ac: dict[Any, Any] = {} - self.quantization: dict[int, list[int]] = {} - self.app: dict[str, bytes] = {} # compatibility - self.applist: list[tuple[str, bytes]] = [] - self.icclist: list[bytes] = [] - - while True: - i = s[0] - if i == 0xFF: - s = s + self.fp.read(1) - i = i16(s) - else: - # Skip non-0xFF junk - s = self.fp.read(1) - continue - - if i in MARKER: - name, description, handler = MARKER[i] - if handler is not None: - handler(self, i) - if i == 0xFFDA: # start of scan - rawmode = self.mode - if self.mode == "CMYK": - rawmode = "CMYK;I" # assume adobe conventions - self.tile = [ - ImageFile._Tile("jpeg", (0, 0) + self.size, 0, (rawmode, "")) - ] - # self.__offset = self.fp.tell() - break - s = self.fp.read(1) - elif i in {0, 0xFFFF}: - # padded marker or junk; move on - s = b"\xff" - elif i == 0xFF00: # Skip extraneous data (escaped 0xFF) - s = self.fp.read(1) - else: - msg = "no marker found" - raise SyntaxError(msg) - - self._read_dpi_from_exif() - - def __getstate__(self) -> list[Any]: - return super().__getstate__() + [self.layers, self.layer] - - def __setstate__(self, state: list[Any]) -> None: - self.layers, self.layer = state[6:] - super().__setstate__(state) - - def load_read(self, read_bytes: int) -> bytes: - """ - internal: read more image data - For premature EOF and LOAD_TRUNCATED_IMAGES adds EOI marker - so libjpeg can finish decoding - """ - s = self.fp.read(read_bytes) - - if not s and ImageFile.LOAD_TRUNCATED_IMAGES and not hasattr(self, "_ended"): - # Premature EOF. 
- # Pretend file is finished adding EOI marker - self._ended = True - return b"\xff\xd9" - - return s - - def draft( - self, mode: str | None, size: tuple[int, int] | None - ) -> tuple[str, tuple[int, int, float, float]] | None: - if len(self.tile) != 1: - return None - - # Protect from second call - if self.decoderconfig: - return None - - d, e, o, a = self.tile[0] - scale = 1 - original_size = self.size - - assert isinstance(a, tuple) - if a[0] == "RGB" and mode in ["L", "YCbCr"]: - self._mode = mode - a = mode, "" - - if size: - scale = min(self.size[0] // size[0], self.size[1] // size[1]) - for s in [8, 4, 2, 1]: - if scale >= s: - break - assert e is not None - e = ( - e[0], - e[1], - (e[2] - e[0] + s - 1) // s + e[0], - (e[3] - e[1] + s - 1) // s + e[1], - ) - self._size = ((self.size[0] + s - 1) // s, (self.size[1] + s - 1) // s) - scale = s - - self.tile = [ImageFile._Tile(d, e, o, a)] - self.decoderconfig = (scale, 0) - - box = (0, 0, original_size[0] / scale, original_size[1] / scale) - return self.mode, box - - def load_djpeg(self) -> None: - # ALTERNATIVE: handle JPEGs via the IJG command line utilities - - f, path = tempfile.mkstemp() - os.close(f) - if os.path.exists(self.filename): - subprocess.check_call(["djpeg", "-outfile", path, self.filename]) - else: - try: - os.unlink(path) - except OSError: - pass - - msg = "Invalid Filename" - raise ValueError(msg) - - try: - with Image.open(path) as _im: - _im.load() - self.im = _im.im - finally: - try: - os.unlink(path) - except OSError: - pass - - self._mode = self.im.mode - self._size = self.im.size - - self.tile = [] - - def _getexif(self) -> dict[int, Any] | None: - return _getexif(self) - - def _read_dpi_from_exif(self) -> None: - # If DPI isn't in JPEG header, fetch from EXIF - if "dpi" in self.info or "exif" not in self.info: - return - try: - exif = self.getexif() - resolution_unit = exif[0x0128] - x_resolution = exif[0x011A] - try: - dpi = float(x_resolution[0]) / x_resolution[1] - except 
TypeError: - dpi = x_resolution - if math.isnan(dpi): - msg = "DPI is not a number" - raise ValueError(msg) - if resolution_unit == 3: # cm - # 1 dpcm = 2.54 dpi - dpi *= 2.54 - self.info["dpi"] = dpi, dpi - except ( - struct.error, # truncated EXIF - KeyError, # dpi not included - SyntaxError, # invalid/unreadable EXIF - TypeError, # dpi is an invalid float - ValueError, # dpi is an invalid float - ZeroDivisionError, # invalid dpi rational value - ): - self.info["dpi"] = 72, 72 - - def _getmp(self) -> dict[int, Any] | None: - return _getmp(self) - - -def _getexif(self: JpegImageFile) -> dict[int, Any] | None: - if "exif" not in self.info: - return None - return self.getexif()._get_merged_dict() - - -def _getmp(self: JpegImageFile) -> dict[int, Any] | None: - # Extract MP information. This method was inspired by the "highly - # experimental" _getexif version that's been in use for years now, - # itself based on the ImageFileDirectory class in the TIFF plugin. - - # The MP record essentially consists of a TIFF file embedded in a JPEG - # application marker. - try: - data = self.info["mp"] - except KeyError: - return None - file_contents = io.BytesIO(data) - head = file_contents.read(8) - endianness = ">" if head.startswith(b"\x4d\x4d\x00\x2a") else "<" - # process dictionary - from . 
import TiffImagePlugin - - try: - info = TiffImagePlugin.ImageFileDirectory_v2(head) - file_contents.seek(info.next) - info.load(file_contents) - mp = dict(info) - except Exception as e: - msg = "malformed MP Index (unreadable directory)" - raise SyntaxError(msg) from e - # it's an error not to have a number of images - try: - quant = mp[0xB001] - except KeyError as e: - msg = "malformed MP Index (no number of images)" - raise SyntaxError(msg) from e - # get MP entries - mpentries = [] - try: - rawmpentries = mp[0xB002] - for entrynum in range(quant): - unpackedentry = struct.unpack_from( - f"{endianness}LLLHH", rawmpentries, entrynum * 16 - ) - labels = ("Attribute", "Size", "DataOffset", "EntryNo1", "EntryNo2") - mpentry = dict(zip(labels, unpackedentry)) - mpentryattr = { - "DependentParentImageFlag": bool(mpentry["Attribute"] & (1 << 31)), - "DependentChildImageFlag": bool(mpentry["Attribute"] & (1 << 30)), - "RepresentativeImageFlag": bool(mpentry["Attribute"] & (1 << 29)), - "Reserved": (mpentry["Attribute"] & (3 << 27)) >> 27, - "ImageDataFormat": (mpentry["Attribute"] & (7 << 24)) >> 24, - "MPType": mpentry["Attribute"] & 0x00FFFFFF, - } - if mpentryattr["ImageDataFormat"] == 0: - mpentryattr["ImageDataFormat"] = "JPEG" - else: - msg = "unsupported picture format in MPO" - raise SyntaxError(msg) - mptypemap = { - 0x000000: "Undefined", - 0x010001: "Large Thumbnail (VGA Equivalent)", - 0x010002: "Large Thumbnail (Full HD Equivalent)", - 0x020001: "Multi-Frame Image (Panorama)", - 0x020002: "Multi-Frame Image: (Disparity)", - 0x020003: "Multi-Frame Image: (Multi-Angle)", - 0x030000: "Baseline MP Primary Image", - } - mpentryattr["MPType"] = mptypemap.get(mpentryattr["MPType"], "Unknown") - mpentry["Attribute"] = mpentryattr - mpentries.append(mpentry) - mp[0xB002] = mpentries - except KeyError as e: - msg = "malformed MP Index (bad MP Entry)" - raise SyntaxError(msg) from e - # Next we should try and parse the individual image unique ID list; - # we don't 
because I've never seen this actually used in a real MPO - # file and so can't test it. - return mp - - -# -------------------------------------------------------------------- -# stuff to save JPEG files - -RAWMODE = { - "1": "L", - "L": "L", - "RGB": "RGB", - "RGBX": "RGB", - "CMYK": "CMYK;I", # assume adobe conventions - "YCbCr": "YCbCr", -} - -# fmt: off -zigzag_index = ( - 0, 1, 5, 6, 14, 15, 27, 28, - 2, 4, 7, 13, 16, 26, 29, 42, - 3, 8, 12, 17, 25, 30, 41, 43, - 9, 11, 18, 24, 31, 40, 44, 53, - 10, 19, 23, 32, 39, 45, 52, 54, - 20, 22, 33, 38, 46, 51, 55, 60, - 21, 34, 37, 47, 50, 56, 59, 61, - 35, 36, 48, 49, 57, 58, 62, 63, -) - -samplings = { - (1, 1, 1, 1, 1, 1): 0, - (2, 1, 1, 1, 1, 1): 1, - (2, 2, 1, 1, 1, 1): 2, -} -# fmt: on - - -def get_sampling(im: Image.Image) -> int: - # There's no subsampling when images have only 1 layer - # (grayscale images) or when they are CMYK (4 layers), - # so set subsampling to the default value. - # - # NOTE: currently Pillow can't encode JPEG to YCCK format. - # If YCCK support is added in the future, subsampling code will have - # to be updated (here and in JpegEncode.c) to deal with 4 layers. 
- if not isinstance(im, JpegImageFile) or im.layers in (1, 4): - return -1 - sampling = im.layer[0][1:3] + im.layer[1][1:3] + im.layer[2][1:3] - return samplings.get(sampling, -1) - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.width == 0 or im.height == 0: - msg = "cannot write empty image as JPEG" - raise ValueError(msg) - - try: - rawmode = RAWMODE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as JPEG" - raise OSError(msg) from e - - info = im.encoderinfo - - dpi = [round(x) for x in info.get("dpi", (0, 0))] - - quality = info.get("quality", -1) - subsampling = info.get("subsampling", -1) - qtables = info.get("qtables") - - if quality == "keep": - quality = -1 - subsampling = "keep" - qtables = "keep" - elif quality in presets: - preset = presets[quality] - quality = -1 - subsampling = preset.get("subsampling", -1) - qtables = preset.get("quantization") - elif not isinstance(quality, int): - msg = "Invalid quality setting" - raise ValueError(msg) - else: - if subsampling in presets: - subsampling = presets[subsampling].get("subsampling", -1) - if isinstance(qtables, str) and qtables in presets: - qtables = presets[qtables].get("quantization") - - if subsampling == "4:4:4": - subsampling = 0 - elif subsampling == "4:2:2": - subsampling = 1 - elif subsampling == "4:2:0": - subsampling = 2 - elif subsampling == "4:1:1": - # For compatibility. Before Pillow 4.3, 4:1:1 actually meant 4:2:0. - # Set 4:2:0 if someone is still using that value. - subsampling = 2 - elif subsampling == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - subsampling = get_sampling(im) - - def validate_qtables( - qtables: ( - str | tuple[list[int], ...] 
| list[list[int]] | dict[int, list[int]] | None - ), - ) -> list[list[int]] | None: - if qtables is None: - return qtables - if isinstance(qtables, str): - try: - lines = [ - int(num) - for line in qtables.splitlines() - for num in line.split("#", 1)[0].split() - ] - except ValueError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables = [lines[s : s + 64] for s in range(0, len(lines), 64)] - if isinstance(qtables, (tuple, list, dict)): - if isinstance(qtables, dict): - qtables = [ - qtables[key] for key in range(len(qtables)) if key in qtables - ] - elif isinstance(qtables, tuple): - qtables = list(qtables) - if not (0 < len(qtables) < 5): - msg = "None or too many quantization tables" - raise ValueError(msg) - for idx, table in enumerate(qtables): - try: - if len(table) != 64: - msg = "Invalid quantization table" - raise TypeError(msg) - table_array = array.array("H", table) - except TypeError as e: - msg = "Invalid quantization table" - raise ValueError(msg) from e - else: - qtables[idx] = list(table_array) - return qtables - - if qtables == "keep": - if im.format != "JPEG": - msg = "Cannot use 'keep' when original image is not a JPEG" - raise ValueError(msg) - qtables = getattr(im, "quantization", None) - qtables = validate_qtables(qtables) - - extra = info.get("extra", b"") - - MAX_BYTES_IN_MARKER = 65533 - if xmp := info.get("xmp"): - overhead_len = 29 # b"http://ns.adobe.com/xap/1.0/\x00" - max_data_bytes_in_marker = MAX_BYTES_IN_MARKER - overhead_len - if len(xmp) > max_data_bytes_in_marker: - msg = "XMP data is too long" - raise ValueError(msg) - size = o16(2 + overhead_len + len(xmp)) - extra += b"\xff\xe1" + size + b"http://ns.adobe.com/xap/1.0/\x00" + xmp - - if icc_profile := info.get("icc_profile"): - overhead_len = 14 # b"ICC_PROFILE\0" + o8(i) + o8(len(markers)) - max_data_bytes_in_marker = MAX_BYTES_IN_MARKER - overhead_len - markers = [] - while icc_profile: - 
markers.append(icc_profile[:max_data_bytes_in_marker]) - icc_profile = icc_profile[max_data_bytes_in_marker:] - i = 1 - for marker in markers: - size = o16(2 + overhead_len + len(marker)) - extra += ( - b"\xff\xe2" - + size - + b"ICC_PROFILE\0" - + o8(i) - + o8(len(markers)) - + marker - ) - i += 1 - - comment = info.get("comment", im.info.get("comment")) - - # "progressive" is the official name, but older documentation - # says "progression" - # FIXME: issue a warning if the wrong form is used (post-1.1.7) - progressive = info.get("progressive", False) or info.get("progression", False) - - optimize = info.get("optimize", False) - - exif = info.get("exif", b"") - if isinstance(exif, Image.Exif): - exif = exif.tobytes() - if len(exif) > MAX_BYTES_IN_MARKER: - msg = "EXIF data is too long" - raise ValueError(msg) - - # get keyword arguments - im.encoderconfig = ( - quality, - progressive, - info.get("smooth", 0), - optimize, - info.get("keep_rgb", False), - info.get("streamtype", 0), - dpi, - subsampling, - info.get("restart_marker_blocks", 0), - info.get("restart_marker_rows", 0), - qtables, - comment, - extra, - exif, - ) - - # if we optimize, libjpeg needs a buffer big enough to hold the whole image - # in a shot. Guessing on the size, at im.size bytes. (raw pixel size is - # channels*size, this is a value that's been used in a django patch. - # https://github.com/matthewwithanm/django-imagekit/issues/50 - if optimize or progressive: - # CMYK can be bigger - if im.mode == "CMYK": - bufsize = 4 * im.size[0] * im.size[1] - # keep sets quality to -1, but the actual value may be high. - elif quality >= 95 or quality == -1: - bufsize = 2 * im.size[0] * im.size[1] - else: - bufsize = im.size[0] * im.size[1] - if exif: - bufsize += len(exif) + 5 - if extra: - bufsize += len(extra) + 1 - else: - # The EXIF info needs to be written as one block, + APP1, + one spare byte. - # Ensure that our buffer is big enough. Same with the icc_profile block. 
- bufsize = max(len(exif) + 5, len(extra) + 1) - - ImageFile._save( - im, fp, [ImageFile._Tile("jpeg", (0, 0) + im.size, 0, rawmode)], bufsize - ) - - -## -# Factory for making JPEG and MPO instances -def jpeg_factory( - fp: IO[bytes], filename: str | bytes | None = None -) -> JpegImageFile | MpoImageFile: - im = JpegImageFile(fp, filename) - try: - mpheader = im._getmp() - if mpheader is not None and mpheader[45057] > 1: - for segment, content in im.applist: - if segment == "APP1" and b' hdrgm:Version="' in content: - # Ultra HDR images are not yet supported - return im - # It's actually an MPO - from .MpoImagePlugin import MpoImageFile - - # Don't reload everything, just convert it. - im = MpoImageFile.adopt(im, mpheader) - except (TypeError, IndexError): - # It is really a JPEG - pass - except SyntaxError: - warnings.warn( - "Image appears to be a malformed MPO file, it will be " - "interpreted as a base JPEG file" - ) - return im - - -# --------------------------------------------------------------------- -# Registry stuff - -Image.register_open(JpegImageFile.format, jpeg_factory, _accept) -Image.register_save(JpegImageFile.format, _save) - -Image.register_extensions(JpegImageFile.format, [".jfif", ".jpe", ".jpg", ".jpeg"]) - -Image.register_mime(JpegImageFile.format, "image/jpeg") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/JpegPresets.py b/pptx-env/lib/python3.12/site-packages/PIL/JpegPresets.py deleted file mode 100644 index d0e64a35..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/JpegPresets.py +++ /dev/null @@ -1,242 +0,0 @@ -""" -JPEG quality settings equivalent to the Photoshop settings. -Can be used when saving JPEG files. - -The following presets are available by default: -``web_low``, ``web_medium``, ``web_high``, ``web_very_high``, ``web_maximum``, -``low``, ``medium``, ``high``, ``maximum``. -More presets can be added to the :py:data:`presets` dict if needed. 
- -To apply the preset, specify:: - - quality="preset_name" - -To apply only the quantization table:: - - qtables="preset_name" - -To apply only the subsampling setting:: - - subsampling="preset_name" - -Example:: - - im.save("image_name.jpg", quality="web_high") - -Subsampling ----------- - -Subsampling is the practice of encoding images by implementing less resolution -for chroma information than for luma information. -(ref.: https://en.wikipedia.org/wiki/Chroma_subsampling) - -Possible subsampling values are 0, 1 and 2 that correspond to 4:4:4, 4:2:2 and -4:2:0. - -You can get the subsampling of a JPEG with the -:func:`.JpegImagePlugin.get_sampling` function. - -In JPEG compressed data a JPEG marker is used instead of an EXIF tag. -(ref.: https://exiv2.org/tags.html) - - -Quantization tables -------------------- - -They are values use by the DCT (Discrete cosine transform) to remove -*unnecessary* information from the image (the lossy part of the compression). -(ref.: https://en.wikipedia.org/wiki/Quantization_matrix#Quantization_matrices, -https://en.wikipedia.org/wiki/JPEG#Quantization) - -You can get the quantization tables of a JPEG with:: - - im.quantization - -This will return a dict with a number of lists. You can pass this dict -directly as the qtables argument when saving a JPEG. - -The quantization table format in presets is a list with sublists. These formats -are interchangeable.
- -Libjpeg ref.: -https://web.archive.org/web/20120328125543/http://www.jpegcameras.com/libjpeg/libjpeg-3.html - -""" - -from __future__ import annotations - -# fmt: off -presets = { - 'web_low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [20, 16, 25, 39, 50, 46, 62, 68, - 16, 18, 23, 38, 38, 53, 65, 68, - 25, 23, 31, 38, 53, 65, 68, 68, - 39, 38, 38, 53, 65, 68, 68, 68, - 50, 38, 53, 65, 68, 68, 68, 68, - 46, 53, 65, 68, 68, 68, 68, 68, - 62, 65, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68], - [21, 25, 32, 38, 54, 68, 68, 68, - 25, 28, 24, 38, 54, 68, 68, 68, - 32, 24, 32, 43, 66, 68, 68, 68, - 38, 38, 43, 53, 68, 68, 68, 68, - 54, 54, 66, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68, - 68, 68, 68, 68, 68, 68, 68, 68] - ]}, - 'web_medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [16, 11, 11, 16, 23, 27, 31, 30, - 11, 12, 12, 15, 20, 23, 23, 30, - 11, 12, 13, 16, 23, 26, 35, 47, - 16, 15, 16, 23, 26, 37, 47, 64, - 23, 20, 23, 26, 39, 51, 64, 64, - 27, 23, 26, 37, 51, 64, 64, 64, - 31, 23, 35, 47, 64, 64, 64, 64, - 30, 30, 47, 64, 64, 64, 64, 64], - [17, 15, 17, 21, 20, 26, 38, 48, - 15, 19, 18, 17, 20, 26, 35, 43, - 17, 18, 20, 22, 26, 30, 46, 53, - 21, 17, 22, 28, 30, 39, 53, 64, - 20, 20, 26, 30, 39, 48, 64, 64, - 26, 26, 30, 39, 48, 63, 64, 64, - 38, 35, 46, 53, 64, 64, 64, 64, - 48, 43, 53, 64, 64, 64, 64, 64] - ]}, - 'web_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 14, 19, - 6, 6, 6, 11, 12, 15, 19, 28, - 9, 8, 10, 12, 16, 20, 27, 31, - 11, 10, 12, 15, 20, 27, 31, 31, - 12, 12, 14, 19, 27, 31, 31, 31, - 16, 12, 19, 28, 31, 31, 31, 31], - [7, 7, 13, 24, 26, 31, 31, 31, - 7, 12, 16, 21, 31, 31, 31, 31, - 13, 16, 17, 31, 31, 31, 31, 31, - 24, 21, 31, 31, 31, 31, 31, 31, - 26, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 31, - 31, 31, 31, 31, 31, 31, 31, 
31] - ]}, - 'web_very_high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 11, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 11, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'web_maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 1, - 1, 1, 1, 1, 1, 1, 1, 2, - 1, 1, 1, 1, 1, 1, 2, 2, - 1, 1, 1, 1, 1, 2, 2, 3, - 1, 1, 1, 1, 2, 2, 3, 3, - 1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 2, 2, 3, 3, 3, 3], - [1, 1, 1, 2, 2, 3, 3, 3, - 1, 1, 1, 2, 3, 3, 3, 3, - 1, 1, 1, 3, 3, 3, 3, 3, - 2, 2, 3, 3, 3, 3, 3, 3, - 2, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3, - 3, 3, 3, 3, 3, 3, 3, 3] - ]}, - 'low': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [18, 14, 14, 21, 30, 35, 34, 17, - 14, 16, 16, 19, 26, 23, 12, 12, - 14, 16, 17, 21, 23, 12, 12, 12, - 21, 19, 21, 23, 12, 12, 12, 12, - 30, 26, 23, 12, 12, 12, 12, 12, - 35, 23, 12, 12, 12, 12, 12, 12, - 34, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12], - [20, 19, 22, 27, 20, 20, 17, 17, - 19, 25, 23, 14, 14, 12, 12, 12, - 22, 23, 14, 14, 12, 12, 12, 12, - 27, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'medium': {'subsampling': 2, # "4:2:0" - 'quantization': [ - [12, 8, 8, 12, 17, 21, 24, 17, - 8, 9, 9, 11, 15, 19, 12, 12, - 8, 9, 10, 12, 19, 12, 12, 12, - 12, 11, 12, 21, 12, 12, 12, 12, - 17, 15, 19, 12, 12, 12, 12, 12, - 21, 19, 12, 12, 12, 12, 12, 12, - 24, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 
12], - [13, 11, 13, 16, 20, 20, 17, 17, - 11, 14, 14, 14, 14, 12, 12, 12, - 13, 14, 14, 14, 12, 12, 12, 12, - 16, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'high': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [6, 4, 4, 6, 9, 11, 12, 16, - 4, 5, 5, 6, 8, 10, 12, 12, - 4, 5, 5, 6, 10, 12, 12, 12, - 6, 6, 6, 11, 12, 12, 12, 12, - 9, 8, 10, 12, 12, 12, 12, 12, - 11, 10, 12, 12, 12, 12, 12, 12, - 12, 12, 12, 12, 12, 12, 12, 12, - 16, 12, 12, 12, 12, 12, 12, 12], - [7, 7, 13, 24, 20, 20, 17, 17, - 7, 12, 16, 14, 14, 12, 12, 12, - 13, 16, 14, 14, 12, 12, 12, 12, - 24, 14, 14, 12, 12, 12, 12, 12, - 20, 14, 12, 12, 12, 12, 12, 12, - 20, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12, - 17, 12, 12, 12, 12, 12, 12, 12] - ]}, - 'maximum': {'subsampling': 0, # "4:4:4" - 'quantization': [ - [2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 3, 4, 5, 6, - 2, 2, 2, 2, 4, 5, 7, 9, - 2, 2, 2, 4, 5, 7, 9, 12, - 3, 3, 4, 5, 8, 10, 12, 12, - 4, 4, 5, 7, 10, 12, 12, 12, - 5, 5, 7, 9, 12, 12, 12, 12, - 6, 6, 9, 12, 12, 12, 12, 12], - [3, 3, 5, 9, 13, 15, 15, 15, - 3, 4, 6, 10, 14, 12, 12, 12, - 5, 6, 9, 14, 12, 12, 12, 12, - 9, 10, 14, 12, 12, 12, 12, 12, - 13, 14, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12, - 15, 12, 12, 12, 12, 12, 12, 12] - ]}, -} -# fmt: on diff --git a/pptx-env/lib/python3.12/site-packages/PIL/McIdasImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/McIdasImagePlugin.py deleted file mode 100644 index 9a47933b..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/McIdasImagePlugin.py +++ /dev/null @@ -1,78 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Basic McIdas support for PIL -# -# History: -# 1997-05-05 fl Created (8-bit images only) -# 2009-03-08 fl Added 16/32-bit support. -# -# Thanks to Richard Jones and Craig Swank for specs and samples. 
-# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import struct - -from . import Image, ImageFile - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"\x00\x00\x00\x00\x00\x00\x00\x04") - - -## -# Image plugin for McIdas area images. - - -class McIdasImageFile(ImageFile.ImageFile): - format = "MCIDAS" - format_description = "McIdas area file" - - def _open(self) -> None: - # parse area file directory - assert self.fp is not None - - s = self.fp.read(256) - if not _accept(s) or len(s) != 256: - msg = "not an McIdas area file" - raise SyntaxError(msg) - - self.area_descriptor_raw = s - self.area_descriptor = w = [0, *struct.unpack("!64i", s)] - - # get mode - if w[11] == 1: - mode = rawmode = "L" - elif w[11] == 2: - mode = rawmode = "I;16B" - elif w[11] == 4: - # FIXME: add memory map support - mode = "I" - rawmode = "I;32B" - else: - msg = "unsupported McIdas format" - raise SyntaxError(msg) - - self._mode = mode - self._size = w[10], w[9] - - offset = w[34] + w[15] - stride = w[15] + w[10] * w[11] * w[14] - - self.tile = [ - ImageFile._Tile("raw", (0, 0) + self.size, offset, (rawmode, stride, 1)) - ] - - -# -------------------------------------------------------------------- -# registry - -Image.register_open(McIdasImageFile.format, McIdasImageFile, _accept) - -# no default extension diff --git a/pptx-env/lib/python3.12/site-packages/PIL/MicImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/MicImagePlugin.py deleted file mode 100644 index 9ce38c42..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/MicImagePlugin.py +++ /dev/null @@ -1,102 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Microsoft Image Composer support for PIL -# -# Notes: -# uses TiffImagePlugin.py to read the actual image streams -# -# History: -# 97-01-20 fl Created -# -# Copyright (c) Secret Labs AB 1997. 
-# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import olefile - -from . import Image, TiffImagePlugin - -# -# -------------------------------------------------------------------- - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(olefile.MAGIC) - - -## -# Image plugin for Microsoft's Image Composer file format. - - -class MicImageFile(TiffImagePlugin.TiffImageFile): - format = "MIC" - format_description = "Microsoft Image Composer" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - # read the OLE directory and see if this is a likely - # to be a Microsoft Image Composer file - - try: - self.ole = olefile.OleFileIO(self.fp) - except OSError as e: - msg = "not an MIC file; invalid OLE file" - raise SyntaxError(msg) from e - - # find ACI subfiles with Image members (maybe not the - # best way to identify MIC files, but what the... ;-) - - self.images = [ - path - for path in self.ole.listdir() - if path[1:] and path[0].endswith(".ACI") and path[1] == "Image" - ] - - # if we didn't find any images, this is probably not - # an MIC file. 
- if not self.images: - msg = "not an MIC file; no image entries" - raise SyntaxError(msg) - - self.frame = -1 - self._n_frames = len(self.images) - self.is_animated = self._n_frames > 1 - - self.__fp = self.fp - self.seek(0) - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - filename = self.images[frame] - self.fp = self.ole.openstream(filename) - - TiffImagePlugin.TiffImageFile._open(self) - - self.frame = frame - - def tell(self) -> int: - return self.frame - - def close(self) -> None: - self.__fp.close() - self.ole.close() - super().close() - - def __exit__(self, *args: object) -> None: - self.__fp.close() - self.ole.close() - super().__exit__() - - -# -# -------------------------------------------------------------------- - -Image.register_open(MicImageFile.format, MicImageFile, _accept) - -Image.register_extension(MicImageFile.format, ".mic") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/MpegImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/MpegImagePlugin.py deleted file mode 100644 index 47ebe9d6..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/MpegImagePlugin.py +++ /dev/null @@ -1,84 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# MPEG file handling -# -# History: -# 95-09-09 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . 
import Image, ImageFile -from ._binary import i8 -from ._typing import SupportsRead - -# -# Bitstream parser - - -class BitStream: - def __init__(self, fp: SupportsRead[bytes]) -> None: - self.fp = fp - self.bits = 0 - self.bitbuffer = 0 - - def next(self) -> int: - return i8(self.fp.read(1)) - - def peek(self, bits: int) -> int: - while self.bits < bits: - self.bitbuffer = (self.bitbuffer << 8) + self.next() - self.bits += 8 - return self.bitbuffer >> (self.bits - bits) & (1 << bits) - 1 - - def skip(self, bits: int) -> None: - while self.bits < bits: - self.bitbuffer = (self.bitbuffer << 8) + i8(self.fp.read(1)) - self.bits += 8 - self.bits = self.bits - bits - - def read(self, bits: int) -> int: - v = self.peek(bits) - self.bits = self.bits - bits - return v - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"\x00\x00\x01\xb3") - - -## -# Image plugin for MPEG streams. This plugin can identify a stream, -# but it cannot read it. - - -class MpegImageFile(ImageFile.ImageFile): - format = "MPEG" - format_description = "MPEG" - - def _open(self) -> None: - assert self.fp is not None - - s = BitStream(self.fp) - if s.read(32) != 0x1B3: - msg = "not an MPEG file" - raise SyntaxError(msg) - - self._mode = "RGB" - self._size = s.read(12), s.read(12) - - -# -------------------------------------------------------------------- -# Registry stuff - -Image.register_open(MpegImageFile.format, MpegImageFile, _accept) - -Image.register_extensions(MpegImageFile.format, [".mpg", ".mpeg"]) - -Image.register_mime(MpegImageFile.format, "video/mpeg") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/MpoImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/MpoImagePlugin.py deleted file mode 100644 index b1ae0787..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/MpoImagePlugin.py +++ /dev/null @@ -1,202 +0,0 @@ -# -# The Python Imaging Library. 
-# $Id$ -# -# MPO file handling -# -# See "Multi-Picture Format" (CIPA DC-007-Translation 2009, Standard of the -# Camera & Imaging Products Association) -# -# The multi-picture object combines multiple JPEG images (with a modified EXIF -# data format) into a single file. While it can theoretically be used much like -# a GIF animation, it is commonly used to represent 3D photographs and is (as -# of this writing) the most commonly used format by 3D cameras. -# -# History: -# 2014-03-13 Feneric Created -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -import struct -from typing import IO, Any, cast - -from . import ( - Image, - ImageFile, - ImageSequence, - JpegImagePlugin, - TiffImagePlugin, -) -from ._binary import o32le -from ._util import DeferredError - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - JpegImagePlugin._save(im, fp, filename) - - -def _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - append_images = im.encoderinfo.get("append_images", []) - if not append_images and not getattr(im, "is_animated", False): - _save(im, fp, filename) - return - - mpf_offset = 28 - offsets: list[int] = [] - im_sequences = [im, *append_images] - total = sum(getattr(seq, "n_frames", 1) for seq in im_sequences) - for im_sequence in im_sequences: - for im_frame in ImageSequence.Iterator(im_sequence): - if not offsets: - # APP2 marker - ifd_length = 66 + 16 * total - im_frame.encoderinfo["extra"] = ( - b"\xff\xe2" - + struct.pack(">H", 6 + ifd_length) - + b"MPF\0" - + b" " * ifd_length - ) - exif = im_frame.encoderinfo.get("exif") - if isinstance(exif, Image.Exif): - exif = exif.tobytes() - im_frame.encoderinfo["exif"] = exif - if exif: - mpf_offset += 4 + len(exif) - - JpegImagePlugin._save(im_frame, fp, filename) - offsets.append(fp.tell()) - else: - encoderinfo = im_frame._attach_default_encoderinfo(im) - im_frame.save(fp, "JPEG") - 
im_frame.encoderinfo = encoderinfo - offsets.append(fp.tell() - offsets[-1]) - - ifd = TiffImagePlugin.ImageFileDirectory_v2() - ifd[0xB000] = b"0100" - ifd[0xB001] = len(offsets) - - mpentries = b"" - data_offset = 0 - for i, size in enumerate(offsets): - if i == 0: - mptype = 0x030000 # Baseline MP Primary Image - else: - mptype = 0x000000 # Undefined - mpentries += struct.pack("<LLLHH", mptype, size, data_offset, 0, 0) - if i == 0: - data_offset -= mpf_offset - data_offset += size - - ifd[0xB002] = mpentries - - fp.seek(mpf_offset) - fp.write(b"II\x2a\x00" + o32le(8) + ifd.tobytes(8)) - fp.seek(0, os.SEEK_END) - - -## -# Image plugin for MPO images. - - -class MpoImageFile(JpegImagePlugin.JpegImageFile): - format = "MPO" - format_description = "MPO (CIPA DC-007)" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - self.fp.seek(0)  # prep the fp in order to pass the JPEG test - JpegImagePlugin.JpegImageFile._open(self) - self._after_jpeg_open() - - def _after_jpeg_open(self, mpheader: dict[int, Any] | None = None) -> None: - self.mpinfo = mpheader if mpheader is not None else self._getmp() - if self.mpinfo is None: - msg = "Image appears to be a malformed MPO file" - raise ValueError(msg) - self.n_frames = self.mpinfo[0xB001] - self.__mpoffsets = [ - mpent["DataOffset"] + self.info["mpoffset"] for mpent in self.mpinfo[0xB002] - ] - self.__mpoffsets[0] = 0 - # Note that the following assertion will only be invalid if something - gets broken within JpegImagePlugin.
- assert self.n_frames == len(self.__mpoffsets) - del self.info["mpoffset"] # no longer needed - self.is_animated = self.n_frames > 1 - self._fp = self.fp # FIXME: hack - self._fp.seek(self.__mpoffsets[0]) # get ready to read first frame - self.__frame = 0 - self.offset = 0 - # for now we can only handle reading and individual frame extraction - self.readonly = 1 - - def load_seek(self, pos: int) -> None: - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self._fp.seek(pos) - - def seek(self, frame: int) -> None: - if not self._seek_check(frame): - return - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self.fp = self._fp - self.offset = self.__mpoffsets[frame] - - original_exif = self.info.get("exif") - if "exif" in self.info: - del self.info["exif"] - - self.fp.seek(self.offset + 2) # skip SOI marker - if not self.fp.read(2): - msg = "No data found for frame" - raise ValueError(msg) - self.fp.seek(self.offset) - JpegImagePlugin.JpegImageFile._open(self) - if self.info.get("exif") != original_exif: - self._reload_exif() - - self.tile = [ - ImageFile._Tile("jpeg", (0, 0) + self.size, self.offset, self.tile[0][-1]) - ] - self.__frame = frame - - def tell(self) -> int: - return self.__frame - - @staticmethod - def adopt( - jpeg_instance: JpegImagePlugin.JpegImageFile, - mpheader: dict[int, Any] | None = None, - ) -> MpoImageFile: - """ - Transform the instance of JpegImageFile into - an instance of MpoImageFile. - After the call, the JpegImageFile is extended - to be an MpoImageFile. - - This is essentially useful when opening a JPEG - file that reveals itself as an MPO, to avoid - double call to _open. 
- """ - jpeg_instance.__class__ = MpoImageFile - mpo_instance = cast(MpoImageFile, jpeg_instance) - mpo_instance._after_jpeg_open(mpheader) - return mpo_instance - - -# --------------------------------------------------------------------- -# Registry stuff - -# Note that since MPO shares a factory with JPEG, we do not need to do a -# separate registration for it here. -# Image.register_open(MpoImageFile.format, -# JpegImagePlugin.jpeg_factory, _accept) -Image.register_save(MpoImageFile.format, _save) -Image.register_save_all(MpoImageFile.format, _save_all) - -Image.register_extension(MpoImageFile.format, ".mpo") - -Image.register_mime(MpoImageFile.format, "image/mpo") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/MspImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/MspImagePlugin.py deleted file mode 100644 index 277087a8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/MspImagePlugin.py +++ /dev/null @@ -1,200 +0,0 @@ -# -# The Python Imaging Library. -# -# MSP file handling -# -# This is the format used by the Paint program in Windows 1 and 2. -# -# History: -# 95-09-05 fl Created -# 97-01-03 fl Read/write MSP images -# 17-02-21 es Fixed RLE interpretation -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995-97. -# Copyright (c) Eric Soroos 2017. -# -# See the README file for information on usage and redistribution. -# -# More info on this format: https://archive.org/details/gg243631 -# Page 313: -# Figure 205. Windows Paint Version 1: "DanM" Format -# Figure 206. Windows Paint Version 2: "LinS" Format. Used in Windows V2.03 -# -# See also: https://www.fileformat.info/format/mspaint/egff.htm -from __future__ import annotations - -import io -import struct -from typing import IO - -from . 
import Image, ImageFile -from ._binary import i16le as i16 -from ._binary import o16le as o16 - -# -# read MSP files - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith((b"DanM", b"LinS")) - - -## -# Image plugin for Windows MSP images. This plugin supports both -# uncompressed (Windows 1.0). - - -class MspImageFile(ImageFile.ImageFile): - format = "MSP" - format_description = "Windows Paint" - - def _open(self) -> None: - # Header - assert self.fp is not None - - s = self.fp.read(32) - if not _accept(s): - msg = "not an MSP file" - raise SyntaxError(msg) - - # Header checksum - checksum = 0 - for i in range(0, 32, 2): - checksum = checksum ^ i16(s, i) - if checksum != 0: - msg = "bad MSP checksum" - raise SyntaxError(msg) - - self._mode = "1" - self._size = i16(s, 4), i16(s, 6) - - if s.startswith(b"DanM"): - self.tile = [ImageFile._Tile("raw", (0, 0) + self.size, 32, "1")] - else: - self.tile = [ImageFile._Tile("MSP", (0, 0) + self.size, 32)] - - -class MspDecoder(ImageFile.PyDecoder): - # The algo for the MSP decoder is from - # https://www.fileformat.info/format/mspaint/egff.htm - # cc-by-attribution -- That page references is taken from the - # Encyclopedia of Graphics File Formats and is licensed by - # O'Reilly under the Creative Common/Attribution license - # - # For RLE encoded files, the 32byte header is followed by a scan - # line map, encoded as one 16bit word of encoded byte length per - # line. - # - # NOTE: the encoded length of the line can be 0. This was not - # handled in the previous version of this encoder, and there's no - # mention of how to handle it in the documentation. From the few - # examples I've seen, I've assumed that it is a fill of the - # background color, in this case, white. 
- # - # - # Pseudocode of the decoder: - # Read a BYTE value as the RunType - # If the RunType value is zero - # Read next byte as the RunCount - # Read the next byte as the RunValue - # Write the RunValue byte RunCount times - # If the RunType value is non-zero - # Use this value as the RunCount - # Read and write the next RunCount bytes literally - # - # e.g.: - # 0x00 03 ff 05 00 01 02 03 04 - # would yield the bytes: - # 0xff ff ff 00 01 02 03 04 - # - # which are then interpreted as a bit packed mode '1' image - - _pulls_fd = True - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - - img = io.BytesIO() - blank_line = bytearray((0xFF,) * ((self.state.xsize + 7) // 8)) - try: - self.fd.seek(32) - rowmap = struct.unpack_from( - f"<{self.state.ysize}H", self.fd.read(self.state.ysize * 2) - ) - except struct.error as e: - msg = "Truncated MSP file in row map" - raise OSError(msg) from e - - for x, rowlen in enumerate(rowmap): - try: - if rowlen == 0: - img.write(blank_line) - continue - row = self.fd.read(rowlen) - if len(row) != rowlen: - msg = f"Truncated MSP file, expected {rowlen} bytes on row {x}" - raise OSError(msg) - idx = 0 - while idx < rowlen: - runtype = row[idx] - idx += 1 - if runtype == 0: - (runcount, runval) = struct.unpack_from("Bc", row, idx) - img.write(runval * runcount) - idx += 2 - else: - runcount = runtype - img.write(row[idx : idx + runcount]) - idx += runcount - - except struct.error as e: - msg = f"Corrupted MSP file in row {x}" - raise OSError(msg) from e - - self.set_as_raw(img.getvalue(), "1") - - return -1, 0 - - -Image.register_decoder("MSP", MspDecoder) - - -# -# write MSP files (uncompressed only) - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode != "1": - msg = f"cannot write mode {im.mode} as MSP" - raise OSError(msg) - - # create MSP header - header = [0] * 16 - - header[0], header[1] = i16(b"Da"), i16(b"nM") # version 1 
- header[2], header[3] = im.size - header[4], header[5] = 1, 1 - header[6], header[7] = 1, 1 - header[8], header[9] = im.size - - checksum = 0 - for h in header: - checksum = checksum ^ h - header[12] = checksum # FIXME: is this the right field? - - # header - for h in header: - fp.write(o16(h)) - - # image body - ImageFile._save(im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 32, "1")]) - - -# -# registry - -Image.register_open(MspImageFile.format, MspImageFile, _accept) -Image.register_save(MspImageFile.format, _save) - -Image.register_extension(MspImageFile.format, ".msp") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PSDraw.py b/pptx-env/lib/python3.12/site-packages/PIL/PSDraw.py deleted file mode 100644 index 7fd4c5c9..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PSDraw.py +++ /dev/null @@ -1,237 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# Simple PostScript graphics interface -# -# History: -# 1996-04-20 fl Created -# 1999-01-10 fl Added gsave/grestore to image method -# 2005-05-04 fl Fixed floating point issue in image (from Eric Etheridge) -# -# Copyright (c) 1997-2005 by Secret Labs AB. All rights reserved. -# Copyright (c) 1996 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import sys -from typing import IO - -from . import EpsImagePlugin - -TYPE_CHECKING = False - - -## -# Simple PostScript graphics interface. - - -class PSDraw: - """ - Sets up printing to the given file. If ``fp`` is omitted, - ``sys.stdout.buffer`` is assumed. - """ - - def __init__(self, fp: IO[bytes] | None = None) -> None: - if not fp: - fp = sys.stdout.buffer - self.fp = fp - - def begin_document(self, id: str | None = None) -> None: - """Set up printing of a document. 
(Write PostScript DSC header.)""" - # FIXME: incomplete - self.fp.write( - b"%!PS-Adobe-3.0\n" - b"save\n" - b"/showpage { } def\n" - b"%%EndComments\n" - b"%%BeginDocument\n" - ) - # self.fp.write(ERROR_PS) # debugging! - self.fp.write(EDROFF_PS) - self.fp.write(VDI_PS) - self.fp.write(b"%%EndProlog\n") - self.isofont: dict[bytes, int] = {} - - def end_document(self) -> None: - """Ends printing. (Write PostScript DSC footer.)""" - self.fp.write(b"%%EndDocument\nrestore showpage\n%%End\n") - if hasattr(self.fp, "flush"): - self.fp.flush() - - def setfont(self, font: str, size: int) -> None: - """ - Selects which font to use. - - :param font: A PostScript font name - :param size: Size in points. - """ - font_bytes = bytes(font, "UTF-8") - if font_bytes not in self.isofont: - # reencode font - self.fp.write( - b"/PSDraw-%s ISOLatin1Encoding /%s E\n" % (font_bytes, font_bytes) - ) - self.isofont[font_bytes] = 1 - # rough - self.fp.write(b"/F0 %d /PSDraw-%s F\n" % (size, font_bytes)) - - def line(self, xy0: tuple[int, int], xy1: tuple[int, int]) -> None: - """ - Draws a line between the two points. Coordinates are given in - PostScript point coordinates (72 points per inch, (0, 0) is the lower - left corner of the page). - """ - self.fp.write(b"%d %d %d %d Vl\n" % (*xy0, *xy1)) - - def rectangle(self, box: tuple[int, int, int, int]) -> None: - """ - Draws a rectangle. - - :param box: A tuple of four integers, specifying left, bottom, width and - height. - """ - self.fp.write(b"%d %d M 0 %d %d Vr\n" % box) - - def text(self, xy: tuple[int, int], text: str) -> None: - """ - Draws text at the given position. You must use - :py:meth:`~PIL.PSDraw.PSDraw.setfont` before calling this method. - """ - text_bytes = bytes(text, "UTF-8") - text_bytes = b"\\(".join(text_bytes.split(b"(")) - text_bytes = b"\\)".join(text_bytes.split(b")")) - self.fp.write(b"%d %d M (%s) S\n" % (xy + (text_bytes,))) - - if TYPE_CHECKING: - from . 
import Image - - def image( - self, box: tuple[int, int, int, int], im: Image.Image, dpi: int | None = None - ) -> None: - """Draw a PIL image, centered in the given box.""" - # default resolution depends on mode - if not dpi: - if im.mode == "1": - dpi = 200 # fax - else: - dpi = 100 # grayscale - # image size (on paper) - x = im.size[0] * 72 / dpi - y = im.size[1] * 72 / dpi - # max allowed size - xmax = float(box[2] - box[0]) - ymax = float(box[3] - box[1]) - if x > xmax: - y = y * xmax / x - x = xmax - if y > ymax: - x = x * ymax / y - y = ymax - dx = (xmax - x) / 2 + box[0] - dy = (ymax - y) / 2 + box[1] - self.fp.write(b"gsave\n%f %f translate\n" % (dx, dy)) - if (x, y) != im.size: - # EpsImagePlugin._save prints the image at (0,0,xsize,ysize) - sx = x / im.size[0] - sy = y / im.size[1] - self.fp.write(b"%f %f scale\n" % (sx, sy)) - EpsImagePlugin._save(im, self.fp, "", 0) - self.fp.write(b"\ngrestore\n") - - -# -------------------------------------------------------------------- -# PostScript driver - -# -# EDROFF.PS -- PostScript driver for Edroff 2 -# -# History: -# 94-01-25 fl: created (edroff 2.04) -# -# Copyright (c) Fredrik Lundh 1994. -# - - -EDROFF_PS = b"""\ -/S { show } bind def -/P { moveto show } bind def -/M { moveto } bind def -/X { 0 rmoveto } bind def -/Y { 0 exch rmoveto } bind def -/E { findfont - dup maxlength dict begin - { - 1 index /FID ne { def } { pop pop } ifelse - } forall - /Encoding exch def - dup /FontName exch def - currentdict end definefont pop -} bind def -/F { findfont exch scalefont dup setfont - [ exch /setfont cvx ] cvx bind def -} bind def -""" - -# -# VDI.PS -- PostScript driver for VDI meta commands -# -# History: -# 94-01-25 fl: created (edroff 2.04) -# -# Copyright (c) Fredrik Lundh 1994. 
-# - -VDI_PS = b"""\ -/Vm { moveto } bind def -/Va { newpath arcn stroke } bind def -/Vl { moveto lineto stroke } bind def -/Vc { newpath 0 360 arc closepath } bind def -/Vr { exch dup 0 rlineto - exch dup 0 exch rlineto - exch neg 0 rlineto - 0 exch neg rlineto - setgray fill } bind def -/Tm matrix def -/Ve { Tm currentmatrix pop - translate scale newpath 0 0 .5 0 360 arc closepath - Tm setmatrix -} bind def -/Vf { currentgray exch setgray fill setgray } bind def -""" - -# -# ERROR.PS -- Error handler -# -# History: -# 89-11-21 fl: created (pslist 1.10) -# - -ERROR_PS = b"""\ -/landscape false def -/errorBUF 200 string def -/errorNL { currentpoint 10 sub exch pop 72 exch moveto } def -errordict begin /handleerror { - initmatrix /Courier findfont 10 scalefont setfont - newpath 72 720 moveto $error begin /newerror false def - (PostScript Error) show errorNL errorNL - (Error: ) show - /errorname load errorBUF cvs show errorNL errorNL - (Command: ) show - /command load dup type /stringtype ne { errorBUF cvs } if show - errorNL errorNL - (VMstatus: ) show - vmstatus errorBUF cvs show ( bytes available, ) show - errorBUF cvs show ( bytes used at level ) show - errorBUF cvs show errorNL errorNL - (Operand stack: ) show errorNL /ostack load { - dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL - } forall errorNL - (Execution stack: ) show errorNL /estack load { - dup type /stringtype ne { errorBUF cvs } if 72 0 rmoveto show errorNL - } forall - end showpage -} def end -""" diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PaletteFile.py b/pptx-env/lib/python3.12/site-packages/PIL/PaletteFile.py deleted file mode 100644 index 2a26e5d4..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PaletteFile.py +++ /dev/null @@ -1,54 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read simple, teragon-style palette files -# -# History: -# 97-08-23 fl Created -# -# Copyright (c) Secret Labs AB 1997.
-# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from typing import IO - -from ._binary import o8 - - -class PaletteFile: - """File handler for Teragon-style palette files.""" - - rawmode = "RGB" - - def __init__(self, fp: IO[bytes]) -> None: - palette = [o8(i) * 3 for i in range(256)] - - while True: - s = fp.readline() - - if not s: - break - if s.startswith(b"#"): - continue - if len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = [int(x) for x in s.split()] - try: - [i, r, g, b] = v - except ValueError: - [i, r] = v - g = b = r - - if 0 <= i <= 255: - palette[i] = o8(r) + o8(g) + o8(b) - - self.palette = b"".join(palette) - - def getpalette(self) -> tuple[bytes, str]: - return self.palette, self.rawmode diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PalmImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PalmImagePlugin.py deleted file mode 100644 index 15f71290..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PalmImagePlugin.py +++ /dev/null @@ -1,217 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# - -## -# Image plugin for Palm pixmap images (output only). -## -from __future__ import annotations - -from typing import IO - -from . 
import Image, ImageFile -from ._binary import o8 -from ._binary import o16be as o16b - -# fmt: off -_Palm8BitColormapValues = ( - (255, 255, 255), (255, 204, 255), (255, 153, 255), (255, 102, 255), - (255, 51, 255), (255, 0, 255), (255, 255, 204), (255, 204, 204), - (255, 153, 204), (255, 102, 204), (255, 51, 204), (255, 0, 204), - (255, 255, 153), (255, 204, 153), (255, 153, 153), (255, 102, 153), - (255, 51, 153), (255, 0, 153), (204, 255, 255), (204, 204, 255), - (204, 153, 255), (204, 102, 255), (204, 51, 255), (204, 0, 255), - (204, 255, 204), (204, 204, 204), (204, 153, 204), (204, 102, 204), - (204, 51, 204), (204, 0, 204), (204, 255, 153), (204, 204, 153), - (204, 153, 153), (204, 102, 153), (204, 51, 153), (204, 0, 153), - (153, 255, 255), (153, 204, 255), (153, 153, 255), (153, 102, 255), - (153, 51, 255), (153, 0, 255), (153, 255, 204), (153, 204, 204), - (153, 153, 204), (153, 102, 204), (153, 51, 204), (153, 0, 204), - (153, 255, 153), (153, 204, 153), (153, 153, 153), (153, 102, 153), - (153, 51, 153), (153, 0, 153), (102, 255, 255), (102, 204, 255), - (102, 153, 255), (102, 102, 255), (102, 51, 255), (102, 0, 255), - (102, 255, 204), (102, 204, 204), (102, 153, 204), (102, 102, 204), - (102, 51, 204), (102, 0, 204), (102, 255, 153), (102, 204, 153), - (102, 153, 153), (102, 102, 153), (102, 51, 153), (102, 0, 153), - (51, 255, 255), (51, 204, 255), (51, 153, 255), (51, 102, 255), - (51, 51, 255), (51, 0, 255), (51, 255, 204), (51, 204, 204), - (51, 153, 204), (51, 102, 204), (51, 51, 204), (51, 0, 204), - (51, 255, 153), (51, 204, 153), (51, 153, 153), (51, 102, 153), - (51, 51, 153), (51, 0, 153), (0, 255, 255), (0, 204, 255), - (0, 153, 255), (0, 102, 255), (0, 51, 255), (0, 0, 255), - (0, 255, 204), (0, 204, 204), (0, 153, 204), (0, 102, 204), - (0, 51, 204), (0, 0, 204), (0, 255, 153), (0, 204, 153), - (0, 153, 153), (0, 102, 153), (0, 51, 153), (0, 0, 153), - (255, 255, 102), (255, 204, 102), (255, 153, 102), (255, 102, 102), - (255, 51, 102), 
(255, 0, 102), (255, 255, 51), (255, 204, 51), - (255, 153, 51), (255, 102, 51), (255, 51, 51), (255, 0, 51), - (255, 255, 0), (255, 204, 0), (255, 153, 0), (255, 102, 0), - (255, 51, 0), (255, 0, 0), (204, 255, 102), (204, 204, 102), - (204, 153, 102), (204, 102, 102), (204, 51, 102), (204, 0, 102), - (204, 255, 51), (204, 204, 51), (204, 153, 51), (204, 102, 51), - (204, 51, 51), (204, 0, 51), (204, 255, 0), (204, 204, 0), - (204, 153, 0), (204, 102, 0), (204, 51, 0), (204, 0, 0), - (153, 255, 102), (153, 204, 102), (153, 153, 102), (153, 102, 102), - (153, 51, 102), (153, 0, 102), (153, 255, 51), (153, 204, 51), - (153, 153, 51), (153, 102, 51), (153, 51, 51), (153, 0, 51), - (153, 255, 0), (153, 204, 0), (153, 153, 0), (153, 102, 0), - (153, 51, 0), (153, 0, 0), (102, 255, 102), (102, 204, 102), - (102, 153, 102), (102, 102, 102), (102, 51, 102), (102, 0, 102), - (102, 255, 51), (102, 204, 51), (102, 153, 51), (102, 102, 51), - (102, 51, 51), (102, 0, 51), (102, 255, 0), (102, 204, 0), - (102, 153, 0), (102, 102, 0), (102, 51, 0), (102, 0, 0), - (51, 255, 102), (51, 204, 102), (51, 153, 102), (51, 102, 102), - (51, 51, 102), (51, 0, 102), (51, 255, 51), (51, 204, 51), - (51, 153, 51), (51, 102, 51), (51, 51, 51), (51, 0, 51), - (51, 255, 0), (51, 204, 0), (51, 153, 0), (51, 102, 0), - (51, 51, 0), (51, 0, 0), (0, 255, 102), (0, 204, 102), - (0, 153, 102), (0, 102, 102), (0, 51, 102), (0, 0, 102), - (0, 255, 51), (0, 204, 51), (0, 153, 51), (0, 102, 51), - (0, 51, 51), (0, 0, 51), (0, 255, 0), (0, 204, 0), - (0, 153, 0), (0, 102, 0), (0, 51, 0), (17, 17, 17), - (34, 34, 34), (68, 68, 68), (85, 85, 85), (119, 119, 119), - (136, 136, 136), (170, 170, 170), (187, 187, 187), (221, 221, 221), - (238, 238, 238), (192, 192, 192), (128, 0, 0), (128, 0, 128), - (0, 128, 0), (0, 128, 128), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), 
(0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)) -# fmt: on - - -# so build a prototype image to be used for palette resampling -def build_prototype_image() -> Image.Image: - image = Image.new("L", (1, len(_Palm8BitColormapValues))) - image.putdata(list(range(len(_Palm8BitColormapValues)))) - palettedata: tuple[int, ...] = () - for colormapValue in _Palm8BitColormapValues: - palettedata += colormapValue - palettedata += (0, 0, 0) * (256 - len(_Palm8BitColormapValues)) - image.putpalette(palettedata) - return image - - -Palm8BitColormapImage = build_prototype_image() - -# OK, we now have in Palm8BitColormapImage, -# a "P"-mode image with the right palette -# -# -------------------------------------------------------------------- - -_FLAGS = {"custom-colormap": 0x4000, "is-compressed": 0x8000, "has-transparent": 0x2000} - -_COMPRESSION_TYPES = {"none": 0xFF, "rle": 0x01, "scanline": 0x00} - - -# -# -------------------------------------------------------------------- - -## -# (Internal) Image save plugin for the Palm format. - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode == "P": - rawmode = "P" - bpp = 8 - version = 1 - - elif im.mode == "L": - if im.encoderinfo.get("bpp") in (1, 2, 4): - # this is 8-bit grayscale, so we shift it to get the high-order bits, - # and invert it because - # Palm does grayscale from white (0) to black (1) - bpp = im.encoderinfo["bpp"] - maxval = (1 << bpp) - 1 - shift = 8 - bpp - im = im.point(lambda x: maxval - (x >> shift)) - elif im.info.get("bpp") in (1, 2, 4): - # here we assume that even though the inherent mode is 8-bit grayscale, - # only the lower bpp bits are significant. - # We invert them to match the Palm. 
- bpp = im.info["bpp"] - maxval = (1 << bpp) - 1 - im = im.point(lambda x: maxval - (x & maxval)) - else: - msg = f"cannot write mode {im.mode} as Palm" - raise OSError(msg) - - # we ignore the palette here - im._mode = "P" - rawmode = f"P;{bpp}" - version = 1 - - elif im.mode == "1": - # monochrome -- write it inverted, as is the Palm standard - rawmode = "1;I" - bpp = 1 - version = 0 - - else: - msg = f"cannot write mode {im.mode} as Palm" - raise OSError(msg) - - # - # make sure image data is available - im.load() - - # write header - - cols = im.size[0] - rows = im.size[1] - - rowbytes = int((cols + (16 // bpp - 1)) / (16 // bpp)) * 2 - transparent_index = 0 - compression_type = _COMPRESSION_TYPES["none"] - - flags = 0 - if im.mode == "P": - flags |= _FLAGS["custom-colormap"] - colormap = im.im.getpalette() - colors = len(colormap) // 3 - colormapsize = 4 * colors + 2 - else: - colormapsize = 0 - - if "offset" in im.info: - offset = (rowbytes * rows + 16 + 3 + colormapsize) // 4 - else: - offset = 0 - - fp.write(o16b(cols) + o16b(rows) + o16b(rowbytes) + o16b(flags)) - fp.write(o8(bpp)) - fp.write(o8(version)) - fp.write(o16b(offset)) - fp.write(o8(transparent_index)) - fp.write(o8(compression_type)) - fp.write(o16b(0)) # reserved by Palm - - # now write colormap if necessary - - if colormapsize: - fp.write(o16b(colors)) - for i in range(colors): - fp.write(o8(i)) - fp.write(colormap[3 * i : 3 * i + 3]) - - # now convert data to raw form - ImageFile._save( - im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, rowbytes, 1))] - ) - - if hasattr(fp, "flush"): - fp.flush() - - -# -# -------------------------------------------------------------------- - -Image.register_save("Palm", _save) - -Image.register_extension("Palm", ".palm") - -Image.register_mime("Palm", "image/palm") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PcdImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PcdImagePlugin.py deleted file mode 100644 index 
296f3775..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PcdImagePlugin.py +++ /dev/null @@ -1,68 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PCD file handling -# -# History: -# 96-05-10 fl Created -# 96-05-27 fl Added draft mode (128x192, 256x384) -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image, ImageFile - -## -# Image plugin for PhotoCD images. This plugin only reads the 768x512 -# image from the file; higher resolutions are encoded in a proprietary -# encoding. - - -class PcdImageFile(ImageFile.ImageFile): - format = "PCD" - format_description = "Kodak PhotoCD" - - def _open(self) -> None: - # rough - assert self.fp is not None - - self.fp.seek(2048) - s = self.fp.read(1539) - - if not s.startswith(b"PCD_"): - msg = "not a PCD file" - raise SyntaxError(msg) - - orientation = s[1538] & 3 - self.tile_post_rotate = None - if orientation == 1: - self.tile_post_rotate = 90 - elif orientation == 3: - self.tile_post_rotate = 270 - - self._mode = "RGB" - self._size = (512, 768) if orientation in (1, 3) else (768, 512) - self.tile = [ImageFile._Tile("pcd", (0, 0, 768, 512), 96 * 2048)] - - def load_prepare(self) -> None: - if self._im is None and self.tile_post_rotate: - self.im = Image.core.new(self.mode, (768, 512)) - ImageFile.ImageFile.load_prepare(self) - - def load_end(self) -> None: - if self.tile_post_rotate: - # Handle rotated PCDs - self.im = self.rotate(self.tile_post_rotate, expand=True).im - - -# -# registry - -Image.register_open(PcdImageFile.format, PcdImageFile) - -Image.register_extension(PcdImageFile.format, ".pcd") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PcfFontFile.py b/pptx-env/lib/python3.12/site-packages/PIL/PcfFontFile.py deleted file mode 100644 index a00e9b91..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PcfFontFile.py +++ /dev/null @@ 
-1,258 +0,0 @@ -# -# THIS IS WORK IN PROGRESS -# -# The Python Imaging Library -# $Id$ -# -# portable compiled font file parser -# -# history: -# 1997-08-19 fl created -# 2003-09-13 fl fixed loading of unicode fonts -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io - -from . import FontFile, Image -from ._binary import i8 -from ._binary import i16be as b16 -from ._binary import i16le as l16 -from ._binary import i32be as b32 -from ._binary import i32le as l32 - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Callable - from typing import BinaryIO - -# -------------------------------------------------------------------- -# declarations - -PCF_MAGIC = 0x70636601 # "\x01fcp" - -PCF_PROPERTIES = 1 << 0 -PCF_ACCELERATORS = 1 << 1 -PCF_METRICS = 1 << 2 -PCF_BITMAPS = 1 << 3 -PCF_INK_METRICS = 1 << 4 -PCF_BDF_ENCODINGS = 1 << 5 -PCF_SWIDTHS = 1 << 6 -PCF_GLYPH_NAMES = 1 << 7 -PCF_BDF_ACCELERATORS = 1 << 8 - -BYTES_PER_ROW: list[Callable[[int], int]] = [ - lambda bits: ((bits + 7) >> 3), - lambda bits: ((bits + 15) >> 3) & ~1, - lambda bits: ((bits + 31) >> 3) & ~3, - lambda bits: ((bits + 63) >> 3) & ~7, -] - - -def sz(s: bytes, o: int) -> bytes: - return s[o : s.index(b"\0", o)] - - -class PcfFontFile(FontFile.FontFile): - """Font file plugin for the X11 PCF format.""" - - name = "name" - - def __init__(self, fp: BinaryIO, charset_encoding: str = "iso8859-1"): - self.charset_encoding = charset_encoding - - magic = l32(fp.read(4)) - if magic != PCF_MAGIC: - msg = "not a PCF file" - raise SyntaxError(msg) - - super().__init__() - - count = l32(fp.read(4)) - self.toc = {} - for i in range(count): - type = l32(fp.read(4)) - self.toc[type] = l32(fp.read(4)), l32(fp.read(4)), l32(fp.read(4)) - - self.fp = fp - - self.info = self._load_properties() - - metrics = self._load_metrics() - 
bitmaps = self._load_bitmaps(metrics) - encoding = self._load_encoding() - - # - # create glyph structure - - for ch, ix in enumerate(encoding): - if ix is not None: - ( - xsize, - ysize, - left, - right, - width, - ascent, - descent, - attributes, - ) = metrics[ix] - self.glyph[ch] = ( - (width, 0), - (left, descent - ysize, xsize + left, descent), - (0, 0, xsize, ysize), - bitmaps[ix], - ) - - def _getformat( - self, tag: int - ) -> tuple[BinaryIO, int, Callable[[bytes], int], Callable[[bytes], int]]: - format, size, offset = self.toc[tag] - - fp = self.fp - fp.seek(offset) - - format = l32(fp.read(4)) - - if format & 4: - i16, i32 = b16, b32 - else: - i16, i32 = l16, l32 - - return fp, format, i16, i32 - - def _load_properties(self) -> dict[bytes, bytes | int]: - # - # font properties - - properties = {} - - fp, format, i16, i32 = self._getformat(PCF_PROPERTIES) - - nprops = i32(fp.read(4)) - - # read property description - p = [(i32(fp.read(4)), i8(fp.read(1)), i32(fp.read(4))) for _ in range(nprops)] - - if nprops & 3: - fp.seek(4 - (nprops & 3), io.SEEK_CUR) # pad - - data = fp.read(i32(fp.read(4))) - - for k, s, v in p: - property_value: bytes | int = sz(data, v) if s else v - properties[sz(data, k)] = property_value - - return properties - - def _load_metrics(self) -> list[tuple[int, int, int, int, int, int, int, int]]: - # - # font metrics - - metrics: list[tuple[int, int, int, int, int, int, int, int]] = [] - - fp, format, i16, i32 = self._getformat(PCF_METRICS) - - append = metrics.append - - if (format & 0xFF00) == 0x100: - # "compressed" metrics - for i in range(i16(fp.read(2))): - left = i8(fp.read(1)) - 128 - right = i8(fp.read(1)) - 128 - width = i8(fp.read(1)) - 128 - ascent = i8(fp.read(1)) - 128 - descent = i8(fp.read(1)) - 128 - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, 0)) - - else: - # "jumbo" metrics - for i in range(i32(fp.read(4))): - left = i16(fp.read(2)) - right = 
i16(fp.read(2)) - width = i16(fp.read(2)) - ascent = i16(fp.read(2)) - descent = i16(fp.read(2)) - attributes = i16(fp.read(2)) - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, attributes)) - - return metrics - - def _load_bitmaps( - self, metrics: list[tuple[int, int, int, int, int, int, int, int]] - ) -> list[Image.Image]: - # - # bitmap data - - fp, format, i16, i32 = self._getformat(PCF_BITMAPS) - - nbitmaps = i32(fp.read(4)) - - if nbitmaps != len(metrics): - msg = "Wrong number of bitmaps" - raise OSError(msg) - - offsets = [i32(fp.read(4)) for _ in range(nbitmaps)] - - bitmap_sizes = [i32(fp.read(4)) for _ in range(4)] - - # byteorder = format & 4 # non-zero => MSB - bitorder = format & 8 # non-zero => MSB - padindex = format & 3 - - bitmapsize = bitmap_sizes[padindex] - offsets.append(bitmapsize) - - data = fp.read(bitmapsize) - - pad = BYTES_PER_ROW[padindex] - mode = "1;R" - if bitorder: - mode = "1" - - bitmaps = [] - for i in range(nbitmaps): - xsize, ysize = metrics[i][:2] - b, e = offsets[i : i + 2] - bitmaps.append( - Image.frombytes("1", (xsize, ysize), data[b:e], "raw", mode, pad(xsize)) - ) - - return bitmaps - - def _load_encoding(self) -> list[int | None]: - fp, format, i16, i32 = self._getformat(PCF_BDF_ENCODINGS) - - first_col, last_col = i16(fp.read(2)), i16(fp.read(2)) - first_row, last_row = i16(fp.read(2)), i16(fp.read(2)) - - i16(fp.read(2)) # default - - nencoding = (last_col - first_col + 1) * (last_row - first_row + 1) - - # map character code to bitmap index - encoding: list[int | None] = [None] * min(256, nencoding) - - encoding_offsets = [i16(fp.read(2)) for _ in range(nencoding)] - - for i in range(first_col, len(encoding)): - try: - encoding_offset = encoding_offsets[ - ord(bytearray([i]).decode(self.charset_encoding)) - ] - if encoding_offset != 0xFFFF: - encoding[i] = encoding_offset - except UnicodeDecodeError: - # character is not supported in selected encoding - 
pass - - return encoding diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PcxImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PcxImagePlugin.py deleted file mode 100644 index 6b16d538..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PcxImagePlugin.py +++ /dev/null @@ -1,228 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PCX file handling -# -# This format was originally used by ZSoft's popular PaintBrush -# program for the IBM PC. It is also supported by many MS-DOS and -# Windows applications, including the Windows PaintBrush program in -# Windows 3. -# -# history: -# 1995-09-01 fl Created -# 1996-05-20 fl Fixed RGB support -# 1997-01-03 fl Fixed 2-bit and 4-bit support -# 1999-02-03 fl Fixed 8-bit support (broken in 1.0b1) -# 1999-02-07 fl Added write support -# 2002-06-09 fl Made 2-bit and 4-bit support a bit more robust -# 2002-07-30 fl Seek from to current position, not beginning of file -# 2003-06-03 fl Extract DPI settings (info["dpi"]) -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1995-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io -import logging -from typing import IO - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - -logger = logging.getLogger(__name__) - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 2 and prefix[0] == 10 and prefix[1] in [0, 2, 3, 5] - - -## -# Image plugin for Paintbrush images. 
- - -class PcxImageFile(ImageFile.ImageFile): - format = "PCX" - format_description = "Paintbrush" - - def _open(self) -> None: - # header - assert self.fp is not None - - s = self.fp.read(68) - if not _accept(s): - msg = "not a PCX file" - raise SyntaxError(msg) - - # image - bbox = i16(s, 4), i16(s, 6), i16(s, 8) + 1, i16(s, 10) + 1 - if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]: - msg = "bad PCX image size" - raise SyntaxError(msg) - logger.debug("BBox: %s %s %s %s", *bbox) - - offset = self.fp.tell() + 60 - - # format - version = s[1] - bits = s[3] - planes = s[65] - provided_stride = i16(s, 66) - logger.debug( - "PCX version %s, bits %s, planes %s, stride %s", - version, - bits, - planes, - provided_stride, - ) - - self.info["dpi"] = i16(s, 12), i16(s, 14) - - if bits == 1 and planes == 1: - mode = rawmode = "1" - - elif bits == 1 and planes in (2, 4): - mode = "P" - rawmode = f"P;{planes}L" - self.palette = ImagePalette.raw("RGB", s[16:64]) - - elif version == 5 and bits == 8 and planes == 1: - mode = rawmode = "L" - # FIXME: hey, this doesn't work with the incremental loader !!! - self.fp.seek(-769, io.SEEK_END) - s = self.fp.read(769) - if len(s) == 769 and s[0] == 12: - # check if the palette is linear grayscale - for i in range(256): - if s[i * 3 + 1 : i * 3 + 4] != o8(i) * 3: - mode = rawmode = "P" - break - if mode == "P": - self.palette = ImagePalette.raw("RGB", s[1:]) - - elif version == 5 and bits == 8 and planes == 3: - mode = "RGB" - rawmode = "RGB;L" - - else: - msg = "unknown PCX mode" - raise OSError(msg) - - self._mode = mode - self._size = bbox[2] - bbox[0], bbox[3] - bbox[1] - - # Don't trust the passed in stride. - # Calculate the approximate position for ourselves. 
- # CVE-2020-35653 - stride = (self._size[0] * bits + 7) // 8 - - # While the specification states that this must be even, - # not all images follow this - if provided_stride != stride: - stride += stride % 2 - - bbox = (0, 0) + self.size - logger.debug("size: %sx%s", *self.size) - - self.tile = [ImageFile._Tile("pcx", bbox, offset, (rawmode, planes * stride))] - - -# -------------------------------------------------------------------- -# save PCX files - - -SAVE = { - # mode: (version, bits, planes, raw mode) - "1": (2, 1, 1, "1"), - "L": (5, 8, 1, "L"), - "P": (5, 8, 1, "P"), - "RGB": (5, 8, 3, "RGB;L"), -} - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - try: - version, bits, planes, rawmode = SAVE[im.mode] - except KeyError as e: - msg = f"Cannot save {im.mode} images as PCX" - raise ValueError(msg) from e - - # bytes per plane - stride = (im.size[0] * bits + 7) // 8 - # stride should be even - stride += stride % 2 - # Stride needs to be kept in sync with the PcxEncode.c version. - # Ideally it should be passed in in the state, but the bytes value - # gets overwritten. - - logger.debug( - "PcxImagePlugin._save: xwidth: %d, bits: %d, stride: %d", - im.size[0], - bits, - stride, - ) - - # under windows, we could determine the current screen size with - # "Image.core.display_mode()[1]", but I think that's overkill... 
- - screen = im.size - - dpi = 100, 100 - - # PCX header - fp.write( - o8(10) - + o8(version) - + o8(1) - + o8(bits) - + o16(0) - + o16(0) - + o16(im.size[0] - 1) - + o16(im.size[1] - 1) - + o16(dpi[0]) - + o16(dpi[1]) - + b"\0" * 24 - + b"\xff" * 24 - + b"\0" - + o8(planes) - + o16(stride) - + o16(1) - + o16(screen[0]) - + o16(screen[1]) - + b"\0" * 54 - ) - - assert fp.tell() == 128 - - ImageFile._save( - im, fp, [ImageFile._Tile("pcx", (0, 0) + im.size, 0, (rawmode, bits * planes))] - ) - - if im.mode == "P": - # colour palette - fp.write(o8(12)) - palette = im.im.getpalette("RGB", "RGB") - palette += b"\x00" * (768 - len(palette)) - fp.write(palette) # 768 bytes - elif im.mode == "L": - # grayscale palette - fp.write(o8(12)) - for i in range(256): - fp.write(o8(i) * 3) - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PcxImageFile.format, PcxImageFile, _accept) -Image.register_save(PcxImageFile.format, _save) - -Image.register_extension(PcxImageFile.format, ".pcx") - -Image.register_mime(PcxImageFile.format, "image/x-pcx") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PdfImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PdfImagePlugin.py deleted file mode 100644 index 5594c7e0..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PdfImagePlugin.py +++ /dev/null @@ -1,311 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PDF (Acrobat) file handling -# -# History: -# 1996-07-16 fl Created -# 1997-01-18 fl Fixed header -# 2004-02-21 fl Fixes for 1/L/CMYK images, etc. -# 2004-02-24 fl Fixes for 1 and P images. -# -# Copyright (c) 1997-2004 by Secret Labs AB. All rights reserved. -# Copyright (c) 1996-1997 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -## -# Image plugin for PDF images (output only). -## -from __future__ import annotations - -import io -import math -import os -import time -from typing import IO, Any - -from . 
import Image, ImageFile, ImageSequence, PdfParser, features - -# -# -------------------------------------------------------------------- - -# object ids: -# 1. catalogue -# 2. pages -# 3. image -# 4. page -# 5. page contents - - -def _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - _save(im, fp, filename, save_all=True) - - -## -# (Internal) Image save plugin for the PDF format. - - -def _write_image( - im: Image.Image, - filename: str | bytes, - existing_pdf: PdfParser.PdfParser, - image_refs: list[PdfParser.IndirectReference], -) -> tuple[PdfParser.IndirectReference, str]: - # FIXME: Should replace ASCIIHexDecode with RunLengthDecode - # (packbits) or LZWDecode (tiff/lzw compression). Note that - # PDF 1.2 also supports Flatedecode (zip compression). - - params = None - decode = None - - # - # Get image characteristics - - width, height = im.size - - dict_obj: dict[str, Any] = {"BitsPerComponent": 8} - if im.mode == "1": - if features.check("libtiff"): - decode_filter = "CCITTFaxDecode" - dict_obj["BitsPerComponent"] = 1 - params = PdfParser.PdfArray( - [ - PdfParser.PdfDict( - { - "K": -1, - "BlackIs1": True, - "Columns": width, - "Rows": height, - } - ) - ] - ) - else: - decode_filter = "DCTDecode" - dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray") - procset = "ImageB" # grayscale - elif im.mode == "L": - decode_filter = "DCTDecode" - # params = f"<< /Predictor 15 /Columns {width-2} >>" - dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceGray") - procset = "ImageB" # grayscale - elif im.mode == "LA": - decode_filter = "JPXDecode" - # params = f"<< /Predictor 15 /Columns {width-2} >>" - procset = "ImageB" # grayscale - dict_obj["SMaskInData"] = 1 - elif im.mode == "P": - decode_filter = "ASCIIHexDecode" - palette = im.getpalette() - assert palette is not None - dict_obj["ColorSpace"] = [ - PdfParser.PdfName("Indexed"), - PdfParser.PdfName("DeviceRGB"), - len(palette) // 3 - 1, - PdfParser.PdfBinary(palette), - ] - procset = 
"ImageI" # indexed color - - if "transparency" in im.info: - smask = im.convert("LA").getchannel("A") - smask.encoderinfo = {} - - image_ref = _write_image(smask, filename, existing_pdf, image_refs)[0] - dict_obj["SMask"] = image_ref - elif im.mode == "RGB": - decode_filter = "DCTDecode" - dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceRGB") - procset = "ImageC" # color images - elif im.mode == "RGBA": - decode_filter = "JPXDecode" - procset = "ImageC" # color images - dict_obj["SMaskInData"] = 1 - elif im.mode == "CMYK": - decode_filter = "DCTDecode" - dict_obj["ColorSpace"] = PdfParser.PdfName("DeviceCMYK") - procset = "ImageC" # color images - decode = [1, 0, 1, 0, 1, 0, 1, 0] - else: - msg = f"cannot save mode {im.mode}" - raise ValueError(msg) - - # - # image - - op = io.BytesIO() - - if decode_filter == "ASCIIHexDecode": - ImageFile._save(im, op, [ImageFile._Tile("hex", (0, 0) + im.size, 0, im.mode)]) - elif decode_filter == "CCITTFaxDecode": - im.save( - op, - "TIFF", - compression="group4", - # use a single strip - strip_size=math.ceil(width / 8) * height, - ) - elif decode_filter == "DCTDecode": - Image.SAVE["JPEG"](im, op, filename) - elif decode_filter == "JPXDecode": - del dict_obj["BitsPerComponent"] - Image.SAVE["JPEG2000"](im, op, filename) - else: - msg = f"unsupported PDF filter ({decode_filter})" - raise ValueError(msg) - - stream = op.getvalue() - filter: PdfParser.PdfArray | PdfParser.PdfName - if decode_filter == "CCITTFaxDecode": - stream = stream[8:] - filter = PdfParser.PdfArray([PdfParser.PdfName(decode_filter)]) - else: - filter = PdfParser.PdfName(decode_filter) - - image_ref = image_refs.pop(0) - existing_pdf.write_obj( - image_ref, - stream=stream, - Type=PdfParser.PdfName("XObject"), - Subtype=PdfParser.PdfName("Image"), - Width=width, # * 72.0 / x_resolution, - Height=height, # * 72.0 / y_resolution, - Filter=filter, - Decode=decode, - DecodeParms=params, - **dict_obj, - ) - - return image_ref, procset - - -def _save( - im: 
Image.Image, fp: IO[bytes], filename: str | bytes, save_all: bool = False -) -> None: - is_appending = im.encoderinfo.get("append", False) - filename_str = filename.decode() if isinstance(filename, bytes) else filename - if is_appending: - existing_pdf = PdfParser.PdfParser(f=fp, filename=filename_str, mode="r+b") - else: - existing_pdf = PdfParser.PdfParser(f=fp, filename=filename_str, mode="w+b") - - dpi = im.encoderinfo.get("dpi") - if dpi: - x_resolution = dpi[0] - y_resolution = dpi[1] - else: - x_resolution = y_resolution = im.encoderinfo.get("resolution", 72.0) - - info = { - "title": ( - None if is_appending else os.path.splitext(os.path.basename(filename))[0] - ), - "author": None, - "subject": None, - "keywords": None, - "creator": None, - "producer": None, - "creationDate": None if is_appending else time.gmtime(), - "modDate": None if is_appending else time.gmtime(), - } - for k, default in info.items(): - v = im.encoderinfo.get(k) if k in im.encoderinfo else default - if v: - existing_pdf.info[k[0].upper() + k[1:]] = v - - # - # make sure image data is available - im.load() - - existing_pdf.start_writing() - existing_pdf.write_header() - existing_pdf.write_comment("created by Pillow PDF driver") - - # - # pages - ims = [im] - if save_all: - append_images = im.encoderinfo.get("append_images", []) - for append_im in append_images: - append_im.encoderinfo = im.encoderinfo.copy() - ims.append(append_im) - number_of_pages = 0 - image_refs = [] - page_refs = [] - contents_refs = [] - for im in ims: - im_number_of_pages = 1 - if save_all: - im_number_of_pages = getattr(im, "n_frames", 1) - number_of_pages += im_number_of_pages - for i in range(im_number_of_pages): - image_refs.append(existing_pdf.next_object_id(0)) - if im.mode == "P" and "transparency" in im.info: - image_refs.append(existing_pdf.next_object_id(0)) - - page_refs.append(existing_pdf.next_object_id(0)) - contents_refs.append(existing_pdf.next_object_id(0)) - 
existing_pdf.pages.append(page_refs[-1]) - - # - # catalog and list of pages - existing_pdf.write_catalog() - - page_number = 0 - for im_sequence in ims: - im_pages: ImageSequence.Iterator | list[Image.Image] = ( - ImageSequence.Iterator(im_sequence) if save_all else [im_sequence] - ) - for im in im_pages: - image_ref, procset = _write_image(im, filename, existing_pdf, image_refs) - - # - # page - - existing_pdf.write_page( - page_refs[page_number], - Resources=PdfParser.PdfDict( - ProcSet=[PdfParser.PdfName("PDF"), PdfParser.PdfName(procset)], - XObject=PdfParser.PdfDict(image=image_ref), - ), - MediaBox=[ - 0, - 0, - im.width * 72.0 / x_resolution, - im.height * 72.0 / y_resolution, - ], - Contents=contents_refs[page_number], - ) - - # - # page contents - - page_contents = b"q %f 0 0 %f 0 0 cm /image Do Q\n" % ( - im.width * 72.0 / x_resolution, - im.height * 72.0 / y_resolution, - ) - - existing_pdf.write_obj(contents_refs[page_number], stream=page_contents) - - page_number += 1 - - # - # trailer - existing_pdf.write_xref_and_trailer() - if hasattr(fp, "flush"): - fp.flush() - existing_pdf.close() - - -# -# -------------------------------------------------------------------- - - -Image.register_save("PDF", _save) -Image.register_save_all("PDF", _save_all) - -Image.register_extension("PDF", ".pdf") - -Image.register_mime("PDF", "application/pdf") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PdfParser.py b/pptx-env/lib/python3.12/site-packages/PIL/PdfParser.py deleted file mode 100644 index 2c903146..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PdfParser.py +++ /dev/null @@ -1,1075 +0,0 @@ -from __future__ import annotations - -import calendar -import codecs -import collections -import mmap -import os -import re -import time -import zlib -from typing import Any, NamedTuple - -TYPE_CHECKING = False -if TYPE_CHECKING: - from typing import IO - - _DictBase = collections.UserDict[str | bytes, Any] -else: - _DictBase = collections.UserDict - - -# 
see 7.9.2.2 Text String Type on page 86 and D.3 PDFDocEncoding Character Set -# on page 656 -def encode_text(s: str) -> bytes: - return codecs.BOM_UTF16_BE + s.encode("utf_16_be") - - -PDFDocEncoding = { - 0x16: "\u0017", - 0x18: "\u02d8", - 0x19: "\u02c7", - 0x1A: "\u02c6", - 0x1B: "\u02d9", - 0x1C: "\u02dd", - 0x1D: "\u02db", - 0x1E: "\u02da", - 0x1F: "\u02dc", - 0x80: "\u2022", - 0x81: "\u2020", - 0x82: "\u2021", - 0x83: "\u2026", - 0x84: "\u2014", - 0x85: "\u2013", - 0x86: "\u0192", - 0x87: "\u2044", - 0x88: "\u2039", - 0x89: "\u203a", - 0x8A: "\u2212", - 0x8B: "\u2030", - 0x8C: "\u201e", - 0x8D: "\u201c", - 0x8E: "\u201d", - 0x8F: "\u2018", - 0x90: "\u2019", - 0x91: "\u201a", - 0x92: "\u2122", - 0x93: "\ufb01", - 0x94: "\ufb02", - 0x95: "\u0141", - 0x96: "\u0152", - 0x97: "\u0160", - 0x98: "\u0178", - 0x99: "\u017d", - 0x9A: "\u0131", - 0x9B: "\u0142", - 0x9C: "\u0153", - 0x9D: "\u0161", - 0x9E: "\u017e", - 0xA0: "\u20ac", -} - - -def decode_text(b: bytes) -> str: - if b[: len(codecs.BOM_UTF16_BE)] == codecs.BOM_UTF16_BE: - return b[len(codecs.BOM_UTF16_BE) :].decode("utf_16_be") - else: - return "".join(PDFDocEncoding.get(byte, chr(byte)) for byte in b) - - -class PdfFormatError(RuntimeError): - """An error that probably indicates a syntactic or semantic error in the - PDF file structure""" - - pass - - -def check_format_condition(condition: bool, error_message: str) -> None: - if not condition: - raise PdfFormatError(error_message) - - -class IndirectReferenceTuple(NamedTuple): - object_id: int - generation: int - - -class IndirectReference(IndirectReferenceTuple): - def __str__(self) -> str: - return f"{self.object_id} {self.generation} R" - - def __bytes__(self) -> bytes: - return self.__str__().encode("us-ascii") - - def __eq__(self, other: object) -> bool: - if self.__class__ is not other.__class__: - return False - assert isinstance(other, IndirectReference) - return other.object_id == self.object_id and other.generation == self.generation - - def 
__ne__(self, other: object) -> bool: - return not (self == other) - - def __hash__(self) -> int: - return hash((self.object_id, self.generation)) - - -class IndirectObjectDef(IndirectReference): - def __str__(self) -> str: - return f"{self.object_id} {self.generation} obj" - - -class XrefTable: - def __init__(self) -> None: - self.existing_entries: dict[int, tuple[int, int]] = ( - {} - ) # object ID => (offset, generation) - self.new_entries: dict[int, tuple[int, int]] = ( - {} - ) # object ID => (offset, generation) - self.deleted_entries = {0: 65536} # object ID => generation - self.reading_finished = False - - def __setitem__(self, key: int, value: tuple[int, int]) -> None: - if self.reading_finished: - self.new_entries[key] = value - else: - self.existing_entries[key] = value - if key in self.deleted_entries: - del self.deleted_entries[key] - - def __getitem__(self, key: int) -> tuple[int, int]: - try: - return self.new_entries[key] - except KeyError: - return self.existing_entries[key] - - def __delitem__(self, key: int) -> None: - if key in self.new_entries: - generation = self.new_entries[key][1] + 1 - del self.new_entries[key] - self.deleted_entries[key] = generation - elif key in self.existing_entries: - generation = self.existing_entries[key][1] + 1 - self.deleted_entries[key] = generation - elif key in self.deleted_entries: - generation = self.deleted_entries[key] - else: - msg = f"object ID {key} cannot be deleted because it doesn't exist" - raise IndexError(msg) - - def __contains__(self, key: int) -> bool: - return key in self.existing_entries or key in self.new_entries - - def __len__(self) -> int: - return len( - set(self.existing_entries.keys()) - | set(self.new_entries.keys()) - | set(self.deleted_entries.keys()) - ) - - def keys(self) -> set[int]: - return ( - set(self.existing_entries.keys()) - set(self.deleted_entries.keys()) - ) | set(self.new_entries.keys()) - - def write(self, f: IO[bytes]) -> int: - keys = 
sorted(set(self.new_entries.keys()) | set(self.deleted_entries.keys())) - deleted_keys = sorted(set(self.deleted_entries.keys())) - startxref = f.tell() - f.write(b"xref\n") - while keys: - # find a contiguous sequence of object IDs - prev: int | None = None - for index, key in enumerate(keys): - if prev is None or prev + 1 == key: - prev = key - else: - contiguous_keys = keys[:index] - keys = keys[index:] - break - else: - contiguous_keys = keys - keys = [] - f.write(b"%d %d\n" % (contiguous_keys[0], len(contiguous_keys))) - for object_id in contiguous_keys: - if object_id in self.new_entries: - f.write(b"%010d %05d n \n" % self.new_entries[object_id]) - else: - this_deleted_object_id = deleted_keys.pop(0) - check_format_condition( - object_id == this_deleted_object_id, - f"expected the next deleted object ID to be {object_id}, " - f"instead found {this_deleted_object_id}", - ) - try: - next_in_linked_list = deleted_keys[0] - except IndexError: - next_in_linked_list = 0 - f.write( - b"%010d %05d f \n" - % (next_in_linked_list, self.deleted_entries[object_id]) - ) - return startxref - - -class PdfName: - name: bytes - - def __init__(self, name: PdfName | bytes | str) -> None: - if isinstance(name, PdfName): - self.name = name.name - elif isinstance(name, bytes): - self.name = name - else: - self.name = name.encode("us-ascii") - - def name_as_str(self) -> str: - return self.name.decode("us-ascii") - - def __eq__(self, other: object) -> bool: - return ( - isinstance(other, PdfName) and other.name == self.name - ) or other == self.name - - def __hash__(self) -> int: - return hash(self.name) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({repr(self.name)})" - - @classmethod - def from_pdf_stream(cls, data: bytes) -> PdfName: - return cls(PdfParser.interpret_name(data)) - - allowed_chars = set(range(33, 127)) - {ord(c) for c in "#%/()<>[]{}"} - - def __bytes__(self) -> bytes: - result = bytearray(b"/") - for b in self.name: - if b in 
self.allowed_chars: - result.append(b) - else: - result.extend(b"#%02X" % b) - return bytes(result) - - -class PdfArray(list[Any]): - def __bytes__(self) -> bytes: - return b"[ " + b" ".join(pdf_repr(x) for x in self) + b" ]" - - -class PdfDict(_DictBase): - def __setattr__(self, key: str, value: Any) -> None: - if key == "data": - collections.UserDict.__setattr__(self, key, value) - else: - self[key.encode("us-ascii")] = value - - def __getattr__(self, key: str) -> str | time.struct_time: - try: - value = self[key.encode("us-ascii")] - except KeyError as e: - raise AttributeError(key) from e - if isinstance(value, bytes): - value = decode_text(value) - if key.endswith("Date"): - if value.startswith("D:"): - value = value[2:] - - relationship = "Z" - if len(value) > 17: - relationship = value[14] - offset = int(value[15:17]) * 60 - if len(value) > 20: - offset += int(value[18:20]) - - format = "%Y%m%d%H%M%S"[: len(value) - 2] - value = time.strptime(value[: len(format) + 2], format) - if relationship in ["+", "-"]: - offset *= 60 - if relationship == "+": - offset *= -1 - value = time.gmtime(calendar.timegm(value) + offset) - return value - - def __bytes__(self) -> bytes: - out = bytearray(b"<<") - for key, value in self.items(): - if value is None: - continue - value = pdf_repr(value) - out.extend(b"\n") - out.extend(bytes(PdfName(key))) - out.extend(b" ") - out.extend(value) - out.extend(b"\n>>") - return bytes(out) - - -class PdfBinary: - def __init__(self, data: list[int] | bytes) -> None: - self.data = data - - def __bytes__(self) -> bytes: - return b"<%s>" % b"".join(b"%02X" % b for b in self.data) - - -class PdfStream: - def __init__(self, dictionary: PdfDict, buf: bytes) -> None: - self.dictionary = dictionary - self.buf = buf - - def decode(self) -> bytes: - try: - filter = self.dictionary[b"Filter"] - except KeyError: - return self.buf - if filter == b"FlateDecode": - try: - expected_length = self.dictionary[b"DL"] - except KeyError: - expected_length = 
self.dictionary[b"Length"] - return zlib.decompress(self.buf, bufsize=int(expected_length)) - else: - msg = f"stream filter {repr(filter)} unknown/unsupported" - raise NotImplementedError(msg) - - -def pdf_repr(x: Any) -> bytes: - if x is True: - return b"true" - elif x is False: - return b"false" - elif x is None: - return b"null" - elif isinstance(x, (PdfName, PdfDict, PdfArray, PdfBinary)): - return bytes(x) - elif isinstance(x, (int, float)): - return str(x).encode("us-ascii") - elif isinstance(x, time.struct_time): - return b"(D:" + time.strftime("%Y%m%d%H%M%SZ", x).encode("us-ascii") + b")" - elif isinstance(x, dict): - return bytes(PdfDict(x)) - elif isinstance(x, list): - return bytes(PdfArray(x)) - elif isinstance(x, str): - return pdf_repr(encode_text(x)) - elif isinstance(x, bytes): - # XXX escape more chars? handle binary garbage - x = x.replace(b"\\", b"\\\\") - x = x.replace(b"(", b"\\(") - x = x.replace(b")", b"\\)") - return b"(" + x + b")" - else: - return bytes(x) - - -class PdfParser: - """Based on - https://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/PDF32000_2008.pdf - Supports PDF up to 1.4 - """ - - def __init__( - self, - filename: str | None = None, - f: IO[bytes] | None = None, - buf: bytes | bytearray | None = None, - start_offset: int = 0, - mode: str = "rb", - ) -> None: - if buf and f: - msg = "specify buf or f or filename, but not both buf and f" - raise RuntimeError(msg) - self.filename = filename - self.buf: bytes | bytearray | mmap.mmap | None = buf - self.f = f - self.start_offset = start_offset - self.should_close_buf = False - self.should_close_file = False - if filename is not None and f is None: - self.f = f = open(filename, mode) - self.should_close_file = True - if f is not None: - self.buf = self.get_buf_from_file(f) - self.should_close_buf = True - if not filename and hasattr(f, "name"): - self.filename = f.name - self.cached_objects: dict[IndirectReference, Any] = {} - self.root_ref: IndirectReference | None - 
self.info_ref: IndirectReference | None - self.pages_ref: IndirectReference | None - self.last_xref_section_offset: int | None - if self.buf: - self.read_pdf_info() - else: - self.file_size_total = self.file_size_this = 0 - self.root = PdfDict() - self.root_ref = None - self.info = PdfDict() - self.info_ref = None - self.page_tree_root = PdfDict() - self.pages: list[IndirectReference] = [] - self.orig_pages: list[IndirectReference] = [] - self.pages_ref = None - self.last_xref_section_offset = None - self.trailer_dict: dict[bytes, Any] = {} - self.xref_table = XrefTable() - self.xref_table.reading_finished = True - if f: - self.seek_end() - - def __enter__(self) -> PdfParser: - return self - - def __exit__(self, *args: object) -> None: - self.close() - - def start_writing(self) -> None: - self.close_buf() - self.seek_end() - - def close_buf(self) -> None: - if isinstance(self.buf, mmap.mmap): - self.buf.close() - self.buf = None - - def close(self) -> None: - if self.should_close_buf: - self.close_buf() - if self.f is not None and self.should_close_file: - self.f.close() - self.f = None - - def seek_end(self) -> None: - assert self.f is not None - self.f.seek(0, os.SEEK_END) - - def write_header(self) -> None: - assert self.f is not None - self.f.write(b"%PDF-1.4\n") - - def write_comment(self, s: str) -> None: - assert self.f is not None - self.f.write(f"% {s}\n".encode()) - - def write_catalog(self) -> IndirectReference: - assert self.f is not None - self.del_root() - self.root_ref = self.next_object_id(self.f.tell()) - self.pages_ref = self.next_object_id(0) - self.rewrite_pages() - self.write_obj(self.root_ref, Type=PdfName(b"Catalog"), Pages=self.pages_ref) - self.write_obj( - self.pages_ref, - Type=PdfName(b"Pages"), - Count=len(self.pages), - Kids=self.pages, - ) - return self.root_ref - - def rewrite_pages(self) -> None: - pages_tree_nodes_to_delete = [] - for i, page_ref in enumerate(self.orig_pages): - page_info = self.cached_objects[page_ref] - del 
self.xref_table[page_ref.object_id] - pages_tree_nodes_to_delete.append(page_info[PdfName(b"Parent")]) - if page_ref not in self.pages: - # the page has been deleted - continue - # make dict keys into strings for passing to write_page - stringified_page_info = {} - for key, value in page_info.items(): - # key should be a PdfName - stringified_page_info[key.name_as_str()] = value - stringified_page_info["Parent"] = self.pages_ref - new_page_ref = self.write_page(None, **stringified_page_info) - for j, cur_page_ref in enumerate(self.pages): - if cur_page_ref == page_ref: - # replace the page reference with the new one - self.pages[j] = new_page_ref - # delete redundant Pages tree nodes from xref table - for pages_tree_node_ref in pages_tree_nodes_to_delete: - while pages_tree_node_ref: - pages_tree_node = self.cached_objects[pages_tree_node_ref] - if pages_tree_node_ref.object_id in self.xref_table: - del self.xref_table[pages_tree_node_ref.object_id] - pages_tree_node_ref = pages_tree_node.get(b"Parent", None) - self.orig_pages = [] - - def write_xref_and_trailer( - self, new_root_ref: IndirectReference | None = None - ) -> None: - assert self.f is not None - if new_root_ref: - self.del_root() - self.root_ref = new_root_ref - if self.info: - self.info_ref = self.write_obj(None, self.info) - start_xref = self.xref_table.write(self.f) - num_entries = len(self.xref_table) - trailer_dict: dict[str | bytes, Any] = { - b"Root": self.root_ref, - b"Size": num_entries, - } - if self.last_xref_section_offset is not None: - trailer_dict[b"Prev"] = self.last_xref_section_offset - if self.info: - trailer_dict[b"Info"] = self.info_ref - self.last_xref_section_offset = start_xref - self.f.write( - b"trailer\n" - + bytes(PdfDict(trailer_dict)) - + b"\nstartxref\n%d\n%%%%EOF" % start_xref - ) - - def write_page( - self, ref: int | IndirectReference | None, *objs: Any, **dict_obj: Any - ) -> IndirectReference: - obj_ref = self.pages[ref] if isinstance(ref, int) else ref - if "Type" 
not in dict_obj: - dict_obj["Type"] = PdfName(b"Page") - if "Parent" not in dict_obj: - dict_obj["Parent"] = self.pages_ref - return self.write_obj(obj_ref, *objs, **dict_obj) - - def write_obj( - self, ref: IndirectReference | None, *objs: Any, **dict_obj: Any - ) -> IndirectReference: - assert self.f is not None - f = self.f - if ref is None: - ref = self.next_object_id(f.tell()) - else: - self.xref_table[ref.object_id] = (f.tell(), ref.generation) - f.write(bytes(IndirectObjectDef(*ref))) - stream = dict_obj.pop("stream", None) - if stream is not None: - dict_obj["Length"] = len(stream) - if dict_obj: - f.write(pdf_repr(dict_obj)) - for obj in objs: - f.write(pdf_repr(obj)) - if stream is not None: - f.write(b"stream\n") - f.write(stream) - f.write(b"\nendstream\n") - f.write(b"endobj\n") - return ref - - def del_root(self) -> None: - if self.root_ref is None: - return - del self.xref_table[self.root_ref.object_id] - del self.xref_table[self.root[b"Pages"].object_id] - - @staticmethod - def get_buf_from_file(f: IO[bytes]) -> bytes | mmap.mmap: - if hasattr(f, "getbuffer"): - return f.getbuffer() - elif hasattr(f, "getvalue"): - return f.getvalue() - else: - try: - return mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) - except ValueError: # cannot mmap an empty file - return b"" - - def read_pdf_info(self) -> None: - assert self.buf is not None - self.file_size_total = len(self.buf) - self.file_size_this = self.file_size_total - self.start_offset - self.read_trailer() - check_format_condition( - self.trailer_dict.get(b"Root") is not None, "Root is missing" - ) - self.root_ref = self.trailer_dict[b"Root"] - assert self.root_ref is not None - self.info_ref = self.trailer_dict.get(b"Info", None) - self.root = PdfDict(self.read_indirect(self.root_ref)) - if self.info_ref is None: - self.info = PdfDict() - else: - self.info = PdfDict(self.read_indirect(self.info_ref)) - check_format_condition(b"Type" in self.root, "/Type missing in Root") - check_format_condition( 
- self.root[b"Type"] == b"Catalog", "/Type in Root is not /Catalog" - ) - check_format_condition( - self.root.get(b"Pages") is not None, "/Pages missing in Root" - ) - check_format_condition( - isinstance(self.root[b"Pages"], IndirectReference), - "/Pages in Root is not an indirect reference", - ) - self.pages_ref = self.root[b"Pages"] - assert self.pages_ref is not None - self.page_tree_root = self.read_indirect(self.pages_ref) - self.pages = self.linearize_page_tree(self.page_tree_root) - # save the original list of page references - # in case the user modifies, adds or deletes some pages - # and we need to rewrite the pages and their list - self.orig_pages = self.pages[:] - - def next_object_id(self, offset: int | None = None) -> IndirectReference: - try: - # TODO: support reuse of deleted objects - reference = IndirectReference(max(self.xref_table.keys()) + 1, 0) - except ValueError: - reference = IndirectReference(1, 0) - if offset is not None: - self.xref_table[reference.object_id] = (offset, 0) - return reference - - delimiter = rb"[][()<>{}/%]" - delimiter_or_ws = rb"[][()<>{}/%\000\011\012\014\015\040]" - whitespace = rb"[\000\011\012\014\015\040]" - whitespace_or_hex = rb"[\000\011\012\014\015\0400-9a-fA-F]" - whitespace_optional = whitespace + b"*" - whitespace_mandatory = whitespace + b"+" - # No "\012" aka "\n" or "\015" aka "\r": - whitespace_optional_no_nl = rb"[\000\011\014\040]*" - newline_only = rb"[\r\n]+" - newline = whitespace_optional_no_nl + newline_only + whitespace_optional_no_nl - re_trailer_end = re.compile( - whitespace_mandatory - + rb"trailer" - + whitespace_optional - + rb"<<(.*>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional - + rb"$", - re.DOTALL, - ) - re_trailer_prev = re.compile( - whitespace_optional - + rb"trailer" - + whitespace_optional - + rb"<<(.*?>>)" - + newline - + rb"startxref" - + newline - + rb"([0-9]+)" - + newline - + rb"%%EOF" - + whitespace_optional, 
- re.DOTALL, - ) - - def read_trailer(self) -> None: - assert self.buf is not None - search_start_offset = len(self.buf) - 16384 - if search_start_offset < self.start_offset: - search_start_offset = self.start_offset - m = self.re_trailer_end.search(self.buf, search_start_offset) - check_format_condition(m is not None, "trailer end not found") - # make sure we found the LAST trailer - last_match = m - while m: - last_match = m - m = self.re_trailer_end.search(self.buf, m.start() + 16) - if not m: - m = last_match - assert m is not None - trailer_data = m.group(1) - self.last_xref_section_offset = int(m.group(2)) - self.trailer_dict = self.interpret_trailer(trailer_data) - self.xref_table = XrefTable() - self.read_xref_table(xref_section_offset=self.last_xref_section_offset) - if b"Prev" in self.trailer_dict: - self.read_prev_trailer(self.trailer_dict[b"Prev"]) - - def read_prev_trailer(self, xref_section_offset: int) -> None: - assert self.buf is not None - trailer_offset = self.read_xref_table(xref_section_offset=xref_section_offset) - m = self.re_trailer_prev.search( - self.buf[trailer_offset : trailer_offset + 16384] - ) - check_format_condition(m is not None, "previous trailer not found") - assert m is not None - trailer_data = m.group(1) - check_format_condition( - int(m.group(2)) == xref_section_offset, - "xref section offset in previous trailer doesn't match what was expected", - ) - trailer_dict = self.interpret_trailer(trailer_data) - if b"Prev" in trailer_dict: - self.read_prev_trailer(trailer_dict[b"Prev"]) - - re_whitespace_optional = re.compile(whitespace_optional) - re_name = re.compile( - whitespace_optional - + rb"/([!-$&'*-.0-;=?-Z\\^-z|~]+)(?=" - + delimiter_or_ws - + rb")" - ) - re_dict_start = re.compile(whitespace_optional + rb"<<") - re_dict_end = re.compile(whitespace_optional + rb">>" + whitespace_optional) - - @classmethod - def interpret_trailer(cls, trailer_data: bytes) -> dict[bytes, Any]: - trailer = {} - offset = 0 - while True: - m = 
cls.re_name.match(trailer_data, offset) - if not m: - m = cls.re_dict_end.match(trailer_data, offset) - check_format_condition( - m is not None and m.end() == len(trailer_data), - "name not found in trailer, remaining data: " - + repr(trailer_data[offset:]), - ) - break - key = cls.interpret_name(m.group(1)) - assert isinstance(key, bytes) - value, value_offset = cls.get_value(trailer_data, m.end()) - trailer[key] = value - if value_offset is None: - break - offset = value_offset - check_format_condition( - b"Size" in trailer and isinstance(trailer[b"Size"], int), - "/Size not in trailer or not an integer", - ) - check_format_condition( - b"Root" in trailer and isinstance(trailer[b"Root"], IndirectReference), - "/Root not in trailer or not an indirect reference", - ) - return trailer - - re_hashes_in_name = re.compile(rb"([^#]*)(#([0-9a-fA-F]{2}))?") - - @classmethod - def interpret_name(cls, raw: bytes, as_text: bool = False) -> str | bytes: - name = b"" - for m in cls.re_hashes_in_name.finditer(raw): - if m.group(3): - name += m.group(1) + bytearray.fromhex(m.group(3).decode("us-ascii")) - else: - name += m.group(1) - if as_text: - return name.decode("utf-8") - else: - return bytes(name) - - re_null = re.compile(whitespace_optional + rb"null(?=" + delimiter_or_ws + rb")") - re_true = re.compile(whitespace_optional + rb"true(?=" + delimiter_or_ws + rb")") - re_false = re.compile(whitespace_optional + rb"false(?=" + delimiter_or_ws + rb")") - re_int = re.compile( - whitespace_optional + rb"([-+]?[0-9]+)(?=" + delimiter_or_ws + rb")" - ) - re_real = re.compile( - whitespace_optional - + rb"([-+]?([0-9]+\.[0-9]*|[0-9]*\.[0-9]+))(?=" - + delimiter_or_ws - + rb")" - ) - re_array_start = re.compile(whitespace_optional + rb"\[") - re_array_end = re.compile(whitespace_optional + rb"]") - re_string_hex = re.compile( - whitespace_optional + rb"<(" + whitespace_or_hex + rb"*)>" - ) - re_string_lit = re.compile(whitespace_optional + rb"\(") - re_indirect_reference = 
re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"R(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_start = re.compile( - whitespace_optional - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"([-+]?[0-9]+)" - + whitespace_mandatory - + rb"obj(?=" - + delimiter_or_ws - + rb")" - ) - re_indirect_def_end = re.compile( - whitespace_optional + rb"endobj(?=" + delimiter_or_ws + rb")" - ) - re_comment = re.compile( - rb"(" + whitespace_optional + rb"%[^\r\n]*" + newline + rb")*" - ) - re_stream_start = re.compile(whitespace_optional + rb"stream\r?\n") - re_stream_end = re.compile( - whitespace_optional + rb"endstream(?=" + delimiter_or_ws + rb")" - ) - - @classmethod - def get_value( - cls, - data: bytes | bytearray | mmap.mmap, - offset: int, - expect_indirect: IndirectReference | None = None, - max_nesting: int = -1, - ) -> tuple[Any, int | None]: - if max_nesting == 0: - return None, None - m = cls.re_comment.match(data, offset) - if m: - offset = m.end() - m = cls.re_indirect_def_start.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object definition: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object definition: generation must be non-negative", - ) - check_format_condition( - expect_indirect is None - or expect_indirect - == IndirectReference(int(m.group(1)), int(m.group(2))), - "indirect object definition different than expected", - ) - object, object_offset = cls.get_value( - data, m.end(), max_nesting=max_nesting - 1 - ) - if object_offset is None: - return object, None - m = cls.re_indirect_def_end.match(data, object_offset) - check_format_condition( - m is not None, "indirect object definition end not found" - ) - assert m is not None - return object, m.end() - check_format_condition( - not expect_indirect, "indirect object definition not found" - ) - m = 
cls.re_indirect_reference.match(data, offset) - if m: - check_format_condition( - int(m.group(1)) > 0, - "indirect object reference: object ID must be greater than 0", - ) - check_format_condition( - int(m.group(2)) >= 0, - "indirect object reference: generation must be non-negative", - ) - return IndirectReference(int(m.group(1)), int(m.group(2))), m.end() - m = cls.re_dict_start.match(data, offset) - if m: - offset = m.end() - result: dict[Any, Any] = {} - m = cls.re_dict_end.match(data, offset) - current_offset: int | None = offset - while not m: - assert current_offset is not None - key, current_offset = cls.get_value( - data, current_offset, max_nesting=max_nesting - 1 - ) - if current_offset is None: - return result, None - value, current_offset = cls.get_value( - data, current_offset, max_nesting=max_nesting - 1 - ) - result[key] = value - if current_offset is None: - return result, None - m = cls.re_dict_end.match(data, current_offset) - current_offset = m.end() - m = cls.re_stream_start.match(data, current_offset) - if m: - stream_len = result.get(b"Length") - if stream_len is None or not isinstance(stream_len, int): - msg = f"bad or missing Length in stream dict ({stream_len})" - raise PdfFormatError(msg) - stream_data = data[m.end() : m.end() + stream_len] - m = cls.re_stream_end.match(data, m.end() + stream_len) - check_format_condition(m is not None, "stream end not found") - assert m is not None - current_offset = m.end() - return PdfStream(PdfDict(result), stream_data), current_offset - return PdfDict(result), current_offset - m = cls.re_array_start.match(data, offset) - if m: - offset = m.end() - results = [] - m = cls.re_array_end.match(data, offset) - current_offset = offset - while not m: - assert current_offset is not None - value, current_offset = cls.get_value( - data, current_offset, max_nesting=max_nesting - 1 - ) - results.append(value) - if current_offset is None: - return results, None - m = cls.re_array_end.match(data, current_offset) - 
return results, m.end() - m = cls.re_null.match(data, offset) - if m: - return None, m.end() - m = cls.re_true.match(data, offset) - if m: - return True, m.end() - m = cls.re_false.match(data, offset) - if m: - return False, m.end() - m = cls.re_name.match(data, offset) - if m: - return PdfName(cls.interpret_name(m.group(1))), m.end() - m = cls.re_int.match(data, offset) - if m: - return int(m.group(1)), m.end() - m = cls.re_real.match(data, offset) - if m: - # XXX Decimal instead of float??? - return float(m.group(1)), m.end() - m = cls.re_string_hex.match(data, offset) - if m: - # filter out whitespace - hex_string = bytearray( - b for b in m.group(1) if b in b"0123456789abcdefABCDEF" - ) - if len(hex_string) % 2 == 1: - # append a 0 if the length is not even - yes, at the end - hex_string.append(ord(b"0")) - return bytearray.fromhex(hex_string.decode("us-ascii")), m.end() - m = cls.re_string_lit.match(data, offset) - if m: - return cls.get_literal_string(data, m.end()) - # return None, offset # fallback (only for debugging) - msg = f"unrecognized object: {repr(data[offset : offset + 32])}" - raise PdfFormatError(msg) - - re_lit_str_token = re.compile( - rb"(\\[nrtbf()\\])|(\\[0-9]{1,3})|(\\(\r\n|\r|\n))|(\r\n|\r|\n)|(\()|(\))" - ) - escaped_chars = { - b"n": b"\n", - b"r": b"\r", - b"t": b"\t", - b"b": b"\b", - b"f": b"\f", - b"(": b"(", - b")": b")", - b"\\": b"\\", - ord(b"n"): b"\n", - ord(b"r"): b"\r", - ord(b"t"): b"\t", - ord(b"b"): b"\b", - ord(b"f"): b"\f", - ord(b"("): b"(", - ord(b")"): b")", - ord(b"\\"): b"\\", - } - - @classmethod - def get_literal_string( - cls, data: bytes | bytearray | mmap.mmap, offset: int - ) -> tuple[bytes, int]: - nesting_depth = 0 - result = bytearray() - for m in cls.re_lit_str_token.finditer(data, offset): - result.extend(data[offset : m.start()]) - if m.group(1): - result.extend(cls.escaped_chars[m.group(1)[1]]) - elif m.group(2): - result.append(int(m.group(2)[1:], 8)) - elif m.group(3): - pass - elif m.group(5): - 
result.extend(b"\n") - elif m.group(6): - result.extend(b"(") - nesting_depth += 1 - elif m.group(7): - if nesting_depth == 0: - return bytes(result), m.end() - result.extend(b")") - nesting_depth -= 1 - offset = m.end() - msg = "unfinished literal string" - raise PdfFormatError(msg) - - re_xref_section_start = re.compile(whitespace_optional + rb"xref" + newline) - re_xref_subsection_start = re.compile( - whitespace_optional - + rb"([0-9]+)" - + whitespace_mandatory - + rb"([0-9]+)" - + whitespace_optional - + newline_only - ) - re_xref_entry = re.compile(rb"([0-9]{10}) ([0-9]{5}) ([fn])( \r| \n|\r\n)") - - def read_xref_table(self, xref_section_offset: int) -> int: - assert self.buf is not None - subsection_found = False - m = self.re_xref_section_start.match( - self.buf, xref_section_offset + self.start_offset - ) - check_format_condition(m is not None, "xref section start not found") - assert m is not None - offset = m.end() - while True: - m = self.re_xref_subsection_start.match(self.buf, offset) - if not m: - check_format_condition( - subsection_found, "xref subsection start not found" - ) - break - subsection_found = True - offset = m.end() - first_object = int(m.group(1)) - num_objects = int(m.group(2)) - for i in range(first_object, first_object + num_objects): - m = self.re_xref_entry.match(self.buf, offset) - check_format_condition(m is not None, "xref entry not found") - assert m is not None - offset = m.end() - is_free = m.group(3) == b"f" - if not is_free: - generation = int(m.group(2)) - new_entry = (int(m.group(1)), generation) - if i not in self.xref_table: - self.xref_table[i] = new_entry - return offset - - def read_indirect(self, ref: IndirectReference, max_nesting: int = -1) -> Any: - offset, generation = self.xref_table[ref[0]] - check_format_condition( - generation == ref[1], - f"expected to find generation {ref[1]} for object ID {ref[0]} in xref " - f"table, instead found generation {generation} at offset {offset}", - ) - assert self.buf is 
not None - value = self.get_value( - self.buf, - offset + self.start_offset, - expect_indirect=IndirectReference(*ref), - max_nesting=max_nesting, - )[0] - self.cached_objects[ref] = value - return value - - def linearize_page_tree( - self, node: PdfDict | None = None - ) -> list[IndirectReference]: - page_node = node if node is not None else self.page_tree_root - check_format_condition( - page_node[b"Type"] == b"Pages", "/Type of page tree node is not /Pages" - ) - pages = [] - for kid in page_node[b"Kids"]: - kid_object = self.read_indirect(kid) - if kid_object[b"Type"] == b"Page": - pages.append(kid) - else: - pages.extend(self.linearize_page_tree(node=kid_object)) - return pages diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PixarImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PixarImagePlugin.py deleted file mode 100644 index d2b6d0a9..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PixarImagePlugin.py +++ /dev/null @@ -1,72 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PIXAR raster support for PIL -# -# history: -# 97-01-29 fl Created -# -# notes: -# This is incomplete; it is based on a few samples created with -# Photoshop 2.5 and 3.0, and a summary description provided by -# Greg Coats . Hopefully, "L" and -# "RGBA" support will be added in future versions. -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image, ImageFile -from ._binary import i16le as i16 - -# -# helpers - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"\200\350\000\000") - - -## -# Image plugin for PIXAR raster images. 
info dict. It's probably just stale. - msg = "cannot use transparency for this mode" - raise OSError(msg) - else: - if im.mode == "P" and im.im.getpalettemode() == "RGBA": - alpha = im.im.getpalette("RGBA", "A") - alpha_bytes = colors - chunk(fp, b"tRNS", alpha[:alpha_bytes]) - - dpi = im.encoderinfo.get("dpi") - if dpi: - chunk( - fp, - b"pHYs", - o32(int(dpi[0] / 0.0254 + 0.5)), - o32(int(dpi[1] / 0.0254 + 0.5)), - b"\x01", - ) - - if info: - chunks = [b"bKGD", b"hIST"] - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid in chunks: - chunks.remove(cid) - chunk(fp, cid, data) - - exif = im.encoderinfo.get("exif") - if exif: - if isinstance(exif, Image.Exif): - exif = exif.tobytes(8) - if exif.startswith(b"Exif\x00\x00"): - exif = exif[6:] - chunk(fp, b"eXIf", exif) - - single_im: Image.Image | None = im - if save_all: - single_im = _write_multiple_frames( - im, fp, chunk, mode, rawmode, default_image, append_images - ) - if single_im: - ImageFile._save( - single_im, - cast(IO[bytes], _idat(fp, chunk)), - [ImageFile._Tile("zip", (0, 0) + single_im.size, 0, rawmode)], - ) - - if info: - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid[1:2].islower(): - # Private chunk - after_idat = len(info_chunk) == 3 and info_chunk[2] - if after_idat: - chunk(fp, cid, data) - - chunk(fp, b"IEND", b"") - - if hasattr(fp, "flush"): - fp.flush() - - -# -------------------------------------------------------------------- -# PNG chunk converter - - -def getchunks(im: Image.Image, **params: Any) -> list[tuple[bytes, bytes, bytes]]: - """Return a list of PNG chunks representing this image.""" - from io import BytesIO - - chunks = [] - - def append(fp: IO[bytes], cid: bytes, *data: bytes) -> None: - byte_data = b"".join(data) - crc = o32(_crc32(byte_data, _crc32(cid))) - chunks.append((cid, byte_data, crc)) - - fp = BytesIO() - - try: - im.encoderinfo = params - _save(im, fp, "", append) - finally: - del im.encoderinfo - - return chunks - - -# 
-------------------------------------------------------------------- -# Registry - -Image.register_open(PngImageFile.format, PngImageFile, _accept) -Image.register_save(PngImageFile.format, _save) -Image.register_save_all(PngImageFile.format, _save_all) - -Image.register_extensions(PngImageFile.format, [".png", ".apng"]) - -Image.register_mime(PngImageFile.format, "image/png") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PpmImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PpmImagePlugin.py deleted file mode 100644 index 307bc97f..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PpmImagePlugin.py +++ /dev/null @@ -1,375 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PPM support for PIL -# -# History: -# 96-03-24 fl Created -# 98-03-06 fl Write RGBA images (as RGB, that is) -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import math -from typing import IO - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import o8 -from ._binary import o32le as o32 - -# -# -------------------------------------------------------------------- - -b_whitespace = b"\x20\x09\x0a\x0b\x0c\x0d" - -MODES = { - # standard - b"P1": "1", - b"P2": "L", - b"P3": "RGB", - b"P4": "1", - b"P5": "L", - b"P6": "RGB", - # extensions - b"P0CMYK": "CMYK", - b"Pf": "F", - # PIL extensions (for test purposes only) - b"PyP": "P", - b"PyRGBA": "RGBA", - b"PyCMYK": "CMYK", -} - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 2 and prefix.startswith(b"P") and prefix[1] in b"0123456fy" - - -## -# Image plugin for PBM, PGM, and PPM images. 
- - -class PpmImageFile(ImageFile.ImageFile): - format = "PPM" - format_description = "Pbmplus image" - - def _read_magic(self) -> bytes: - assert self.fp is not None - - magic = b"" - # read until whitespace or longest available magic number - for _ in range(6): - c = self.fp.read(1) - if not c or c in b_whitespace: - break - magic += c - return magic - - def _read_token(self) -> bytes: - assert self.fp is not None - - token = b"" - while len(token) <= 10: # read until next whitespace or limit of 10 characters - c = self.fp.read(1) - if not c: - break - elif c in b_whitespace: # token ended - if not token: - # skip whitespace at start - continue - break - elif c == b"#": - # ignores rest of the line; stops at CR, LF or EOF - while self.fp.read(1) not in b"\r\n": - pass - continue - token += c - if not token: - # Token was not even 1 byte - msg = "Reached EOF while reading header" - raise ValueError(msg) - elif len(token) > 10: - msg_too_long = b"Token too long in file header: %s" % token - raise ValueError(msg_too_long) - return token - - def _open(self) -> None: - assert self.fp is not None - - magic_number = self._read_magic() - try: - mode = MODES[magic_number] - except KeyError: - msg = "not a PPM file" - raise SyntaxError(msg) - self._mode = mode - - if magic_number in (b"P1", b"P4"): - self.custom_mimetype = "image/x-portable-bitmap" - elif magic_number in (b"P2", b"P5"): - self.custom_mimetype = "image/x-portable-graymap" - elif magic_number in (b"P3", b"P6"): - self.custom_mimetype = "image/x-portable-pixmap" - - self._size = int(self._read_token()), int(self._read_token()) - - decoder_name = "raw" - if magic_number in (b"P1", b"P2", b"P3"): - decoder_name = "ppm_plain" - - args: str | tuple[str | int, ...] 
- if mode == "1": - args = "1;I" - elif mode == "F": - scale = float(self._read_token()) - if scale == 0.0 or not math.isfinite(scale): - msg = "scale must be finite and non-zero" - raise ValueError(msg) - self.info["scale"] = abs(scale) - - rawmode = "F;32F" if scale < 0 else "F;32BF" - args = (rawmode, 0, -1) - else: - maxval = int(self._read_token()) - if not 0 < maxval < 65536: - msg = "maxval must be greater than 0 and less than 65536" - raise ValueError(msg) - if maxval > 255 and mode == "L": - self._mode = "I" - - rawmode = mode - if decoder_name != "ppm_plain": - # If maxval matches a bit depth, use the raw decoder directly - if maxval == 65535 and mode == "L": - rawmode = "I;16B" - elif maxval != 255: - decoder_name = "ppm" - - args = rawmode if decoder_name == "raw" else (rawmode, maxval) - self.tile = [ - ImageFile._Tile(decoder_name, (0, 0) + self.size, self.fp.tell(), args) - ] - - -# -# -------------------------------------------------------------------- - - -class PpmPlainDecoder(ImageFile.PyDecoder): - _pulls_fd = True - _comment_spans: bool - - def _read_block(self) -> bytes: - assert self.fd is not None - - return self.fd.read(ImageFile.SAFEBLOCK) - - def _find_comment_end(self, block: bytes, start: int = 0) -> int: - a = block.find(b"\n", start) - b = block.find(b"\r", start) - return min(a, b) if a * b > 0 else max(a, b) # lowest nonnegative index (or -1) - - def _ignore_comments(self, block: bytes) -> bytes: - if self._comment_spans: - # Finish current comment - while block: - comment_end = self._find_comment_end(block) - if comment_end != -1: - # Comment ends in this block - # Delete tail of comment - block = block[comment_end + 1 :] - break - else: - # Comment spans whole block - # So read the next block, looking for the end - block = self._read_block() - - # Search for any further comments - self._comment_spans = False - while True: - comment_start = block.find(b"#") - if comment_start == -1: - # No comment found - break - comment_end = 
self._find_comment_end(block, comment_start) - if comment_end != -1: - # Comment ends in this block - # Delete comment - block = block[:comment_start] + block[comment_end + 1 :] - else: - # Comment continues to next block(s) - block = block[:comment_start] - self._comment_spans = True - break - return block - - def _decode_bitonal(self) -> bytearray: - """ - This is a separate method because in the plain PBM format, all data tokens are - exactly one byte, so the inter-token whitespace is optional. - """ - data = bytearray() - total_bytes = self.state.xsize * self.state.ysize - - while len(data) != total_bytes: - block = self._read_block() # read next block - if not block: - # eof - break - - block = self._ignore_comments(block) - - tokens = b"".join(block.split()) - for token in tokens: - if token not in (48, 49): - msg = b"Invalid token for this mode: %s" % bytes([token]) - raise ValueError(msg) - data = (data + tokens)[:total_bytes] - invert = bytes.maketrans(b"01", b"\xff\x00") - return data.translate(invert) - - def _decode_blocks(self, maxval: int) -> bytearray: - data = bytearray() - max_len = 10 - out_byte_count = 4 if self.mode == "I" else 1 - out_max = 65535 if self.mode == "I" else 255 - bands = Image.getmodebands(self.mode) - total_bytes = self.state.xsize * self.state.ysize * bands * out_byte_count - - half_token = b"" - while len(data) != total_bytes: - block = self._read_block() # read next block - if not block: - if half_token: - block = bytearray(b" ") # flush half_token - else: - # eof - break - - block = self._ignore_comments(block) - - if half_token: - block = half_token + block # stitch half_token to new block - half_token = b"" - - tokens = block.split() - - if block and not block[-1:].isspace(): # block might split token - half_token = tokens.pop() # save half token for later - if len(half_token) > max_len: # prevent buildup of half_token - msg = ( - b"Token too long found in data: %s" % half_token[: max_len + 1] - ) - raise ValueError(msg) - 
- for token in tokens: - if len(token) > max_len: - msg = b"Token too long found in data: %s" % token[: max_len + 1] - raise ValueError(msg) - value = int(token) - if value < 0: - msg_str = f"Channel value is negative: {value}" - raise ValueError(msg_str) - if value > maxval: - msg_str = f"Channel value too large for this mode: {value}" - raise ValueError(msg_str) - value = round(value / maxval * out_max) - data += o32(value) if self.mode == "I" else o8(value) - if len(data) == total_bytes: # finished! - break - return data - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - self._comment_spans = False - if self.mode == "1": - data = self._decode_bitonal() - rawmode = "1;8" - else: - maxval = self.args[-1] - data = self._decode_blocks(maxval) - rawmode = "I;32" if self.mode == "I" else self.mode - self.set_as_raw(bytes(data), rawmode) - return -1, 0 - - -class PpmDecoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - - data = bytearray() - maxval = self.args[-1] - in_byte_count = 1 if maxval < 256 else 2 - out_byte_count = 4 if self.mode == "I" else 1 - out_max = 65535 if self.mode == "I" else 255 - bands = Image.getmodebands(self.mode) - dest_length = self.state.xsize * self.state.ysize * bands * out_byte_count - while len(data) < dest_length: - pixels = self.fd.read(in_byte_count * bands) - if len(pixels) < in_byte_count * bands: - # eof - break - for b in range(bands): - value = ( - pixels[b] if in_byte_count == 1 else i16(pixels, b * in_byte_count) - ) - value = min(out_max, round(value / maxval * out_max)) - data += o32(value) if self.mode == "I" else o8(value) - rawmode = "I;32" if self.mode == "I" else self.mode - self.set_as_raw(bytes(data), rawmode) - return -1, 0 - - -# -# -------------------------------------------------------------------- - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) 
-> None: - if im.mode == "1": - rawmode, head = "1;I", b"P4" - elif im.mode == "L": - rawmode, head = "L", b"P5" - elif im.mode in ("I", "I;16"): - rawmode, head = "I;16B", b"P5" - elif im.mode in ("RGB", "RGBA"): - rawmode, head = "RGB", b"P6" - elif im.mode == "F": - rawmode, head = "F;32F", b"Pf" - else: - msg = f"cannot write mode {im.mode} as PPM" - raise OSError(msg) - fp.write(head + b"\n%d %d\n" % im.size) - if head == b"P6": - fp.write(b"255\n") - elif head == b"P5": - if rawmode == "L": - fp.write(b"255\n") - else: - fp.write(b"65535\n") - elif head == b"Pf": - fp.write(b"-1.0\n") - row_order = -1 if im.mode == "F" else 1 - ImageFile._save( - im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, 0, row_order))] - ) - - -# -# -------------------------------------------------------------------- - - -Image.register_open(PpmImageFile.format, PpmImageFile, _accept) -Image.register_save(PpmImageFile.format, _save) - -Image.register_decoder("ppm", PpmDecoder) -Image.register_decoder("ppm_plain", PpmPlainDecoder) - -Image.register_extensions(PpmImageFile.format, [".pbm", ".pgm", ".ppm", ".pnm", ".pfm"]) - -Image.register_mime(PpmImageFile.format, "image/x-portable-anymap") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/PsdImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/PsdImagePlugin.py deleted file mode 100644 index f49aaeeb..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/PsdImagePlugin.py +++ /dev/null @@ -1,333 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# Adobe PSD 2.5/3.0 file handling -# -# History: -# 1995-09-01 fl Created -# 1997-01-03 fl Read most PSD images -# 1997-01-18 fl Fixed P and CMYK support -# 2001-10-21 fl Added seek/tell support (for layers) -# -# Copyright (c) 1997-2001 by Secret Labs AB. -# Copyright (c) 1995-2001 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. 
-# -from __future__ import annotations - -import io -from functools import cached_property -from typing import IO - -from . import Image, ImageFile, ImagePalette -from ._binary import i8 -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import si16be as si16 -from ._binary import si32be as si32 -from ._util import DeferredError - -MODES = { - # (photoshop mode, bits) -> (pil mode, required channels) - (0, 1): ("1", 1), - (0, 8): ("L", 1), - (1, 8): ("L", 1), - (2, 8): ("P", 1), - (3, 8): ("RGB", 3), - (4, 8): ("CMYK", 4), - (7, 8): ("L", 1), # FIXME: multilayer - (8, 8): ("L", 1), # duotone - (9, 8): ("LAB", 3), -} - - -# --------------------------------------------------------------------. -# read PSD images - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"8BPS") - - -## -# Image plugin for Photoshop images. - - -class PsdImageFile(ImageFile.ImageFile): - format = "PSD" - format_description = "Adobe Photoshop" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - read = self.fp.read - - # - # header - - s = read(26) - if not _accept(s) or i16(s, 4) != 1: - msg = "not a PSD file" - raise SyntaxError(msg) - - psd_bits = i16(s, 22) - psd_channels = i16(s, 12) - psd_mode = i16(s, 24) - - mode, channels = MODES[(psd_mode, psd_bits)] - - if channels > psd_channels: - msg = "not enough channels" - raise OSError(msg) - if mode == "RGB" and psd_channels == 4: - mode = "RGBA" - channels = 4 - - self._mode = mode - self._size = i32(s, 18), i32(s, 14) - - # - # color mode data - - size = i32(read(4)) - if size: - data = read(size) - if mode == "P" and size == 768: - self.palette = ImagePalette.raw("RGB;L", data) - - # - # image resources - - self.resources = [] - - size = i32(read(4)) - if size: - # load resources - end = self.fp.tell() + size - while self.fp.tell() < end: - read(4) # signature - id = i16(read(2)) - name = read(i8(read(1))) - if not (len(name) & 1): - read(1) # padding - data = 
read(i32(read(4))) - if len(data) & 1: - read(1) # padding - self.resources.append((id, name, data)) - if id == 1039: # ICC profile - self.info["icc_profile"] = data - - # - # layer and mask information - - self._layers_position = None - - size = i32(read(4)) - if size: - end = self.fp.tell() + size - size = i32(read(4)) - if size: - self._layers_position = self.fp.tell() - self._layers_size = size - self.fp.seek(end) - self._n_frames: int | None = None - - # - # image descriptor - - self.tile = _maketile(self.fp, mode, (0, 0) + self.size, channels) - - # keep the file open - self._fp = self.fp - self.frame = 1 - self._min_frame = 1 - - @cached_property - def layers( - self, - ) -> list[tuple[str, str, tuple[int, int, int, int], list[ImageFile._Tile]]]: - layers = [] - if self._layers_position is not None: - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self._fp.seek(self._layers_position) - _layer_data = io.BytesIO(ImageFile._safe_read(self._fp, self._layers_size)) - layers = _layerinfo(_layer_data, self._layers_size) - self._n_frames = len(layers) - return layers - - @property - def n_frames(self) -> int: - if self._n_frames is None: - self._n_frames = len(self.layers) - return self._n_frames - - @property - def is_animated(self) -> bool: - return len(self.layers) > 1 - - def seek(self, layer: int) -> None: - if not self._seek_check(layer): - return - if isinstance(self._fp, DeferredError): - raise self._fp.ex - - # seek to given layer (1..max) - _, mode, _, tile = self.layers[layer - 1] - self._mode = mode - self.tile = tile - self.frame = layer - self.fp = self._fp - - def tell(self) -> int: - # return layer number (0=image, 1..max=layers) - return self.frame - - -def _layerinfo( - fp: IO[bytes], ct_bytes: int -) -> list[tuple[str, str, tuple[int, int, int, int], list[ImageFile._Tile]]]: - # read layerinfo block - layers = [] - - def read(size: int) -> bytes: - return ImageFile._safe_read(fp, size) - - ct = si16(read(2)) - - # sanity check - if 
ct_bytes < (abs(ct) * 20): - msg = "Layer block too short for number of layers requested" - raise SyntaxError(msg) - - for _ in range(abs(ct)): - # bounding box - y0 = si32(read(4)) - x0 = si32(read(4)) - y1 = si32(read(4)) - x1 = si32(read(4)) - - # image info - bands = [] - ct_types = i16(read(2)) - if ct_types > 4: - fp.seek(ct_types * 6 + 12, io.SEEK_CUR) - size = i32(read(4)) - fp.seek(size, io.SEEK_CUR) - continue - - for _ in range(ct_types): - type = i16(read(2)) - - if type == 65535: - b = "A" - else: - b = "RGBA"[type] - - bands.append(b) - read(4) # size - - # figure out the image mode - bands.sort() - if bands == ["R"]: - mode = "L" - elif bands == ["B", "G", "R"]: - mode = "RGB" - elif bands == ["A", "B", "G", "R"]: - mode = "RGBA" - else: - mode = "" # unknown - - # skip over blend flags and extra information - read(12) # filler - name = "" - size = i32(read(4)) # length of the extra data field - if size: - data_end = fp.tell() + size - - length = i32(read(4)) - if length: - fp.seek(length - 16, io.SEEK_CUR) - - length = i32(read(4)) - if length: - fp.seek(length, io.SEEK_CUR) - - length = i8(read(1)) - if length: - # Don't know the proper encoding, - # Latin-1 should be a good guess - name = read(length).decode("latin-1", "replace") - - fp.seek(data_end) - layers.append((name, mode, (x0, y0, x1, y1))) - - # get tiles - layerinfo = [] - for i, (name, mode, bbox) in enumerate(layers): - tile = [] - for m in mode: - t = _maketile(fp, m, bbox, 1) - if t: - tile.extend(t) - layerinfo.append((name, mode, bbox, tile)) - - return layerinfo - - -def _maketile( - file: IO[bytes], mode: str, bbox: tuple[int, int, int, int], channels: int -) -> list[ImageFile._Tile]: - tiles = [] - read = file.read - - compression = i16(read(2)) - - xsize = bbox[2] - bbox[0] - ysize = bbox[3] - bbox[1] - - offset = file.tell() - - if compression == 0: - # - # raw compression - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - 
tiles.append(ImageFile._Tile("raw", bbox, offset, layer)) - offset = offset + xsize * ysize - - elif compression == 1: - # - # packbits compression - i = 0 - bytecount = read(channels * ysize * 2) - offset = file.tell() - for channel in range(channels): - layer = mode[channel] - if mode == "CMYK": - layer += ";I" - tiles.append(ImageFile._Tile("packbits", bbox, offset, layer)) - for y in range(ysize): - offset = offset + i16(bytecount, i) - i += 2 - - file.seek(offset) - - if offset & 1: - read(1) # padding - - return tiles - - -# -------------------------------------------------------------------- -# registry - - -Image.register_open(PsdImageFile.format, PsdImageFile, _accept) - -Image.register_extension(PsdImageFile.format, ".psd") - -Image.register_mime(PsdImageFile.format, "image/vnd.adobe.photoshop") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/QoiImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/QoiImagePlugin.py deleted file mode 100644 index dba5d809..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/QoiImagePlugin.py +++ /dev/null @@ -1,234 +0,0 @@ -# -# The Python Imaging Library. -# -# QOI support for PIL -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -from typing import IO - -from . 
import Image, ImageFile -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o32be as o32 - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"qoif") - - -class QoiImageFile(ImageFile.ImageFile): - format = "QOI" - format_description = "Quite OK Image" - - def _open(self) -> None: - if not _accept(self.fp.read(4)): - msg = "not a QOI file" - raise SyntaxError(msg) - - self._size = i32(self.fp.read(4)), i32(self.fp.read(4)) - - channels = self.fp.read(1)[0] - self._mode = "RGB" if channels == 3 else "RGBA" - - self.fp.seek(1, os.SEEK_CUR) # colorspace - self.tile = [ImageFile._Tile("qoi", (0, 0) + self._size, self.fp.tell())] - - -class QoiDecoder(ImageFile.PyDecoder): - _pulls_fd = True - _previous_pixel: bytes | bytearray | None = None - _previously_seen_pixels: dict[int, bytes | bytearray] = {} - - def _add_to_previous_pixels(self, value: bytes | bytearray) -> None: - self._previous_pixel = value - - r, g, b, a = value - hash_value = (r * 3 + g * 5 + b * 7 + a * 11) % 64 - self._previously_seen_pixels[hash_value] = value - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - - self._previously_seen_pixels = {} - self._previous_pixel = bytearray((0, 0, 0, 255)) - - data = bytearray() - bands = Image.getmodebands(self.mode) - dest_length = self.state.xsize * self.state.ysize * bands - while len(data) < dest_length: - byte = self.fd.read(1)[0] - value: bytes | bytearray - if byte == 0b11111110 and self._previous_pixel: # QOI_OP_RGB - value = bytearray(self.fd.read(3)) + self._previous_pixel[3:] - elif byte == 0b11111111: # QOI_OP_RGBA - value = self.fd.read(4) - else: - op = byte >> 6 - if op == 0: # QOI_OP_INDEX - op_index = byte & 0b00111111 - value = self._previously_seen_pixels.get( - op_index, bytearray((0, 0, 0, 0)) - ) - elif op == 1 and self._previous_pixel: # QOI_OP_DIFF - value = bytearray( - ( - (self._previous_pixel[0] + ((byte & 0b00110000) >> 4) 
- 2) - % 256, - (self._previous_pixel[1] + ((byte & 0b00001100) >> 2) - 2) - % 256, - (self._previous_pixel[2] + (byte & 0b00000011) - 2) % 256, - self._previous_pixel[3], - ) - ) - elif op == 2 and self._previous_pixel: # QOI_OP_LUMA - second_byte = self.fd.read(1)[0] - diff_green = (byte & 0b00111111) - 32 - diff_red = ((second_byte & 0b11110000) >> 4) - 8 - diff_blue = (second_byte & 0b00001111) - 8 - - value = bytearray( - tuple( - (self._previous_pixel[i] + diff_green + diff) % 256 - for i, diff in enumerate((diff_red, 0, diff_blue)) - ) - ) - value += self._previous_pixel[3:] - elif op == 3 and self._previous_pixel: # QOI_OP_RUN - run_length = (byte & 0b00111111) + 1 - value = self._previous_pixel - if bands == 3: - value = value[:3] - data += value * run_length - continue - self._add_to_previous_pixels(value) - - if bands == 3: - value = value[:3] - data += value - self.set_as_raw(data) - return -1, 0 - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode == "RGB": - channels = 3 - elif im.mode == "RGBA": - channels = 4 - else: - msg = "Unsupported QOI image mode" - raise ValueError(msg) - - colorspace = 0 if im.encoderinfo.get("colorspace") == "sRGB" else 1 - - fp.write(b"qoif") - fp.write(o32(im.size[0])) - fp.write(o32(im.size[1])) - fp.write(o8(channels)) - fp.write(o8(colorspace)) - - ImageFile._save(im, fp, [ImageFile._Tile("qoi", (0, 0) + im.size)]) - - -class QoiEncoder(ImageFile.PyEncoder): - _pushes_fd = True - _previous_pixel: tuple[int, int, int, int] | None = None - _previously_seen_pixels: dict[int, tuple[int, int, int, int]] = {} - _run = 0 - - def _write_run(self) -> bytes: - data = o8(0b11000000 | (self._run - 1)) # QOI_OP_RUN - self._run = 0 - return data - - def _delta(self, left: int, right: int) -> int: - result = (left - right) & 255 - if result >= 128: - result -= 256 - return result - - def encode(self, bufsize: int) -> tuple[int, int, bytes]: - assert self.im is not None - - 
self._previously_seen_pixels = {0: (0, 0, 0, 0)} - self._previous_pixel = (0, 0, 0, 255) - - data = bytearray() - w, h = self.im.size - bands = Image.getmodebands(self.mode) - - for y in range(h): - for x in range(w): - pixel = self.im.getpixel((x, y)) - if bands == 3: - pixel = (*pixel, 255) - - if pixel == self._previous_pixel: - self._run += 1 - if self._run == 62: - data += self._write_run() - else: - if self._run: - data += self._write_run() - - r, g, b, a = pixel - hash_value = (r * 3 + g * 5 + b * 7 + a * 11) % 64 - if self._previously_seen_pixels.get(hash_value) == pixel: - data += o8(hash_value) # QOI_OP_INDEX - elif self._previous_pixel: - self._previously_seen_pixels[hash_value] = pixel - - prev_r, prev_g, prev_b, prev_a = self._previous_pixel - if prev_a == a: - delta_r = self._delta(r, prev_r) - delta_g = self._delta(g, prev_g) - delta_b = self._delta(b, prev_b) - - if ( - -2 <= delta_r < 2 - and -2 <= delta_g < 2 - and -2 <= delta_b < 2 - ): - data += o8( - 0b01000000 - | (delta_r + 2) << 4 - | (delta_g + 2) << 2 - | (delta_b + 2) - ) # QOI_OP_DIFF - else: - delta_gr = self._delta(delta_r, delta_g) - delta_gb = self._delta(delta_b, delta_g) - if ( - -8 <= delta_gr < 8 - and -32 <= delta_g < 32 - and -8 <= delta_gb < 8 - ): - data += o8( - 0b10000000 | (delta_g + 32) - ) # QOI_OP_LUMA - data += o8((delta_gr + 8) << 4 | (delta_gb + 8)) - else: - data += o8(0b11111110) # QOI_OP_RGB - data += bytes(pixel[:3]) - else: - data += o8(0b11111111) # QOI_OP_RGBA - data += bytes(pixel) - - self._previous_pixel = pixel - - if self._run: - data += self._write_run() - data += bytes((0, 0, 0, 0, 0, 0, 0, 1)) # padding - - return len(data), 0, data - - -Image.register_open(QoiImageFile.format, QoiImageFile, _accept) -Image.register_decoder("qoi", QoiDecoder) -Image.register_extension(QoiImageFile.format, ".qoi") - -Image.register_save(QoiImageFile.format, _save) -Image.register_encoder("qoi", QoiEncoder) diff --git 
a/pptx-env/lib/python3.12/site-packages/PIL/SgiImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/SgiImagePlugin.py deleted file mode 100644 index 85302215..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/SgiImagePlugin.py +++ /dev/null @@ -1,231 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# SGI image file handling -# -# See "The SGI Image File Format (Draft version 0.97)", Paul Haeberli. -# -# -# -# History: -# 2017-22-07 mb Add RLE decompression -# 2016-16-10 mb Add save method without compression -# 1995-09-10 fl Created -# -# Copyright (c) 2016 by Mickael Bonfill. -# Copyright (c) 2008 by Karsten Hiddemann. -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1995 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import os -import struct -from typing import IO - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import o8 - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 2 and i16(prefix) == 474 - - -MODES = { - (1, 1, 1): "L", - (1, 2, 1): "L", - (2, 1, 1): "L;16B", - (2, 2, 1): "L;16B", - (1, 3, 3): "RGB", - (2, 3, 3): "RGB;16B", - (1, 3, 4): "RGBA", - (2, 3, 4): "RGBA;16B", -} - - -## -# Image plugin for SGI images. 
-class SgiImageFile(ImageFile.ImageFile): - format = "SGI" - format_description = "SGI Image File Format" - - def _open(self) -> None: - # HEAD - assert self.fp is not None - - headlen = 512 - s = self.fp.read(headlen) - - if not _accept(s): - msg = "Not an SGI image file" - raise ValueError(msg) - - # compression : verbatim or RLE - compression = s[2] - - # bpc : 1 or 2 bytes (8bits or 16bits) - bpc = s[3] - - # dimension : 1, 2 or 3 (depending on xsize, ysize and zsize) - dimension = i16(s, 4) - - # xsize : width - xsize = i16(s, 6) - - # ysize : height - ysize = i16(s, 8) - - # zsize : channels count - zsize = i16(s, 10) - - # determine mode from bits/zsize - try: - rawmode = MODES[(bpc, dimension, zsize)] - except KeyError: - msg = "Unsupported SGI image mode" - raise ValueError(msg) - - self._size = xsize, ysize - self._mode = rawmode.split(";")[0] - if self.mode == "RGB": - self.custom_mimetype = "image/rgb" - - # orientation -1 : scanlines begins at the bottom-left corner - orientation = -1 - - # decoder info - if compression == 0: - pagesize = xsize * ysize * bpc - if bpc == 2: - self.tile = [ - ImageFile._Tile( - "SGI16", - (0, 0) + self.size, - headlen, - (self.mode, 0, orientation), - ) - ] - else: - self.tile = [] - offset = headlen - for layer in self.mode: - self.tile.append( - ImageFile._Tile( - "raw", (0, 0) + self.size, offset, (layer, 0, orientation) - ) - ) - offset += pagesize - elif compression == 1: - self.tile = [ - ImageFile._Tile( - "sgi_rle", (0, 0) + self.size, headlen, (rawmode, orientation, bpc) - ) - ] - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode not in {"RGB", "RGBA", "L"}: - msg = "Unsupported SGI image mode" - raise ValueError(msg) - - # Get the keyword arguments - info = im.encoderinfo - - # Byte-per-pixel precision, 1 = 8bits per pixel - bpc = info.get("bpc", 1) - - if bpc not in (1, 2): - msg = "Unsupported number of bytes per pixel" - raise ValueError(msg) - - # Flip the image, 
since the origin of SGI file is the bottom-left corner - orientation = -1 - # Define the file as SGI File Format - magic_number = 474 - # Run-Length Encoding Compression - Unsupported at this time - rle = 0 - - # X Dimension = width / Y Dimension = height - x, y = im.size - # Z Dimension: Number of channels - z = len(im.mode) - # Number of dimensions (x,y,z) - if im.mode == "L": - dimension = 1 if y == 1 else 2 - else: - dimension = 3 - - # Minimum Byte value - pinmin = 0 - # Maximum Byte value (255 = 8bits per pixel) - pinmax = 255 - # Image name (79 characters max, truncated below in write) - img_name = os.path.splitext(os.path.basename(filename))[0] - if isinstance(img_name, str): - img_name = img_name.encode("ascii", "ignore") - # Standard representation of pixel in the file - colormap = 0 - fp.write(struct.pack(">h", magic_number)) - fp.write(o8(rle)) - fp.write(o8(bpc)) - fp.write(struct.pack(">H", dimension)) - fp.write(struct.pack(">H", x)) - fp.write(struct.pack(">H", y)) - fp.write(struct.pack(">H", z)) - fp.write(struct.pack(">l", pinmin)) - fp.write(struct.pack(">l", pinmax)) - fp.write(struct.pack("4s", b"")) # dummy - fp.write(struct.pack("79s", img_name)) # truncates to 79 chars - fp.write(struct.pack("s", b"")) # force null byte after img_name - fp.write(struct.pack(">l", colormap)) - fp.write(struct.pack("404s", b"")) # dummy - - rawmode = "L" - if bpc == 2: - rawmode = "L;16B" - - for channel in im.split(): - fp.write(channel.tobytes("raw", rawmode, 0, orientation)) - - if hasattr(fp, "flush"): - fp.flush() - - -class SGI16Decoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - assert self.im is not None - - rawmode, stride, orientation = self.args - pagesize = self.state.xsize * self.state.ysize - zsize = len(self.mode) - self.fd.seek(512) - - for band in range(zsize): - channel = Image.new("L", (self.state.xsize, self.state.ysize)) - 
channel.frombytes( - self.fd.read(2 * pagesize), "raw", "L;16B", stride, orientation - ) - self.im.putband(channel.im, band) - - return -1, 0 - - -# -# registry - - -Image.register_decoder("SGI16", SGI16Decoder) -Image.register_open(SgiImageFile.format, SgiImageFile, _accept) -Image.register_save(SgiImageFile.format, _save) -Image.register_mime(SgiImageFile.format, "image/sgi") - -Image.register_extensions(SgiImageFile.format, [".bw", ".rgb", ".rgba", ".sgi"]) - -# End of file diff --git a/pptx-env/lib/python3.12/site-packages/PIL/SpiderImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/SpiderImagePlugin.py deleted file mode 100644 index 868019e8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/SpiderImagePlugin.py +++ /dev/null @@ -1,331 +0,0 @@ -# -# The Python Imaging Library. -# -# SPIDER image file handling -# -# History: -# 2004-08-02 Created BB -# 2006-03-02 added save method -# 2006-03-13 added support for stack images -# -# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144. -# Copyright (c) 2004 by William Baxter. -# Copyright (c) 2004 by Secret Labs AB. -# Copyright (c) 2004 by Fredrik Lundh. -# - -## -# Image plugin for the Spider image format. This format is used -# by the SPIDER software, in processing image data from electron -# microscopy and tomography. -## - -# -# SpiderImagePlugin.py -# -# The Spider image format is used by SPIDER software, in processing -# image data from electron microscopy and tomography. -# -# Spider home page: -# https://spider.wadsworth.org/spider_doc/spider/docs/spider.html -# -# Details about the Spider image format: -# https://spider.wadsworth.org/spider_doc/spider/docs/image_doc.html -# -from __future__ import annotations - -import os -import struct -import sys -from typing import IO, Any, cast - -from . 
import Image, ImageFile -from ._util import DeferredError - -TYPE_CHECKING = False - - -def isInt(f: Any) -> int: - try: - i = int(f) - if f - i == 0: - return 1 - else: - return 0 - except (ValueError, OverflowError): - return 0 - - -iforms = [1, 3, -11, -12, -21, -22] - - -# There is no magic number to identify Spider files, so just check a -# series of header locations to see if they have reasonable values. -# Returns no. of bytes in the header, if it is a valid Spider header, -# otherwise returns 0 - - -def isSpiderHeader(t: tuple[float, ...]) -> int: - h = (99,) + t # add 1 value so can use spider header index start=1 - # header values 1,2,5,12,13,22,23 should be integers - for i in [1, 2, 5, 12, 13, 22, 23]: - if not isInt(h[i]): - return 0 - # check iform - iform = int(h[5]) - if iform not in iforms: - return 0 - # check other header values - labrec = int(h[13]) # no. records in file header - labbyt = int(h[22]) # total no. of bytes in header - lenbyt = int(h[23]) # record length in bytes - if labbyt != (labrec * lenbyt): - return 0 - # looks like a valid header - return labbyt - - -def isSpiderImage(filename: str) -> int: - with open(filename, "rb") as fp: - f = fp.read(92) # read 23 * 4 bytes - t = struct.unpack(">23f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - t = struct.unpack("<23f", f) # little-endian - hdrlen = isSpiderHeader(t) - return hdrlen - - -class SpiderImageFile(ImageFile.ImageFile): - format = "SPIDER" - format_description = "Spider 2D image" - _close_exclusive_fp_after_loading = False - - def _open(self) -> None: - # check header - n = 27 * 4 # read 27 float values - f = self.fp.read(n) - - try: - self.bigendian = 1 - t = struct.unpack(">27f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - self.bigendian = 0 - t = struct.unpack("<27f", f) # little-endian - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - msg = "not a valid Spider file" - raise SyntaxError(msg) - except struct.error 
as e: - msg = "not a valid Spider file" - raise SyntaxError(msg) from e - - h = (99,) + t # add 1 value : spider header index starts at 1 - iform = int(h[5]) - if iform != 1: - msg = "not a Spider 2D image" - raise SyntaxError(msg) - - self._size = int(h[12]), int(h[2]) # size in pixels (width, height) - self.istack = int(h[24]) - self.imgnumber = int(h[27]) - - if self.istack == 0 and self.imgnumber == 0: - # stk=0, img=0: a regular 2D image - offset = hdrlen - self._nimages = 1 - elif self.istack > 0 and self.imgnumber == 0: - # stk>0, img=0: Opening the stack for the first time - self.imgbytes = int(h[12]) * int(h[2]) * 4 - self.hdrlen = hdrlen - self._nimages = int(h[26]) - # Point to the first image in the stack - offset = hdrlen * 2 - self.imgnumber = 1 - elif self.istack == 0 and self.imgnumber > 0: - # stk=0, img>0: an image within the stack - offset = hdrlen + self.stkoffset - self.istack = 2 # So Image knows it's still a stack - else: - msg = "inconsistent stack header values" - raise SyntaxError(msg) - - if self.bigendian: - self.rawmode = "F;32BF" - else: - self.rawmode = "F;32F" - self._mode = "F" - - self.tile = [ImageFile._Tile("raw", (0, 0) + self.size, offset, self.rawmode)] - self._fp = self.fp # FIXME: hack - - @property - def n_frames(self) -> int: - return self._nimages - - @property - def is_animated(self) -> bool: - return self._nimages > 1 - - # 1st image index is zero (although SPIDER imgnumber starts at 1) - def tell(self) -> int: - if self.imgnumber < 1: - return 0 - else: - return self.imgnumber - 1 - - def seek(self, frame: int) -> None: - if self.istack == 0: - msg = "attempt to seek in a non-stack file" - raise EOFError(msg) - if not self._seek_check(frame): - return - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self.stkoffset = self.hdrlen + frame * (self.hdrlen + self.imgbytes) - self.fp = self._fp - self.fp.seek(self.stkoffset) - self._open() - - # returns a byte image after rescaling to 0..255 - def 
convert2byte(self, depth: int = 255) -> Image.Image: - extrema = self.getextrema() - assert isinstance(extrema[0], float) - minimum, maximum = cast(tuple[float, float], extrema) - m: float = 1 - if maximum != minimum: - m = depth / (maximum - minimum) - b = -m * minimum - return self.point(lambda i: i * m + b).convert("L") - - if TYPE_CHECKING: - from . import ImageTk - - # returns a ImageTk.PhotoImage object, after rescaling to 0..255 - def tkPhotoImage(self) -> ImageTk.PhotoImage: - from . import ImageTk - - return ImageTk.PhotoImage(self.convert2byte(), palette=256) - - -# -------------------------------------------------------------------- -# Image series - - -# given a list of filenames, return a list of images -def loadImageSeries(filelist: list[str] | None = None) -> list[Image.Image] | None: - """create a list of :py:class:`~PIL.Image.Image` objects for use in a montage""" - if filelist is None or len(filelist) < 1: - return None - - byte_imgs = [] - for img in filelist: - if not os.path.exists(img): - print(f"unable to find {img}") - continue - try: - with Image.open(img) as im: - assert isinstance(im, SpiderImageFile) - byte_im = im.convert2byte() - except Exception: - if not isSpiderImage(img): - print(f"{img} is not a Spider image file") - continue - byte_im.info["filename"] = img - byte_imgs.append(byte_im) - return byte_imgs - - -# -------------------------------------------------------------------- -# For saving images in Spider format - - -def makeSpiderHeader(im: Image.Image) -> list[bytes]: - nsam, nrow = im.size - lenbyt = nsam * 4 # There are labrec records in the header - labrec = int(1024 / lenbyt) - if 1024 % lenbyt != 0: - labrec += 1 - labbyt = labrec * lenbyt - nvalues = int(labbyt / 4) - if nvalues < 23: - return [] - - hdr = [0.0] * nvalues - - # NB these are Fortran indices - hdr[1] = 1.0 # nslice (=1 for an image) - hdr[2] = float(nrow) # number of rows per slice - hdr[3] = float(nrow) # number of records in the image - hdr[5] = 1.0 # 
iform for 2D image - hdr[12] = float(nsam) # number of pixels per line - hdr[13] = float(labrec) # number of records in file header - hdr[22] = float(labbyt) # total number of bytes in header - hdr[23] = float(lenbyt) # record length in bytes - - # adjust for Fortran indexing - hdr = hdr[1:] - hdr.append(0.0) - # pack binary data into a string - return [struct.pack("f", v) for v in hdr] - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode != "F": - im = im.convert("F") - - hdr = makeSpiderHeader(im) - if len(hdr) < 256: - msg = "Error creating Spider header" - raise OSError(msg) - - # write the SPIDER header - fp.writelines(hdr) - - rawmode = "F;32NF" # 32-bit native floating point - ImageFile._save(im, fp, [ImageFile._Tile("raw", (0, 0) + im.size, 0, rawmode)]) - - -def _save_spider(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - # get the filename extension and register it with Image - filename_ext = os.path.splitext(filename)[1] - ext = filename_ext.decode() if isinstance(filename_ext, bytes) else filename_ext - Image.register_extension(SpiderImageFile.format, ext) - _save(im, fp, filename) - - -# -------------------------------------------------------------------- - - -Image.register_open(SpiderImageFile.format, SpiderImageFile) -Image.register_save(SpiderImageFile.format, _save_spider) - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 SpiderImagePlugin.py [infile] [outfile]") - sys.exit() - - filename = sys.argv[1] - if not isSpiderImage(filename): - print("input image must be in Spider format") - sys.exit() - - with Image.open(filename) as im: - print(f"image: {im}") - print(f"format: {im.format}") - print(f"size: {im.size}") - print(f"mode: {im.mode}") - print("max, min: ", end=" ") - print(im.getextrema()) - - if len(sys.argv) > 2: - outfile = sys.argv[2] - - # perform some image operation - im = im.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - print( - f"saving a flipped 
version of {os.path.basename(filename)} " - f"as {outfile} " - ) - im.save(outfile, SpiderImageFile.format) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/SunImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/SunImagePlugin.py deleted file mode 100644 index 8912379e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/SunImagePlugin.py +++ /dev/null @@ -1,145 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Sun image file handling -# -# History: -# 1995-09-10 fl Created -# 1996-05-28 fl Fixed 32-bit alignment -# 1998-12-29 fl Import ImagePalette module -# 2001-12-18 fl Fixed palette loading (from Jean-Claude Rimbault) -# -# Copyright (c) 1997-2001 by Secret Labs AB -# Copyright (c) 1995-1996 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -from . import Image, ImageFile, ImagePalette -from ._binary import i32be as i32 - - -def _accept(prefix: bytes) -> bool: - return len(prefix) >= 4 and i32(prefix) == 0x59A66A95 - - -## -# Image plugin for Sun raster files. 
- - -class SunImageFile(ImageFile.ImageFile): - format = "SUN" - format_description = "Sun Raster File" - - def _open(self) -> None: - # The Sun Raster file header is 32 bytes in length - # and has the following format: - - # typedef struct _SunRaster - # { - # DWORD MagicNumber; /* Magic (identification) number */ - # DWORD Width; /* Width of image in pixels */ - # DWORD Height; /* Height of image in pixels */ - # DWORD Depth; /* Number of bits per pixel */ - # DWORD Length; /* Size of image data in bytes */ - # DWORD Type; /* Type of raster file */ - # DWORD ColorMapType; /* Type of color map */ - # DWORD ColorMapLength; /* Size of the color map in bytes */ - # } SUNRASTER; - - assert self.fp is not None - - # HEAD - s = self.fp.read(32) - if not _accept(s): - msg = "not an SUN raster file" - raise SyntaxError(msg) - - offset = 32 - - self._size = i32(s, 4), i32(s, 8) - - depth = i32(s, 12) - # data_length = i32(s, 16) # unreliable, ignore. - file_type = i32(s, 20) - palette_type = i32(s, 24) # 0: None, 1: RGB, 2: Raw/arbitrary - palette_length = i32(s, 28) - - if depth == 1: - self._mode, rawmode = "1", "1;I" - elif depth == 4: - self._mode, rawmode = "L", "L;4" - elif depth == 8: - self._mode = rawmode = "L" - elif depth == 24: - if file_type == 3: - self._mode, rawmode = "RGB", "RGB" - else: - self._mode, rawmode = "RGB", "BGR" - elif depth == 32: - if file_type == 3: - self._mode, rawmode = "RGB", "RGBX" - else: - self._mode, rawmode = "RGB", "BGRX" - else: - msg = "Unsupported Mode/Bit Depth" - raise SyntaxError(msg) - - if palette_length: - if palette_length > 1024: - msg = "Unsupported Color Palette Length" - raise SyntaxError(msg) - - if palette_type != 1: - msg = "Unsupported Palette Type" - raise SyntaxError(msg) - - offset = offset + palette_length - self.palette = ImagePalette.raw("RGB;L", self.fp.read(palette_length)) - if self.mode == "L": - self._mode = "P" - rawmode = rawmode.replace("L", "P") - - # 16 bit boundaries on stride - stride = 
((self.size[0] * depth + 15) // 16) * 2 - - # file type: Type is the version (or flavor) of the bitmap - # file. The following values are typically found in the Type - # field: - # 0000h Old - # 0001h Standard - # 0002h Byte-encoded - # 0003h RGB format - # 0004h TIFF format - # 0005h IFF format - # FFFFh Experimental - - # Old and standard are the same, except for the length tag. - # byte-encoded is run-length-encoded - # RGB looks similar to standard, but RGB byte order - # TIFF and IFF mean that they were converted from T/IFF - # Experimental means that it's something else. - # (https://www.fileformat.info/format/sunraster/egff.htm) - - if file_type in (0, 1, 3, 4, 5): - self.tile = [ - ImageFile._Tile("raw", (0, 0) + self.size, offset, (rawmode, stride)) - ] - elif file_type == 2: - self.tile = [ - ImageFile._Tile("sun_rle", (0, 0) + self.size, offset, rawmode) - ] - else: - msg = "Unsupported Sun Raster file type" - raise SyntaxError(msg) - - -# -# registry - - -Image.register_open(SunImageFile.format, SunImageFile, _accept) - -Image.register_extension(SunImageFile.format, ".ras") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/TarIO.py b/pptx-env/lib/python3.12/site-packages/PIL/TarIO.py deleted file mode 100644 index 86490a49..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/TarIO.py +++ /dev/null @@ -1,61 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# read files from within a tar file -# -# History: -# 95-06-18 fl Created -# 96-05-28 fl Open files in binary mode -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995-96. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io - -from . import ContainerIO - - -class TarIO(ContainerIO.ContainerIO[bytes]): - """A file object that provides read access to a given member of a TAR file.""" - - def __init__(self, tarfile: str, file: str) -> None: - """ - Create file object. 
- - :param tarfile: Name of TAR file. - :param file: Name of member file. - """ - self.fh = open(tarfile, "rb") - - while True: - s = self.fh.read(512) - if len(s) != 512: - self.fh.close() - - msg = "unexpected end of tar file" - raise OSError(msg) - - name = s[:100].decode("utf-8") - i = name.find("\0") - if i == 0: - self.fh.close() - - msg = "cannot find subfile" - raise OSError(msg) - if i > 0: - name = name[:i] - - size = int(s[124:135], 8) - - if file == name: - break - - self.fh.seek((size + 511) & (~511), io.SEEK_CUR) - - # Open region - super().__init__(self.fh, self.fh.tell(), size) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/TgaImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/TgaImagePlugin.py deleted file mode 100644 index 90d5b5cf..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/TgaImagePlugin.py +++ /dev/null @@ -1,264 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TGA file handling -# -# History: -# 95-09-01 fl created (reads 24-bit files only) -# 97-01-04 fl support more TGA versions, including compressed images -# 98-07-04 fl fixed orientation and alpha layer bugs -# 98-09-11 fl fixed orientation for runlength decoder -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1995-97. -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import warnings -from typing import IO - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - -# -# -------------------------------------------------------------------- -# Read RGA file - - -MODES = { - # map imagetype/depth to rawmode - (1, 8): "P", - (3, 1): "1", - (3, 8): "L", - (3, 16): "LA", - (2, 16): "BGRA;15Z", - (2, 24): "BGR", - (2, 32): "BGRA", -} - - -## -# Image plugin for Targa files. 
- - -class TgaImageFile(ImageFile.ImageFile): - format = "TGA" - format_description = "Targa" - - def _open(self) -> None: - # process header - assert self.fp is not None - - s = self.fp.read(18) - - id_len = s[0] - - colormaptype = s[1] - imagetype = s[2] - - depth = s[16] - - flags = s[17] - - self._size = i16(s, 12), i16(s, 14) - - # validate header fields - if ( - colormaptype not in (0, 1) - or self.size[0] <= 0 - or self.size[1] <= 0 - or depth not in (1, 8, 16, 24, 32) - ): - msg = "not a TGA file" - raise SyntaxError(msg) - - # image mode - if imagetype in (3, 11): - self._mode = "L" - if depth == 1: - self._mode = "1" # ??? - elif depth == 16: - self._mode = "LA" - elif imagetype in (1, 9): - self._mode = "P" if colormaptype else "L" - elif imagetype in (2, 10): - self._mode = "RGB" if depth == 24 else "RGBA" - else: - msg = "unknown TGA mode" - raise SyntaxError(msg) - - # orientation - orientation = flags & 0x30 - self._flip_horizontally = orientation in [0x10, 0x30] - if orientation in [0x20, 0x30]: - orientation = 1 - elif orientation in [0, 0x10]: - orientation = -1 - else: - msg = "unknown TGA orientation" - raise SyntaxError(msg) - - self.info["orientation"] = orientation - - if imagetype & 8: - self.info["compression"] = "tga_rle" - - if id_len: - self.info["id_section"] = self.fp.read(id_len) - - if colormaptype: - # read palette - start, size, mapdepth = i16(s, 3), i16(s, 5), s[7] - if mapdepth == 16: - self.palette = ImagePalette.raw( - "BGRA;15Z", bytes(2 * start) + self.fp.read(2 * size) - ) - self.palette.mode = "RGBA" - elif mapdepth == 24: - self.palette = ImagePalette.raw( - "BGR", bytes(3 * start) + self.fp.read(3 * size) - ) - elif mapdepth == 32: - self.palette = ImagePalette.raw( - "BGRA", bytes(4 * start) + self.fp.read(4 * size) - ) - else: - msg = "unknown TGA map depth" - raise SyntaxError(msg) - - # setup tile descriptor - try: - rawmode = MODES[(imagetype & 7, depth)] - if imagetype & 8: - # compressed - self.tile = [ - 
ImageFile._Tile( - "tga_rle", - (0, 0) + self.size, - self.fp.tell(), - (rawmode, orientation, depth), - ) - ] - else: - self.tile = [ - ImageFile._Tile( - "raw", - (0, 0) + self.size, - self.fp.tell(), - (rawmode, 0, orientation), - ) - ] - except KeyError: - pass # cannot decode - - def load_end(self) -> None: - if self._flip_horizontally: - self.im = self.im.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - - -# -# -------------------------------------------------------------------- -# Write TGA file - - -SAVE = { - "1": ("1", 1, 0, 3), - "L": ("L", 8, 0, 3), - "LA": ("LA", 16, 0, 3), - "P": ("P", 8, 1, 1), - "RGB": ("BGR", 24, 0, 2), - "RGBA": ("BGRA", 32, 0, 2), -} - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - try: - rawmode, bits, colormaptype, imagetype = SAVE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as TGA" - raise OSError(msg) from e - - if "rle" in im.encoderinfo: - rle = im.encoderinfo["rle"] - else: - compression = im.encoderinfo.get("compression", im.info.get("compression")) - rle = compression == "tga_rle" - if rle: - imagetype += 8 - - id_section = im.encoderinfo.get("id_section", im.info.get("id_section", "")) - id_len = len(id_section) - if id_len > 255: - id_len = 255 - id_section = id_section[:255] - warnings.warn("id_section has been trimmed to 255 characters") - - if colormaptype: - palette = im.im.getpalette("RGB", "BGR") - colormaplength, colormapentry = len(palette) // 3, 24 - else: - colormaplength, colormapentry = 0, 0 - - if im.mode in ("LA", "RGBA"): - flags = 8 - else: - flags = 0 - - orientation = im.encoderinfo.get("orientation", im.info.get("orientation", -1)) - if orientation > 0: - flags = flags | 0x20 - - fp.write( - o8(id_len) - + o8(colormaptype) - + o8(imagetype) - + o16(0) # colormapfirst - + o16(colormaplength) - + o8(colormapentry) - + o16(0) - + o16(0) - + o16(im.size[0]) - + o16(im.size[1]) - + o8(bits) - + o8(flags) - ) - - if id_section: - fp.write(id_section) - 
- if colormaptype: - fp.write(palette) - - if rle: - ImageFile._save( - im, - fp, - [ImageFile._Tile("tga_rle", (0, 0) + im.size, 0, (rawmode, orientation))], - ) - else: - ImageFile._save( - im, - fp, - [ImageFile._Tile("raw", (0, 0) + im.size, 0, (rawmode, 0, orientation))], - ) - - # write targa version 2 footer - fp.write(b"\000" * 8 + b"TRUEVISION-XFILE." + b"\000") - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(TgaImageFile.format, TgaImageFile) -Image.register_save(TgaImageFile.format, _save) - -Image.register_extensions(TgaImageFile.format, [".tga", ".icb", ".vda", ".vst"]) - -Image.register_mime(TgaImageFile.format, "image/x-tga") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/TiffImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/TiffImagePlugin.py deleted file mode 100644 index de2ce066..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/TiffImagePlugin.py +++ /dev/null @@ -1,2338 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TIFF file handling -# -# TIFF is a flexible, if somewhat aged, image file format originally -# defined by Aldus. Although TIFF supports a wide variety of pixel -# layouts and compression methods, the name doesn't really stand for -# "thousands of incompatible file formats," it just feels that way. -# -# To read TIFF data from a stream, the stream must be seekable. For -# progressive decoding, make sure to use TIFF files where the tag -# directory is placed first in the file. 
-# -# History: -# 1995-09-01 fl Created -# 1996-05-04 fl Handle JPEGTABLES tag -# 1996-05-18 fl Fixed COLORMAP support -# 1997-01-05 fl Fixed PREDICTOR support -# 1997-08-27 fl Added support for rational tags (from Perry Stoll) -# 1998-01-10 fl Fixed seek/tell (from Jan Blom) -# 1998-07-15 fl Use private names for internal variables -# 1999-06-13 fl Rewritten for PIL 1.0 (1.0) -# 2000-10-11 fl Additional fixes for Python 2.0 (1.1) -# 2001-04-17 fl Fixed rewind support (seek to frame 0) (1.2) -# 2001-05-12 fl Added write support for more tags (from Greg Couch) (1.3) -# 2001-12-18 fl Added workaround for broken Matrox library -# 2002-01-18 fl Don't mess up if photometric tag is missing (D. Alan Stewart) -# 2003-05-19 fl Check FILLORDER tag -# 2003-09-26 fl Added RGBa support -# 2004-02-24 fl Added DPI support; fixed rational write support -# 2005-02-07 fl Added workaround for broken Corel Draw 10 files -# 2006-01-09 fl Added support for float/double tags (from Russell Nelson) -# -# Copyright (c) 1997-2006 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-1997 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import io -import itertools -import logging -import math -import os -import struct -import warnings -from collections.abc import Callable, MutableMapping -from fractions import Fraction -from numbers import Number, Rational -from typing import IO, Any, cast - -from . 
import ExifTags, Image, ImageFile, ImageOps, ImagePalette, TiffTags -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._util import DeferredError, is_path -from .TiffTags import TYPES - -TYPE_CHECKING = False -if TYPE_CHECKING: - from collections.abc import Iterator - from typing import NoReturn - - from ._typing import Buffer, IntegralLike, StrOrBytesPath - -logger = logging.getLogger(__name__) - -# Set these to true to force use of libtiff for reading or writing. -READ_LIBTIFF = False -WRITE_LIBTIFF = False -STRIP_SIZE = 65536 - -II = b"II" # little-endian (Intel style) -MM = b"MM" # big-endian (Motorola style) - -# -# -------------------------------------------------------------------- -# Read TIFF files - -# a few tag names, just to make the code below a bit more readable -OSUBFILETYPE = 255 -IMAGEWIDTH = 256 -IMAGELENGTH = 257 -BITSPERSAMPLE = 258 -COMPRESSION = 259 -PHOTOMETRIC_INTERPRETATION = 262 -FILLORDER = 266 -IMAGEDESCRIPTION = 270 -STRIPOFFSETS = 273 -SAMPLESPERPIXEL = 277 -ROWSPERSTRIP = 278 -STRIPBYTECOUNTS = 279 -X_RESOLUTION = 282 -Y_RESOLUTION = 283 -PLANAR_CONFIGURATION = 284 -RESOLUTION_UNIT = 296 -TRANSFERFUNCTION = 301 -SOFTWARE = 305 -DATE_TIME = 306 -ARTIST = 315 -PREDICTOR = 317 -COLORMAP = 320 -TILEWIDTH = 322 -TILELENGTH = 323 -TILEOFFSETS = 324 -TILEBYTECOUNTS = 325 -SUBIFD = 330 -EXTRASAMPLES = 338 -SAMPLEFORMAT = 339 -JPEGTABLES = 347 -YCBCRSUBSAMPLING = 530 -REFERENCEBLACKWHITE = 532 -COPYRIGHT = 33432 -IPTC_NAA_CHUNK = 33723 # newsphoto properties -PHOTOSHOP_CHUNK = 34377 # photoshop properties -ICCPROFILE = 34675 -EXIFIFD = 34665 -XMP = 700 -JPEGQUALITY = 65537 # pseudo-tag by libtiff - -# https://github.com/imagej/ImageJA/blob/master/src/main/java/ij/io/TiffDecoder.java -IMAGEJ_META_DATA_BYTE_COUNTS = 50838 -IMAGEJ_META_DATA = 50839 - -COMPRESSION_INFO = { - # Compression => pil compression name - 1: "raw", - 2: "tiff_ccitt", - 3: "group3", - 4: "group4", - 5: "tiff_lzw", - 6: 
"tiff_jpeg", # obsolete - 7: "jpeg", - 8: "tiff_adobe_deflate", - 32771: "tiff_raw_16", # 16-bit padding - 32773: "packbits", - 32809: "tiff_thunderscan", - 32946: "tiff_deflate", - 34676: "tiff_sgilog", - 34677: "tiff_sgilog24", - 34925: "lzma", - 50000: "zstd", - 50001: "webp", -} - -COMPRESSION_INFO_REV = {v: k for k, v in COMPRESSION_INFO.items()} - -OPEN_INFO = { - # (ByteOrder, PhotoInterpretation, SampleFormat, FillOrder, BitsPerSample, - # ExtraSamples) => mode, rawmode - (II, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (MM, 0, (1,), 1, (1,), ()): ("1", "1;I"), - (II, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (MM, 0, (1,), 2, (1,), ()): ("1", "1;IR"), - (II, 1, (1,), 1, (1,), ()): ("1", "1"), - (MM, 1, (1,), 1, (1,), ()): ("1", "1"), - (II, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (MM, 1, (1,), 2, (1,), ()): ("1", "1;R"), - (II, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (MM, 0, (1,), 1, (2,), ()): ("L", "L;2I"), - (II, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (MM, 0, (1,), 2, (2,), ()): ("L", "L;2IR"), - (II, 1, (1,), 1, (2,), ()): ("L", "L;2"), - (MM, 1, (1,), 1, (2,), ()): ("L", "L;2"), - (II, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (MM, 1, (1,), 2, (2,), ()): ("L", "L;2R"), - (II, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (MM, 0, (1,), 1, (4,), ()): ("L", "L;4I"), - (II, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (MM, 0, (1,), 2, (4,), ()): ("L", "L;4IR"), - (II, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (MM, 1, (1,), 1, (4,), ()): ("L", "L;4"), - (II, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (MM, 1, (1,), 2, (4,), ()): ("L", "L;4R"), - (II, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (MM, 0, (1,), 1, (8,), ()): ("L", "L;I"), - (II, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (MM, 0, (1,), 2, (8,), ()): ("L", "L;IR"), - (II, 1, (1,), 1, (8,), ()): ("L", "L"), - (MM, 1, (1,), 1, (8,), ()): ("L", "L"), - (II, 1, (2,), 1, (8,), ()): ("L", "L"), - (MM, 1, (2,), 1, (8,), ()): ("L", "L"), - (II, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (MM, 1, (1,), 2, (8,), ()): ("L", "L;R"), - (II, 1, 
(1,), 1, (12,), ()): ("I;16", "I;12"), - (II, 0, (1,), 1, (16,), ()): ("I;16", "I;16"), - (II, 1, (1,), 1, (16,), ()): ("I;16", "I;16"), - (MM, 1, (1,), 1, (16,), ()): ("I;16B", "I;16B"), - (II, 1, (1,), 2, (16,), ()): ("I;16", "I;16R"), - (II, 1, (2,), 1, (16,), ()): ("I", "I;16S"), - (MM, 1, (2,), 1, (16,), ()): ("I", "I;16BS"), - (II, 0, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 0, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (32,), ()): ("I", "I;32N"), - (II, 1, (2,), 1, (32,), ()): ("I", "I;32S"), - (MM, 1, (2,), 1, (32,), ()): ("I", "I;32BS"), - (II, 1, (3,), 1, (32,), ()): ("F", "F;32F"), - (MM, 1, (3,), 1, (32,), ()): ("F", "F;32BF"), - (II, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (MM, 1, (1,), 1, (8, 8), (2,)): ("LA", "LA"), - (II, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (MM, 2, (1,), 1, (8, 8, 8), ()): ("RGB", "RGB"), - (II, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (MM, 2, (1,), 2, (8, 8, 8), ()): ("RGB", "RGB;R"), - (II, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (MM, 2, (1,), 1, (8, 8, 8, 8), ()): ("RGBA", "RGBA"), # missing ExtraSamples - (II, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGB", "RGBX"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (0,)): ("RGB", "RGBX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGB", "RGBXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (0, 0)): ("RGB", "RGBXX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGB", "RGBXXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0, 0)): ("RGB", "RGBXXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (1,)): ("RGBA", "RGBa"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (1, 0)): ("RGBA", "RGBaX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (1, 0, 0)): ("RGBA", "RGBaXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", "RGBA"), - (MM, 2, (1,), 1, (8, 8, 8, 8), (2,)): ("RGBA", 
"RGBA"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8), (2, 0)): ("RGBA", "RGBAX"), - (II, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (MM, 2, (1,), 1, (8, 8, 8, 8, 8, 8), (2, 0, 0)): ("RGBA", "RGBAXX"), - (II, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (MM, 2, (1,), 1, (8, 8, 8, 8), (999,)): ("RGBA", "RGBA"), # Corel Draw 10 - (II, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16L"), - (MM, 2, (1,), 1, (16, 16, 16), ()): ("RGB", "RGB;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), ()): ("RGBA", "RGBA;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGB", "RGBX;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (0,)): ("RGB", "RGBX;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (1,)): ("RGBA", "RGBa;16B"), - (II, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16L"), - (MM, 2, (1,), 1, (16, 16, 16, 16), (2,)): ("RGBA", "RGBA;16B"), - (II, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (MM, 3, (1,), 1, (1,), ()): ("P", "P;1"), - (II, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (MM, 3, (1,), 2, (1,), ()): ("P", "P;1R"), - (II, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (MM, 3, (1,), 1, (2,), ()): ("P", "P;2"), - (II, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (MM, 3, (1,), 2, (2,), ()): ("P", "P;2R"), - (II, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (MM, 3, (1,), 1, (4,), ()): ("P", "P;4"), - (II, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (MM, 3, (1,), 2, (4,), ()): ("P", "P;4R"), - (II, 3, (1,), 1, (8,), ()): ("P", "P"), - (MM, 3, (1,), 1, (8,), ()): ("P", "P"), - (II, 3, (1,), 1, (8, 8), (0,)): ("P", "PX"), - (MM, 3, (1,), 1, (8, 8), (0,)): ("P", "PX"), - (II, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (MM, 3, (1,), 1, (8, 8), (2,)): ("PA", "PA"), - (II, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (MM, 3, (1,), 2, (8,), ()): ("P", "P;R"), - (II, 5, (1,), 1, (8, 8, 8, 8), 
()): ("CMYK", "CMYK"), - (MM, 5, (1,), 1, (8, 8, 8, 8), ()): ("CMYK", "CMYK"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8), (0,)): ("CMYK", "CMYKX"), - (II, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (MM, 5, (1,), 1, (8, 8, 8, 8, 8, 8), (0, 0)): ("CMYK", "CMYKXX"), - (II, 5, (1,), 1, (16, 16, 16, 16), ()): ("CMYK", "CMYK;16L"), - (MM, 5, (1,), 1, (16, 16, 16, 16), ()): ("CMYK", "CMYK;16B"), - (II, 6, (1,), 1, (8,), ()): ("L", "L"), - (MM, 6, (1,), 1, (8,), ()): ("L", "L"), - # JPEG compressed images handled by LibTiff and auto-converted to RGBX - # Minimal Baseline TIFF requires YCbCr images to have 3 SamplesPerPixel - (II, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (MM, 6, (1,), 1, (8, 8, 8), ()): ("RGB", "RGBX"), - (II, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), - (MM, 8, (1,), 1, (8, 8, 8), ()): ("LAB", "LAB"), -} - -MAX_SAMPLESPERPIXEL = max(len(key_tp[4]) for key_tp in OPEN_INFO) - -PREFIXES = [ - b"MM\x00\x2a", # Valid TIFF header with big-endian byte order - b"II\x2a\x00", # Valid TIFF header with little-endian byte order - b"MM\x2a\x00", # Invalid TIFF header, assume big-endian - b"II\x00\x2a", # Invalid TIFF header, assume little-endian - b"MM\x00\x2b", # BigTIFF with big-endian byte order - b"II\x2b\x00", # BigTIFF with little-endian byte order -] - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(tuple(PREFIXES)) - - -def _limit_rational( - val: float | Fraction | IFDRational, max_val: int -) -> tuple[IntegralLike, IntegralLike]: - inv = abs(val) > 1 - n_d = IFDRational(1 / val if inv else val).limit_rational(max_val) - return n_d[::-1] if inv else n_d - - -def _limit_signed_rational( - val: IFDRational, max_val: int, min_val: int -) -> tuple[IntegralLike, IntegralLike]: - frac = Fraction(val) - n_d: tuple[IntegralLike, IntegralLike] = frac.numerator, frac.denominator - - if min(float(i) for i in n_d) < min_val: - n_d = _limit_rational(val, abs(min_val)) - - 
n_d_float = tuple(float(i) for i in n_d) - if max(n_d_float) > max_val: - n_d = _limit_rational(n_d_float[0] / n_d_float[1], max_val) - - return n_d - - -## -# Wrapper for TIFF IFDs. - -_load_dispatch = {} -_write_dispatch = {} - - -def _delegate(op: str) -> Any: - def delegate( - self: IFDRational, *args: tuple[float, ...] - ) -> bool | float | Fraction: - return getattr(self._val, op)(*args) - - return delegate - - -class IFDRational(Rational): - """Implements a rational class where 0/0 is a legal value to match - the in the wild use of exif rationals. - - e.g., DigitalZoomRatio - 0.00/0.00 indicates that no digital zoom was used - """ - - """ If the denominator is 0, store this as a float('nan'), otherwise store - as a fractions.Fraction(). Delegate as appropriate - - """ - - __slots__ = ("_numerator", "_denominator", "_val") - - def __init__( - self, value: float | Fraction | IFDRational, denominator: int = 1 - ) -> None: - """ - :param value: either an integer numerator, a - float/rational/other number, or an IFDRational - :param denominator: Optional integer denominator - """ - self._val: Fraction | float - if isinstance(value, IFDRational): - self._numerator = value.numerator - self._denominator = value.denominator - self._val = value._val - return - - if isinstance(value, Fraction): - self._numerator = value.numerator - self._denominator = value.denominator - else: - if TYPE_CHECKING: - self._numerator = cast(IntegralLike, value) - else: - self._numerator = value - self._denominator = denominator - - if denominator == 0: - self._val = float("nan") - elif denominator == 1: - self._val = Fraction(value) - elif int(value) == value: - self._val = Fraction(int(value), denominator) - else: - self._val = Fraction(value / denominator) - - @property - def numerator(self) -> IntegralLike: - return self._numerator - - @property - def denominator(self) -> int: - return self._denominator - - def limit_rational(self, max_denominator: int) -> tuple[IntegralLike, int]: - 
""" - - :param max_denominator: Integer, the maximum denominator value - :returns: Tuple of (numerator, denominator) - """ - - if self.denominator == 0: - return self.numerator, self.denominator - - assert isinstance(self._val, Fraction) - f = self._val.limit_denominator(max_denominator) - return f.numerator, f.denominator - - def __repr__(self) -> str: - return str(float(self._val)) - - def __hash__(self) -> int: # type: ignore[override] - return self._val.__hash__() - - def __eq__(self, other: object) -> bool: - val = self._val - if isinstance(other, IFDRational): - other = other._val - if isinstance(other, float): - val = float(val) - return val == other - - def __getstate__(self) -> list[float | Fraction | IntegralLike]: - return [self._val, self._numerator, self._denominator] - - def __setstate__(self, state: list[float | Fraction | IntegralLike]) -> None: - IFDRational.__init__(self, 0) - _val, _numerator, _denominator = state - assert isinstance(_val, (float, Fraction)) - self._val = _val - if TYPE_CHECKING: - self._numerator = cast(IntegralLike, _numerator) - else: - self._numerator = _numerator - assert isinstance(_denominator, int) - self._denominator = _denominator - - """ a = ['add','radd', 'sub', 'rsub', 'mul', 'rmul', - 'truediv', 'rtruediv', 'floordiv', 'rfloordiv', - 'mod','rmod', 'pow','rpow', 'pos', 'neg', - 'abs', 'trunc', 'lt', 'gt', 'le', 'ge', 'bool', - 'ceil', 'floor', 'round'] - print("\n".join("__%s__ = _delegate('__%s__')" % (s,s) for s in a)) - """ - - __add__ = _delegate("__add__") - __radd__ = _delegate("__radd__") - __sub__ = _delegate("__sub__") - __rsub__ = _delegate("__rsub__") - __mul__ = _delegate("__mul__") - __rmul__ = _delegate("__rmul__") - __truediv__ = _delegate("__truediv__") - __rtruediv__ = _delegate("__rtruediv__") - __floordiv__ = _delegate("__floordiv__") - __rfloordiv__ = _delegate("__rfloordiv__") - __mod__ = _delegate("__mod__") - __rmod__ = _delegate("__rmod__") - __pow__ = _delegate("__pow__") - __rpow__ = 
_delegate("__rpow__") - __pos__ = _delegate("__pos__") - __neg__ = _delegate("__neg__") - __abs__ = _delegate("__abs__") - __trunc__ = _delegate("__trunc__") - __lt__ = _delegate("__lt__") - __gt__ = _delegate("__gt__") - __le__ = _delegate("__le__") - __ge__ = _delegate("__ge__") - __bool__ = _delegate("__bool__") - __ceil__ = _delegate("__ceil__") - __floor__ = _delegate("__floor__") - __round__ = _delegate("__round__") - # Python >= 3.11 - if hasattr(Fraction, "__int__"): - __int__ = _delegate("__int__") - - -_LoaderFunc = Callable[["ImageFileDirectory_v2", bytes, bool], Any] - - -def _register_loader(idx: int, size: int) -> Callable[[_LoaderFunc], _LoaderFunc]: - def decorator(func: _LoaderFunc) -> _LoaderFunc: - from .TiffTags import TYPES - - if func.__name__.startswith("load_"): - TYPES[idx] = func.__name__[5:].replace("_", " ") - _load_dispatch[idx] = size, func # noqa: F821 - return func - - return decorator - - -def _register_writer(idx: int) -> Callable[[Callable[..., Any]], Callable[..., Any]]: - def decorator(func: Callable[..., Any]) -> Callable[..., Any]: - _write_dispatch[idx] = func # noqa: F821 - return func - - return decorator - - -def _register_basic(idx_fmt_name: tuple[int, str, str]) -> None: - from .TiffTags import TYPES - - idx, fmt, name = idx_fmt_name - TYPES[idx] = name - size = struct.calcsize(f"={fmt}") - - def basic_handler( - self: ImageFileDirectory_v2, data: bytes, legacy_api: bool = True - ) -> tuple[Any, ...]: - return self._unpack(f"{len(data) // size}{fmt}", data) - - _load_dispatch[idx] = size, basic_handler # noqa: F821 - _write_dispatch[idx] = lambda self, *values: ( # noqa: F821 - b"".join(self._pack(fmt, value) for value in values) - ) - - -if TYPE_CHECKING: - _IFDv2Base = MutableMapping[int, Any] -else: - _IFDv2Base = MutableMapping - - -class ImageFileDirectory_v2(_IFDv2Base): - """This class represents a TIFF tag directory. To speed things up, we - don't decode tags unless they're asked for. 
- - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v2() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - 'Some Data' - - Individual values are returned as the strings or numbers, sequences are - returned as tuples of the values. - - The tiff metadata type of each item is stored in a dictionary of - tag types in - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v2.tagtype`. The types - are read from a tiff file, guessed from the type added, or added - manually. - - Data Structures: - - * ``self.tagtype = {}`` - - * Key: numerical TIFF tag number - * Value: integer corresponding to the data type from - :py:data:`.TiffTags.TYPES` - - .. versionadded:: 3.0.0 - - 'Internal' data structures: - - * ``self._tags_v2 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data, as tuple for multiple values - - * ``self._tagdata = {}`` - - * Key: numerical TIFF tag number - * Value: undecoded byte string from file - - * ``self._tags_v1 = {}`` - - * Key: numerical TIFF tag number - * Value: decoded data in the v1 format - - Tags will be found in the private attributes ``self._tagdata``, and in - ``self._tags_v2`` once decoded. - - ``self.legacy_api`` is a value for internal use, and shouldn't be changed - from outside code. In cooperation with - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1`, if ``legacy_api`` - is true, then decoded tags will be populated into both ``_tags_v1`` and - ``_tags_v2``. ``_tags_v2`` will be used if this IFD is used in the TIFF - save routine. Tags should be read from ``_tags_v1`` if - ``legacy_api == true``. - - """ - - _load_dispatch: dict[int, tuple[int, _LoaderFunc]] = {} - _write_dispatch: dict[int, Callable[..., Any]] = {} - - def __init__( - self, - ifh: bytes = b"II\x2a\x00\x00\x00\x00\x00", - prefix: bytes | None = None, - group: int | None = None, - ) -> None: - """Initialize an ImageFileDirectory. 
- - To construct an ImageFileDirectory from a real file, pass the 8-byte - magic header to the constructor. To only set the endianness, pass it - as the 'prefix' keyword argument. - - :param ifh: One of the accepted magic headers (cf. PREFIXES); also sets - endianness. - :param prefix: Override the endianness of the file. - """ - if not _accept(ifh): - msg = f"not a TIFF file (header {repr(ifh)} not valid)" - raise SyntaxError(msg) - self._prefix = prefix if prefix is not None else ifh[:2] - if self._prefix == MM: - self._endian = ">" - elif self._prefix == II: - self._endian = "<" - else: - msg = "not a TIFF IFD" - raise SyntaxError(msg) - self._bigtiff = ifh[2] == 43 - self.group = group - self.tagtype: dict[int, int] = {} - """ Dictionary of tag types """ - self.reset() - self.next = ( - self._unpack("Q", ifh[8:])[0] - if self._bigtiff - else self._unpack("L", ifh[4:])[0] - ) - self._legacy_api = False - - prefix = property(lambda self: self._prefix) - offset = property(lambda self: self._offset) - - @property - def legacy_api(self) -> bool: - return self._legacy_api - - @legacy_api.setter - def legacy_api(self, value: bool) -> NoReturn: - msg = "Not allowing setting of legacy api" - raise Exception(msg) - - def reset(self) -> None: - self._tags_v1: dict[int, Any] = {} # will remain empty if legacy_api is false - self._tags_v2: dict[int, Any] = {} # main tag storage - self._tagdata: dict[int, bytes] = {} - self.tagtype = {} # added 2008-06-05 by Florian Hoech - self._next = None - self._offset: int | None = None - - def __str__(self) -> str: - return str(dict(self)) - - def named(self) -> dict[str, Any]: - """ - :returns: dict of name|key: value - - Returns the complete tag dictionary, with named tags where possible. 
- """ - return { - TiffTags.lookup(code, self.group).name: value - for code, value in self.items() - } - - def __len__(self) -> int: - return len(set(self._tagdata) | set(self._tags_v2)) - - def __getitem__(self, tag: int) -> Any: - if tag not in self._tags_v2: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - self[tag] = handler(self, data, self.legacy_api) # check type - val = self._tags_v2[tag] - if self.legacy_api and not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - def __contains__(self, tag: object) -> bool: - return tag in self._tags_v2 or tag in self._tagdata - - def __setitem__(self, tag: int, value: Any) -> None: - self._setitem(tag, value, self.legacy_api) - - def _setitem(self, tag: int, value: Any, legacy_api: bool) -> None: - basetypes = (Number, bytes, str) - - info = TiffTags.lookup(tag, self.group) - values = [value] if isinstance(value, basetypes) else value - - if tag not in self.tagtype: - if info.type: - self.tagtype[tag] = info.type - else: - self.tagtype[tag] = TiffTags.UNDEFINED - if all(isinstance(v, IFDRational) for v in values): - for v in values: - assert isinstance(v, IFDRational) - if v < 0: - self.tagtype[tag] = TiffTags.SIGNED_RATIONAL - break - else: - self.tagtype[tag] = TiffTags.RATIONAL - elif all(isinstance(v, int) for v in values): - short = True - signed_short = True - long = True - for v in values: - assert isinstance(v, int) - if short and not (0 <= v < 2**16): - short = False - if signed_short and not (-(2**15) < v < 2**15): - signed_short = False - if long and v < 0: - long = False - if short: - self.tagtype[tag] = TiffTags.SHORT - elif signed_short: - self.tagtype[tag] = TiffTags.SIGNED_SHORT - elif long: - self.tagtype[tag] = TiffTags.LONG - else: - self.tagtype[tag] = TiffTags.SIGNED_LONG - elif all(isinstance(v, float) for v in values): - self.tagtype[tag] = TiffTags.DOUBLE - elif all(isinstance(v, str) for v in values): - 
self.tagtype[tag] = TiffTags.ASCII - elif all(isinstance(v, bytes) for v in values): - self.tagtype[tag] = TiffTags.BYTE - - if self.tagtype[tag] == TiffTags.UNDEFINED: - values = [ - v.encode("ascii", "replace") if isinstance(v, str) else v - for v in values - ] - elif self.tagtype[tag] == TiffTags.RATIONAL: - values = [float(v) if isinstance(v, int) else v for v in values] - - is_ifd = self.tagtype[tag] == TiffTags.LONG and isinstance(values, dict) - if not is_ifd: - values = tuple( - info.cvt_enum(value) if isinstance(value, str) else value - for value in values - ) - - dest = self._tags_v1 if legacy_api else self._tags_v2 - - # Three branches: - # Spec'd length == 1, Actual length 1, store as element - # Spec'd length == 1, Actual > 1, Warn and truncate. Formerly barfed. - # No Spec, Actual length 1, Formerly (<4.2) returned a 1 element tuple. - # Don't mess with the legacy api, since it's frozen. - if not is_ifd and ( - (info.length == 1) - or self.tagtype[tag] == TiffTags.BYTE - or (info.length is None and len(values) == 1 and not legacy_api) - ): - # Don't mess with the legacy api, since it's frozen. 
- if legacy_api and self.tagtype[tag] in [ - TiffTags.RATIONAL, - TiffTags.SIGNED_RATIONAL, - ]: # rationals - values = (values,) - try: - (dest[tag],) = values - except ValueError: - # We've got a builtin tag with 1 expected entry - warnings.warn( - f"Metadata Warning, tag {tag} had too many entries: " - f"{len(values)}, expected 1" - ) - dest[tag] = values[0] - - else: - # Spec'd length > 1 or undefined - # Unspec'd, and length > 1 - dest[tag] = values - - def __delitem__(self, tag: int) -> None: - self._tags_v2.pop(tag, None) - self._tags_v1.pop(tag, None) - self._tagdata.pop(tag, None) - - def __iter__(self) -> Iterator[int]: - return iter(set(self._tagdata) | set(self._tags_v2)) - - def _unpack(self, fmt: str, data: bytes) -> tuple[Any, ...]: - return struct.unpack(self._endian + fmt, data) - - def _pack(self, fmt: str, *values: Any) -> bytes: - return struct.pack(self._endian + fmt, *values) - - list( - map( - _register_basic, - [ - (TiffTags.SHORT, "H", "short"), - (TiffTags.LONG, "L", "long"), - (TiffTags.SIGNED_BYTE, "b", "signed byte"), - (TiffTags.SIGNED_SHORT, "h", "signed short"), - (TiffTags.SIGNED_LONG, "l", "signed long"), - (TiffTags.FLOAT, "f", "float"), - (TiffTags.DOUBLE, "d", "double"), - (TiffTags.IFD, "L", "long"), - (TiffTags.LONG8, "Q", "long8"), - ], - ) - ) - - @_register_loader(1, 1) # Basic type, except for the legacy API. - def load_byte(self, data: bytes, legacy_api: bool = True) -> bytes: - return data - - @_register_writer(1) # Basic type, except for the legacy API. 
- def write_byte(self, data: bytes | int | IFDRational) -> bytes: - if isinstance(data, IFDRational): - data = int(data) - if isinstance(data, int): - data = bytes((data,)) - return data - - @_register_loader(2, 1) - def load_string(self, data: bytes, legacy_api: bool = True) -> str: - if data.endswith(b"\0"): - data = data[:-1] - return data.decode("latin-1", "replace") - - @_register_writer(2) - def write_string(self, value: str | bytes | int) -> bytes: - # remerge of https://github.com/python-pillow/Pillow/pull/1416 - if isinstance(value, int): - value = str(value) - if not isinstance(value, bytes): - value = value.encode("ascii", "replace") - return value + b"\0" - - @_register_loader(5, 8) - def load_rational( - self, data: bytes, legacy_api: bool = True - ) -> tuple[tuple[int, int] | IFDRational, ...]: - vals = self._unpack(f"{len(data) // 4}L", data) - - def combine(a: int, b: int) -> tuple[int, int] | IFDRational: - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in zip(vals[::2], vals[1::2])) - - @_register_writer(5) - def write_rational(self, *values: IFDRational) -> bytes: - return b"".join( - self._pack("2L", *_limit_rational(frac, 2**32 - 1)) for frac in values - ) - - @_register_loader(7, 1) - def load_undefined(self, data: bytes, legacy_api: bool = True) -> bytes: - return data - - @_register_writer(7) - def write_undefined(self, value: bytes | int | IFDRational) -> bytes: - if isinstance(value, IFDRational): - value = int(value) - if isinstance(value, int): - value = str(value).encode("ascii", "replace") - return value - - @_register_loader(10, 8) - def load_signed_rational( - self, data: bytes, legacy_api: bool = True - ) -> tuple[tuple[int, int] | IFDRational, ...]: - vals = self._unpack(f"{len(data) // 4}l", data) - - def combine(a: int, b: int) -> tuple[int, int] | IFDRational: - return (a, b) if legacy_api else IFDRational(a, b) - - return tuple(combine(num, denom) for num, denom in 
zip(vals[::2], vals[1::2])) - - @_register_writer(10) - def write_signed_rational(self, *values: IFDRational) -> bytes: - return b"".join( - self._pack("2l", *_limit_signed_rational(frac, 2**31 - 1, -(2**31))) - for frac in values - ) - - def _ensure_read(self, fp: IO[bytes], size: int) -> bytes: - ret = fp.read(size) - if len(ret) != size: - msg = ( - "Corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(ret)}. " - ) - raise OSError(msg) - return ret - - def load(self, fp: IO[bytes]) -> None: - self.reset() - self._offset = fp.tell() - - try: - tag_count = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("H", self._ensure_read(fp, 2)) - )[0] - for i in range(tag_count): - tag, typ, count, data = ( - self._unpack("HHQ8s", self._ensure_read(fp, 20)) - if self._bigtiff - else self._unpack("HHL4s", self._ensure_read(fp, 12)) - ) - - tagname = TiffTags.lookup(tag, self.group).name - typname = TYPES.get(typ, "unknown") - msg = f"tag: {tagname} ({tag}) - type: {typname} ({typ})" - - try: - unit_size, handler = self._load_dispatch[typ] - except KeyError: - logger.debug("%s - unsupported type %s", msg, typ) - continue # ignore unsupported type - size = count * unit_size - if size > (8 if self._bigtiff else 4): - here = fp.tell() - (offset,) = self._unpack("Q" if self._bigtiff else "L", data) - msg += f" Tag Location: {here} - Data Location: {offset}" - fp.seek(offset) - data = ImageFile._safe_read(fp, size) - fp.seek(here) - else: - data = data[:size] - - if len(data) != size: - warnings.warn( - "Possibly corrupt EXIF data. " - f"Expecting to read {size} bytes but only got {len(data)}." 
- f" Skipping tag {tag}" - ) - logger.debug(msg) - continue - - if not data: - logger.debug(msg) - continue - - self._tagdata[tag] = data - self.tagtype[tag] = typ - - msg += " - value: " - msg += f"<table: {size} bytes>" if size > 32 else repr(data) - - logger.debug(msg) - - (self.next,) = ( - self._unpack("Q", self._ensure_read(fp, 8)) - if self._bigtiff - else self._unpack("L", self._ensure_read(fp, 4)) - ) - except OSError as msg: - warnings.warn(str(msg)) - return - - def _get_ifh(self) -> bytes: - ifh = self._prefix + self._pack("H", 43 if self._bigtiff else 42) - if self._bigtiff: - ifh += self._pack("HH", 8, 0) - ifh += self._pack("Q", 16) if self._bigtiff else self._pack("L", 8) - - return ifh - - def tobytes(self, offset: int = 0) -> bytes: - # FIXME What about tagdata? - result = self._pack("Q" if self._bigtiff else "H", len(self._tags_v2)) - - entries: list[tuple[int, int, int, bytes, bytes]] = [] - - fmt = "Q" if self._bigtiff else "L" - fmt_size = 8 if self._bigtiff else 4 - offset += ( - len(result) + len(self._tags_v2) * (20 if self._bigtiff else 12) + fmt_size - ) - stripoffsets = None - - # pass 1: convert tags to binary format - # always write tags in ascending order - for tag, value in sorted(self._tags_v2.items()): - if tag == STRIPOFFSETS: - stripoffsets = len(entries) - typ = self.tagtype[tag] - logger.debug("Tag %s, Type: %s, Value: %s", tag, typ, repr(value)) - is_ifd = typ == TiffTags.LONG and isinstance(value, dict) - if is_ifd: - ifd = ImageFileDirectory_v2(self._get_ifh(), group=tag) - values = self._tags_v2[tag] - for ifd_tag, ifd_value in values.items(): - ifd[ifd_tag] = ifd_value - data = ifd.tobytes(offset) - else: - values = value if isinstance(value, tuple) else (value,) - data = self._write_dispatch[typ](self, *values) - - tagname = TiffTags.lookup(tag, self.group).name - typname = "ifd" if is_ifd else TYPES.get(typ, "unknown") - msg = f"save: {tagname} ({tag}) - type: {typname} ({typ}) - value: " - msg += f"<table: {len(data)} bytes>" if len(data) >= 16 else str(values) - 
logger.debug(msg) - - # count is sum of lengths for string and arbitrary data - if is_ifd: - count = 1 - elif typ in [TiffTags.BYTE, TiffTags.ASCII, TiffTags.UNDEFINED]: - count = len(data) - else: - count = len(values) - # figure out if data fits into the entry - if len(data) <= fmt_size: - entries.append((tag, typ, count, data.ljust(fmt_size, b"\0"), b"")) - else: - entries.append((tag, typ, count, self._pack(fmt, offset), data)) - offset += (len(data) + 1) // 2 * 2 # pad to word - - # update strip offset data to point beyond auxiliary data - if stripoffsets is not None: - tag, typ, count, value, data = entries[stripoffsets] - if data: - size, handler = self._load_dispatch[typ] - values = [val + offset for val in handler(self, data, self.legacy_api)] - data = self._write_dispatch[typ](self, *values) - else: - value = self._pack(fmt, self._unpack(fmt, value)[0] + offset) - entries[stripoffsets] = tag, typ, count, value, data - - # pass 2: write entries to file - for tag, typ, count, value, data in entries: - logger.debug("%s %s %s %s %s", tag, typ, count, repr(value), repr(data)) - result += self._pack( - "HHQ8s" if self._bigtiff else "HHL4s", tag, typ, count, value - ) - - # -- overwrite here for multi-page -- - result += self._pack(fmt, 0) # end of entries - - # pass 3: write auxiliary data to file - for tag, typ, count, value, data in entries: - result += data - if len(data) & 1: - result += b"\0" - - return result - - def save(self, fp: IO[bytes]) -> int: - if fp.tell() == 0: # skip TIFF header on subsequent pages - fp.write(self._get_ifh()) - - offset = fp.tell() - result = self.tobytes(offset) - fp.write(result) - return offset + len(result) - - -ImageFileDirectory_v2._load_dispatch = _load_dispatch -ImageFileDirectory_v2._write_dispatch = _write_dispatch -for idx, name in TYPES.items(): - name = name.replace(" ", "_") - setattr(ImageFileDirectory_v2, f"load_{name}", _load_dispatch[idx][1]) - setattr(ImageFileDirectory_v2, f"write_{name}", 
_write_dispatch[idx]) -del _load_dispatch, _write_dispatch, idx, name - - -# Legacy ImageFileDirectory support. -class ImageFileDirectory_v1(ImageFileDirectory_v2): - """This class represents the **legacy** interface to a TIFF tag directory. - - Exposes a dictionary interface of the tags in the directory:: - - ifd = ImageFileDirectory_v1() - ifd[key] = 'Some Data' - ifd.tagtype[key] = TiffTags.ASCII - print(ifd[key]) - ('Some Data',) - - Also contains a dictionary of tag types as read from the tiff image file, - :attr:`~PIL.TiffImagePlugin.ImageFileDirectory_v1.tagtype`. - - Values are returned as a tuple. - - .. deprecated:: 3.0.0 - """ - - def __init__(self, *args: Any, **kwargs: Any) -> None: - super().__init__(*args, **kwargs) - self._legacy_api = True - - tags = property(lambda self: self._tags_v1) - tagdata = property(lambda self: self._tagdata) - - # defined in ImageFileDirectory_v2 - tagtype: dict[int, int] - """Dictionary of tag types""" - - @classmethod - def from_v2(cls, original: ImageFileDirectory_v2) -> ImageFileDirectory_v1: - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance. - - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - - """ - - ifd = cls(prefix=original.prefix) - ifd._tagdata = original._tagdata - ifd.tagtype = original.tagtype - ifd.next = original.next # an indicator for multipage tiffs - return ifd - - def to_v2(self) -> ImageFileDirectory_v2: - """Returns an - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - instance with the same data as is contained in the original - :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v1` - instance. 
- - :returns: :py:class:`~PIL.TiffImagePlugin.ImageFileDirectory_v2` - - """ - - ifd = ImageFileDirectory_v2(prefix=self.prefix) - ifd._tagdata = dict(self._tagdata) - ifd.tagtype = dict(self.tagtype) - ifd._tags_v2 = dict(self._tags_v2) - return ifd - - def __contains__(self, tag: object) -> bool: - return tag in self._tags_v1 or tag in self._tagdata - - def __len__(self) -> int: - return len(set(self._tagdata) | set(self._tags_v1)) - - def __iter__(self) -> Iterator[int]: - return iter(set(self._tagdata) | set(self._tags_v1)) - - def __setitem__(self, tag: int, value: Any) -> None: - for legacy_api in (False, True): - self._setitem(tag, value, legacy_api) - - def __getitem__(self, tag: int) -> Any: - if tag not in self._tags_v1: # unpack on the fly - data = self._tagdata[tag] - typ = self.tagtype[tag] - size, handler = self._load_dispatch[typ] - for legacy in (False, True): - self._setitem(tag, handler(self, data, legacy), legacy) - val = self._tags_v1[tag] - if not isinstance(val, (tuple, bytes)): - val = (val,) - return val - - -# undone -- switch this pointer -ImageFileDirectory = ImageFileDirectory_v1 - - -## -# Image plugin for TIFF files. 
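The dictionary interface described in the ImageFileDirectory docstrings above can be sketched as follows. This is a usage sketch, not part of the diff: it assumes Pillow is importable as `PIL`, and the tag number 256 (ImageWidth) together with its spec'd LONG type comes from `TiffTags.TAGS_V2`, which `_setitem` consults when guessing a tag type.

```python
from PIL import TiffTags
from PIL.TiffImagePlugin import ImageFileDirectory_v2

# Default constructor uses the little-endian magic header b"II\x2a\x00...".
ifd = ImageFileDirectory_v2()

# Assigning through the mapping interface triggers _setitem, which looks up
# tag 256 (ImageWidth) in TiffTags and records its spec'd type in ifd.tagtype.
ifd[256] = 100

print(ifd[256])                            # plain value, not a 1-tuple (v2 API)
print(ifd.tagtype[256] == TiffTags.LONG)   # type taken from the tag's spec
print(ifd.named())                         # tag numbers resolved to names
```

Note the contrast with `ImageFileDirectory_v1`, where the same lookup would return a tuple (`(100,)`) because the legacy API always wraps scalar values.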
- - -class TiffImageFile(ImageFile.ImageFile): - format = "TIFF" - format_description = "Adobe TIFF" - _close_exclusive_fp_after_loading = False - - def __init__( - self, - fp: StrOrBytesPath | IO[bytes], - filename: str | bytes | None = None, - ) -> None: - self.tag_v2: ImageFileDirectory_v2 - """ Image file directory (tag dictionary) """ - - self.tag: ImageFileDirectory_v1 - """ Legacy tag entries """ - - super().__init__(fp, filename) - - def _open(self) -> None: - """Open the first image in a TIFF file""" - - # Header - assert self.fp is not None - ifh = self.fp.read(8) - if ifh[2] == 43: - ifh += self.fp.read(8) - - self.tag_v2 = ImageFileDirectory_v2(ifh) - - # setup frame pointers - self.__first = self.__next = self.tag_v2.next - self.__frame = -1 - self._fp = self.fp - self._frame_pos: list[int] = [] - self._n_frames: int | None = None - - logger.debug("*** TiffImageFile._open ***") - logger.debug("- __first: %s", self.__first) - logger.debug("- ifh: %s", repr(ifh)) # Use repr to avoid str(bytes) - - # and load the first frame - self._seek(0) - - @property - def n_frames(self) -> int: - current_n_frames = self._n_frames - if current_n_frames is None: - current = self.tell() - self._seek(len(self._frame_pos)) - while self._n_frames is None: - self._seek(self.tell() + 1) - self.seek(current) - assert self._n_frames is not None - return self._n_frames - - def seek(self, frame: int) -> None: - """Select a given frame as current image""" - if not self._seek_check(frame): - return - self._seek(frame) - if self._im is not None and ( - self.im.size != self._tile_size - or self.im.mode != self.mode - or self.readonly - ): - self._im = None - - def _seek(self, frame: int) -> None: - if isinstance(self._fp, DeferredError): - raise self._fp.ex - self.fp = self._fp - - while len(self._frame_pos) <= frame: - if not self.__next: - msg = "no more images in TIFF file" - raise EOFError(msg) - logger.debug( - "Seeking to frame %s, on frame %s, __next %s, location: %s", - 
frame, - self.__frame, - self.__next, - self.fp.tell(), - ) - if self.__next >= 2**63: - msg = "Unable to seek to frame" - raise ValueError(msg) - self.fp.seek(self.__next) - self._frame_pos.append(self.__next) - logger.debug("Loading tags, location: %s", self.fp.tell()) - self.tag_v2.load(self.fp) - if self.tag_v2.next in self._frame_pos: - # This IFD has already been processed - # Declare this to be the end of the image - self.__next = 0 - else: - self.__next = self.tag_v2.next - if self.__next == 0: - self._n_frames = frame + 1 - if len(self._frame_pos) == 1: - self.is_animated = self.__next != 0 - self.__frame += 1 - self.fp.seek(self._frame_pos[frame]) - self.tag_v2.load(self.fp) - if XMP in self.tag_v2: - xmp = self.tag_v2[XMP] - if isinstance(xmp, tuple) and len(xmp) == 1: - xmp = xmp[0] - self.info["xmp"] = xmp - elif "xmp" in self.info: - del self.info["xmp"] - self._reload_exif() - # fill the legacy tag/ifd entries - self.tag = self.ifd = ImageFileDirectory_v1.from_v2(self.tag_v2) - self.__frame = frame - self._setup() - - def tell(self) -> int: - """Return the current frame number""" - return self.__frame - - def get_photoshop_blocks(self) -> dict[int, dict[str, bytes]]: - """ - Returns a dictionary of Photoshop "Image Resource Blocks". - The keys are the image resource ID. For more information, see - https://www.adobe.com/devnet-apps/photoshop/fileformatashtml/#50577409_pgfId-1037727 - - :returns: Photoshop "Image Resource Blocks" in a dictionary. 
- """ - blocks = {} - val = self.tag_v2.get(ExifTags.Base.ImageResources) - if val: - while val.startswith(b"8BIM"): - id = i16(val[4:6]) - n = math.ceil((val[6] + 1) / 2) * 2 - size = i32(val[6 + n : 10 + n]) - data = val[10 + n : 10 + n + size] - blocks[id] = {"data": data} - - val = val[math.ceil((10 + n + size) / 2) * 2 :] - return blocks - - def load(self) -> Image.core.PixelAccess | None: - if self.tile and self.use_load_libtiff: - return self._load_libtiff() - return super().load() - - def load_prepare(self) -> None: - if self._im is None: - Image._decompression_bomb_check(self._tile_size) - self.im = Image.core.new(self.mode, self._tile_size) - ImageFile.ImageFile.load_prepare(self) - - def load_end(self) -> None: - # allow closing if we're on the first frame, there's no next - # This is the ImageFile.load path only, libtiff specific below. - if not self.is_animated: - self._close_exclusive_fp_after_loading = True - - # load IFD data from fp before it is closed - exif = self.getexif() - for key in TiffTags.TAGS_V2_GROUPS: - if key not in exif: - continue - exif.get_ifd(key) - - ImageOps.exif_transpose(self, in_place=True) - if ExifTags.Base.Orientation in self.tag_v2: - del self.tag_v2[ExifTags.Base.Orientation] - - def _load_libtiff(self) -> Image.core.PixelAccess | None: - """Overload method triggered when we detect a compressed tiff - Calls out to libtiff""" - - Image.Image.load(self) - - self.load_prepare() - - if not len(self.tile) == 1: - msg = "Not exactly one tile" - raise OSError(msg) - - # (self._compression, (extents tuple), - # 0, (rawmode, self._compression, fp)) - extents = self.tile[0][1] - args = self.tile[0][3] - - # To be nice on memory footprint, if there's a - # file descriptor, use that instead of reading - # into a string in python. 
- assert self.fp is not None - try: - fp = hasattr(self.fp, "fileno") and self.fp.fileno() - # flush the file descriptor, prevents error on pypy 2.4+ - # should also eliminate the need for fp.tell - # in _seek - if hasattr(self.fp, "flush"): - self.fp.flush() - except OSError: - # io.BytesIO have a fileno, but returns an OSError if - # it doesn't use a file descriptor. - fp = False - - if fp: - assert isinstance(args, tuple) - args_list = list(args) - args_list[2] = fp - args = tuple(args_list) - - decoder = Image._getdecoder(self.mode, "libtiff", args, self.decoderconfig) - try: - decoder.setimage(self.im, extents) - except ValueError as e: - msg = "Couldn't set the image" - raise OSError(msg) from e - - close_self_fp = self._exclusive_fp and not self.is_animated - if hasattr(self.fp, "getvalue"): - # We've got a stringio like thing passed in. Yay for all in memory. - # The decoder needs the entire file in one shot, so there's not - # a lot we can do here other than give it the entire file. - # unless we could do something like get the address of the - # underlying string for stringio. - # - # Rearranging for supporting byteio items, since they have a fileno - # that returns an OSError if there's no underlying fp. Easier to - # deal with here by reordering. - logger.debug("have getvalue. just sending in a string from getvalue") - n, err = decoder.decode(self.fp.getvalue()) - elif fp: - # we've got a actual file on disk, pass in the fp. - logger.debug("have fileno, calling fileno version of the decoder.") - if not close_self_fp: - self.fp.seek(0) - # Save and restore the file position, because libtiff will move it - # outside of the Python runtime, and that will confuse - # io.BufferedReader and possible others. - # NOTE: This must use os.lseek(), and not fp.tell()/fp.seek(), - # because the buffer read head already may not equal the actual - # file position, and fp.seek() may just adjust it's internal - # pointer and not actually seek the OS file handle. 
- pos = os.lseek(fp, 0, os.SEEK_CUR) - # 4 bytes, otherwise the trace might error out - n, err = decoder.decode(b"fpfp") - os.lseek(fp, pos, os.SEEK_SET) - else: - # we have something else. - logger.debug("don't have fileno or getvalue. just reading") - self.fp.seek(0) - # UNDONE -- so much for that buffer size thing. - n, err = decoder.decode(self.fp.read()) - - self.tile = [] - self.readonly = 0 - - self.load_end() - - if close_self_fp: - self.fp.close() - self.fp = None # might be shared - - if err < 0: - msg = f"decoder error {err}" - raise OSError(msg) - - return Image.Image.load(self) - - def _setup(self) -> None: - """Setup this image object based on current tags""" - - if 0xBC01 in self.tag_v2: - msg = "Windows Media Photo files not yet supported" - raise OSError(msg) - - # extract relevant tags - self._compression = COMPRESSION_INFO[self.tag_v2.get(COMPRESSION, 1)] - self._planar_configuration = self.tag_v2.get(PLANAR_CONFIGURATION, 1) - - # photometric is a required tag, but not everyone is reading - # the specification - photo = self.tag_v2.get(PHOTOMETRIC_INTERPRETATION, 0) - - # old style jpeg compression images most certainly are YCbCr - if self._compression == "tiff_jpeg": - photo = 6 - - fillorder = self.tag_v2.get(FILLORDER, 1) - - logger.debug("*** Summary ***") - logger.debug("- compression: %s", self._compression) - logger.debug("- photometric_interpretation: %s", photo) - logger.debug("- planar_configuration: %s", self._planar_configuration) - logger.debug("- fill_order: %s", fillorder) - logger.debug("- YCbCr subsampling: %s", self.tag_v2.get(YCBCRSUBSAMPLING)) - - # size - try: - xsize = self.tag_v2[IMAGEWIDTH] - ysize = self.tag_v2[IMAGELENGTH] - except KeyError as e: - msg = "Missing dimensions" - raise TypeError(msg) from e - if not isinstance(xsize, int) or not isinstance(ysize, int): - msg = "Invalid dimensions" - raise ValueError(msg) - self._tile_size = xsize, ysize - orientation = self.tag_v2.get(ExifTags.Base.Orientation) - if 
orientation in (5, 6, 7, 8): - self._size = ysize, xsize - else: - self._size = xsize, ysize - - logger.debug("- size: %s", self.size) - - sample_format = self.tag_v2.get(SAMPLEFORMAT, (1,)) - if len(sample_format) > 1 and max(sample_format) == min(sample_format) == 1: - # SAMPLEFORMAT is properly per band, so an RGB image will - # be (1,1,1). But, we don't support per band pixel types, - # and anything more than one band is a uint8. So, just - # take the first element. Revisit this if adding support - # for more exotic images. - sample_format = (1,) - - bps_tuple = self.tag_v2.get(BITSPERSAMPLE, (1,)) - extra_tuple = self.tag_v2.get(EXTRASAMPLES, ()) - if photo in (2, 6, 8): # RGB, YCbCr, LAB - bps_count = 3 - elif photo == 5: # CMYK - bps_count = 4 - else: - bps_count = 1 - bps_count += len(extra_tuple) - bps_actual_count = len(bps_tuple) - samples_per_pixel = self.tag_v2.get( - SAMPLESPERPIXEL, - 3 if self._compression == "tiff_jpeg" and photo in (2, 6) else 1, - ) - - if samples_per_pixel > MAX_SAMPLESPERPIXEL: - # DOS check, samples_per_pixel can be a Long, and we extend the tuple below - logger.error( - "More samples per pixel than can be decoded: %s", samples_per_pixel - ) - msg = "Invalid value for samples per pixel" - raise SyntaxError(msg) - - if samples_per_pixel < bps_actual_count: - # If a file has more values in bps_tuple than expected, - # remove the excess. - bps_tuple = bps_tuple[:samples_per_pixel] - elif samples_per_pixel > bps_actual_count and bps_actual_count == 1: - # If a file has only one value in bps_tuple, when it should have more, - # presume it is the same number of bits for all of the samples. 
- bps_tuple = bps_tuple * samples_per_pixel - - if len(bps_tuple) != samples_per_pixel: - msg = "unknown data organization" - raise SyntaxError(msg) - - # mode: check photometric interpretation and bits per pixel - key = ( - self.tag_v2.prefix, - photo, - sample_format, - fillorder, - bps_tuple, - extra_tuple, - ) - logger.debug("format key: %s", key) - try: - self._mode, rawmode = OPEN_INFO[key] - except KeyError as e: - logger.debug("- unsupported format") - msg = "unknown pixel mode" - raise SyntaxError(msg) from e - - logger.debug("- raw mode: %s", rawmode) - logger.debug("- pil mode: %s", self.mode) - - self.info["compression"] = self._compression - - xres = self.tag_v2.get(X_RESOLUTION, 1) - yres = self.tag_v2.get(Y_RESOLUTION, 1) - - if xres and yres: - resunit = self.tag_v2.get(RESOLUTION_UNIT) - if resunit == 2: # dots per inch - self.info["dpi"] = (xres, yres) - elif resunit == 3: # dots per centimeter. convert to dpi - self.info["dpi"] = (xres * 2.54, yres * 2.54) - elif resunit is None: # used to default to 1, but now 2) - self.info["dpi"] = (xres, yres) - # For backward compatibility, - # we also preserve the old behavior - self.info["resolution"] = xres, yres - else: # No absolute unit of measurement - self.info["resolution"] = xres, yres - - # build tile descriptors - x = y = layer = 0 - self.tile = [] - self.use_load_libtiff = READ_LIBTIFF or self._compression != "raw" - if self.use_load_libtiff: - # Decoder expects entire file as one tile. - # There's a buffer size limit in load (64k) - # so large g4 images will fail if we use that - # function. - # - # Setup the one tile for the whole image, then - # use the _load_libtiff function. - - # libtiff handles the fillmode for us, so 1;IR should - # actually be 1;I. Including the R double reverses the - # bits, so stripes of the image are reversed. 
See - # https://github.com/python-pillow/Pillow/issues/279 - if fillorder == 2: - # Replace fillorder with fillorder=1 - key = key[:3] + (1,) + key[4:] - logger.debug("format key: %s", key) - # this should always work, since all the - # fillorder==2 modes have a corresponding - # fillorder=1 mode - self._mode, rawmode = OPEN_INFO[key] - # YCbCr images with new jpeg compression with pixels in one plane - # unpacked straight into RGB values - if ( - photo == 6 - and self._compression == "jpeg" - and self._planar_configuration == 1 - ): - rawmode = "RGB" - # libtiff always returns the bytes in native order. - # we're expecting image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. - elif rawmode == "I;16": - rawmode = "I;16N" - elif rawmode.endswith((";16B", ";16L")): - rawmode = rawmode[:-1] + "N" - - # Offset in the tile tuple is 0, we go from 0,0 to - # w,h, and we only do this once -- eds - a = (rawmode, self._compression, False, self.tag_v2.offset) - self.tile.append(ImageFile._Tile("libtiff", (0, 0, xsize, ysize), 0, a)) - - elif STRIPOFFSETS in self.tag_v2 or TILEOFFSETS in self.tag_v2: - # striped image - if STRIPOFFSETS in self.tag_v2: - offsets = self.tag_v2[STRIPOFFSETS] - h = self.tag_v2.get(ROWSPERSTRIP, ysize) - w = xsize - else: - # tiled image - offsets = self.tag_v2[TILEOFFSETS] - tilewidth = self.tag_v2.get(TILEWIDTH) - h = self.tag_v2.get(TILELENGTH) - if not isinstance(tilewidth, int) or not isinstance(h, int): - msg = "Invalid tile dimensions" - raise ValueError(msg) - w = tilewidth - - if w == xsize and h == ysize and self._planar_configuration != 2: - # Every tile covers the image. 
Only use the last offset - offsets = offsets[-1:] - - for offset in offsets: - if x + w > xsize: - stride = w * sum(bps_tuple) / 8 # bytes per line - else: - stride = 0 - - tile_rawmode = rawmode - if self._planar_configuration == 2: - # each band on it's own layer - tile_rawmode = rawmode[layer] - # adjust stride width accordingly - stride /= bps_count - - args = (tile_rawmode, int(stride), 1) - self.tile.append( - ImageFile._Tile( - self._compression, - (x, y, min(x + w, xsize), min(y + h, ysize)), - offset, - args, - ) - ) - x += w - if x >= xsize: - x, y = 0, y + h - if y >= ysize: - y = 0 - layer += 1 - else: - logger.debug("- unsupported data organization") - msg = "unknown data organization" - raise SyntaxError(msg) - - # Fix up info. - if ICCPROFILE in self.tag_v2: - self.info["icc_profile"] = self.tag_v2[ICCPROFILE] - - # fixup palette descriptor - - if self.mode in ["P", "PA"]: - palette = [o8(b // 256) for b in self.tag_v2[COLORMAP]] - self.palette = ImagePalette.raw("RGB;L", b"".join(palette)) - - -# -# -------------------------------------------------------------------- -# Write TIFF files - -# little endian is default except for image modes with -# explicit big endian byte-order - -SAVE_INFO = { - # mode => rawmode, byteorder, photometrics, - # sampleformat, bitspersample, extra - "1": ("1", II, 1, 1, (1,), None), - "L": ("L", II, 1, 1, (8,), None), - "LA": ("LA", II, 1, 1, (8, 8), 2), - "P": ("P", II, 3, 1, (8,), None), - "PA": ("PA", II, 3, 1, (8, 8), 2), - "I": ("I;32S", II, 1, 2, (32,), None), - "I;16": ("I;16", II, 1, 1, (16,), None), - "I;16L": ("I;16L", II, 1, 1, (16,), None), - "F": ("F;32F", II, 1, 3, (32,), None), - "RGB": ("RGB", II, 2, 1, (8, 8, 8), None), - "RGBX": ("RGBX", II, 2, 1, (8, 8, 8, 8), 0), - "RGBA": ("RGBA", II, 2, 1, (8, 8, 8, 8), 2), - "CMYK": ("CMYK", II, 5, 1, (8, 8, 8, 8), None), - "YCbCr": ("YCbCr", II, 6, 1, (8, 8, 8), None), - "LAB": ("LAB", II, 8, 1, (8, 8, 8), None), - "I;16B": ("I;16B", MM, 1, 1, (16,), None), -} - 
- -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - try: - rawmode, prefix, photo, format, bits, extra = SAVE_INFO[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as TIFF" - raise OSError(msg) from e - - encoderinfo = im.encoderinfo - encoderconfig = im.encoderconfig - - ifd = ImageFileDirectory_v2(prefix=prefix) - if encoderinfo.get("big_tiff"): - ifd._bigtiff = True - - try: - compression = encoderinfo["compression"] - except KeyError: - compression = im.info.get("compression") - if isinstance(compression, int): - # compression value may be from BMP. Ignore it - compression = None - if compression is None: - compression = "raw" - elif compression == "tiff_jpeg": - # OJPEG is obsolete, so use new-style JPEG compression instead - compression = "jpeg" - elif compression == "tiff_deflate": - compression = "tiff_adobe_deflate" - - libtiff = WRITE_LIBTIFF or compression != "raw" - - # required for color libtiff images - ifd[PLANAR_CONFIGURATION] = 1 - - ifd[IMAGEWIDTH] = im.size[0] - ifd[IMAGELENGTH] = im.size[1] - - # write any arbitrary tags passed in as an ImageFileDirectory - if "tiffinfo" in encoderinfo: - info = encoderinfo["tiffinfo"] - elif "exif" in encoderinfo: - info = encoderinfo["exif"] - if isinstance(info, bytes): - exif = Image.Exif() - exif.load(info) - info = exif - else: - info = {} - logger.debug("Tiffinfo Keys: %s", list(info)) - if isinstance(info, ImageFileDirectory_v1): - info = info.to_v2() - for key in info: - if isinstance(info, Image.Exif) and key in TiffTags.TAGS_V2_GROUPS: - ifd[key] = info.get_ifd(key) - else: - ifd[key] = info.get(key) - try: - ifd.tagtype[key] = info.tagtype[key] - except Exception: - pass # might not be an IFD. 
Might not have populated type - - legacy_ifd = {} - if hasattr(im, "tag"): - legacy_ifd = im.tag.to_v2() - - supplied_tags = {**legacy_ifd, **getattr(im, "tag_v2", {})} - for tag in ( - # IFD offset that may not be correct in the saved image - EXIFIFD, - # Determined by the image format and should not be copied from legacy_ifd. - SAMPLEFORMAT, - ): - if tag in supplied_tags: - del supplied_tags[tag] - - # additions written by Greg Couch, gregc@cgl.ucsf.edu - # inspired by image-sig posting from Kevin Cazabon, kcazabon@home.com - if hasattr(im, "tag_v2"): - # preserve tags from original TIFF image file - for key in ( - RESOLUTION_UNIT, - X_RESOLUTION, - Y_RESOLUTION, - IPTC_NAA_CHUNK, - PHOTOSHOP_CHUNK, - XMP, - ): - if key in im.tag_v2: - if key == IPTC_NAA_CHUNK and im.tag_v2.tagtype[key] not in ( - TiffTags.BYTE, - TiffTags.UNDEFINED, - ): - del supplied_tags[key] - else: - ifd[key] = im.tag_v2[key] - ifd.tagtype[key] = im.tag_v2.tagtype[key] - - # preserve ICC profile (should also work when saving other formats - # which support profiles as TIFF) -- 2008-06-06 Florian Hoech - icc = encoderinfo.get("icc_profile", im.info.get("icc_profile")) - if icc: - ifd[ICCPROFILE] = icc - - for key, name in [ - (IMAGEDESCRIPTION, "description"), - (X_RESOLUTION, "resolution"), - (Y_RESOLUTION, "resolution"), - (X_RESOLUTION, "x_resolution"), - (Y_RESOLUTION, "y_resolution"), - (RESOLUTION_UNIT, "resolution_unit"), - (SOFTWARE, "software"), - (DATE_TIME, "date_time"), - (ARTIST, "artist"), - (COPYRIGHT, "copyright"), - ]: - if name in encoderinfo: - ifd[key] = encoderinfo[name] - - dpi = encoderinfo.get("dpi") - if dpi: - ifd[RESOLUTION_UNIT] = 2 - ifd[X_RESOLUTION] = dpi[0] - ifd[Y_RESOLUTION] = dpi[1] - - if bits != (1,): - ifd[BITSPERSAMPLE] = bits - if len(bits) != 1: - ifd[SAMPLESPERPIXEL] = len(bits) - if extra is not None: - ifd[EXTRASAMPLES] = extra - if format != 1: - ifd[SAMPLEFORMAT] = format - - if PHOTOMETRIC_INTERPRETATION not in ifd: - 
ifd[PHOTOMETRIC_INTERPRETATION] = photo - elif im.mode in ("1", "L") and ifd[PHOTOMETRIC_INTERPRETATION] == 0: - if im.mode == "1": - inverted_im = im.copy() - px = inverted_im.load() - if px is not None: - for y in range(inverted_im.height): - for x in range(inverted_im.width): - px[x, y] = 0 if px[x, y] == 255 else 255 - im = inverted_im - else: - im = ImageOps.invert(im) - - if im.mode in ["P", "PA"]: - lut = im.im.getpalette("RGB", "RGB;L") - colormap = [] - colors = len(lut) // 3 - for i in range(3): - colormap += [v * 256 for v in lut[colors * i : colors * (i + 1)]] - colormap += [0] * (256 - colors) - ifd[COLORMAP] = colormap - # data orientation - w, h = ifd[IMAGEWIDTH], ifd[IMAGELENGTH] - stride = len(bits) * ((w * bits[0] + 7) // 8) - if ROWSPERSTRIP not in ifd: - # aim for given strip size (64 KB by default) when using libtiff writer - if libtiff: - im_strip_size = encoderinfo.get("strip_size", STRIP_SIZE) - rows_per_strip = 1 if stride == 0 else min(im_strip_size // stride, h) - # JPEG encoder expects multiple of 8 rows - if compression == "jpeg": - rows_per_strip = min(((rows_per_strip + 7) // 8) * 8, h) - else: - rows_per_strip = h - if rows_per_strip == 0: - rows_per_strip = 1 - ifd[ROWSPERSTRIP] = rows_per_strip - strip_byte_counts = 1 if stride == 0 else stride * ifd[ROWSPERSTRIP] - strips_per_image = (h + ifd[ROWSPERSTRIP] - 1) // ifd[ROWSPERSTRIP] - if strip_byte_counts >= 2**16: - ifd.tagtype[STRIPBYTECOUNTS] = TiffTags.LONG - ifd[STRIPBYTECOUNTS] = (strip_byte_counts,) * (strips_per_image - 1) + ( - stride * h - strip_byte_counts * (strips_per_image - 1), - ) - ifd[STRIPOFFSETS] = tuple( - range(0, strip_byte_counts * strips_per_image, strip_byte_counts) - ) # this is adjusted by IFD writer - # no compression by default: - ifd[COMPRESSION] = COMPRESSION_INFO_REV.get(compression, 1) - - if im.mode == "YCbCr": - for tag, default_value in { - YCBCRSUBSAMPLING: (1, 1), - REFERENCEBLACKWHITE: (0, 255, 128, 255, 128, 255), - }.items(): - 
ifd.setdefault(tag, default_value) - - blocklist = [TILEWIDTH, TILELENGTH, TILEOFFSETS, TILEBYTECOUNTS] - if libtiff: - if "quality" in encoderinfo: - quality = encoderinfo["quality"] - if not isinstance(quality, int) or quality < 0 or quality > 100: - msg = "Invalid quality setting" - raise ValueError(msg) - if compression != "jpeg": - msg = "quality setting only supported for 'jpeg' compression" - raise ValueError(msg) - ifd[JPEGQUALITY] = quality - - logger.debug("Saving using libtiff encoder") - logger.debug("Items: %s", sorted(ifd.items())) - _fp = 0 - if hasattr(fp, "fileno"): - try: - fp.seek(0) - _fp = fp.fileno() - except io.UnsupportedOperation: - pass - - # optional types for non core tags - types = {} - # STRIPOFFSETS and STRIPBYTECOUNTS are added by the library - # based on the data in the strip. - # OSUBFILETYPE is deprecated. - # The other tags expect arrays with a certain length (fixed or depending on - # BITSPERSAMPLE, etc), passing arrays with a different length will result in - # segfaults. Block these tags until we add extra validation. - # SUBIFD may also cause a segfault. - blocklist += [ - OSUBFILETYPE, - REFERENCEBLACKWHITE, - STRIPBYTECOUNTS, - STRIPOFFSETS, - TRANSFERFUNCTION, - SUBIFD, - ] - - # bits per sample is a single short in the tiff directory, not a list. - atts: dict[int, Any] = {BITSPERSAMPLE: bits[0]} - # Merge the ones that we have with (optional) more bits from - # the original file, e.g x,y resolution so that we can - # save(load('')) == original file. - for tag, value in itertools.chain(ifd.items(), supplied_tags.items()): - # Libtiff can only process certain core items without adding - # them to the custom dictionary. - # Custom items are supported for int, float, unicode, string and byte - # values. Other types and tuples require a tagtype. 
- if tag not in TiffTags.LIBTIFF_CORE: - if tag in TiffTags.TAGS_V2_GROUPS: - types[tag] = TiffTags.LONG8 - elif tag in ifd.tagtype: - types[tag] = ifd.tagtype[tag] - elif isinstance(value, (int, float, str, bytes)) or ( - isinstance(value, tuple) - and all(isinstance(v, (int, float, IFDRational)) for v in value) - ): - type = TiffTags.lookup(tag).type - if type: - types[tag] = type - if tag not in atts and tag not in blocklist: - if isinstance(value, str): - atts[tag] = value.encode("ascii", "replace") + b"\0" - elif isinstance(value, IFDRational): - atts[tag] = float(value) - else: - atts[tag] = value - - if SAMPLEFORMAT in atts and len(atts[SAMPLEFORMAT]) == 1: - atts[SAMPLEFORMAT] = atts[SAMPLEFORMAT][0] - - logger.debug("Converted items: %s", sorted(atts.items())) - - # libtiff always expects the bytes in native order. - # we're storing image byte order. So, if the rawmode - # contains I;16, we need to convert from native to image - # byte order. - if im.mode in ("I;16", "I;16B", "I;16L"): - rawmode = "I;16N" - - # Pass tags as sorted list so that the tags are set in a fixed order. - # This is required by libtiff for some tags. For example, the JPEGQUALITY - # pseudo tag requires that the COMPRESS tag was already set. 
- tags = list(atts.items()) - tags.sort() - a = (rawmode, compression, _fp, filename, tags, types) - encoder = Image._getencoder(im.mode, "libtiff", a, encoderconfig) - encoder.setimage(im.im, (0, 0) + im.size) - while True: - errcode, data = encoder.encode(ImageFile.MAXBLOCK)[1:] - if not _fp: - fp.write(data) - if errcode: - break - if errcode < 0: - msg = f"encoder error {errcode} when writing image file" - raise OSError(msg) - - else: - for tag in blocklist: - del ifd[tag] - offset = ifd.save(fp) - - ImageFile._save( - im, - fp, - [ImageFile._Tile("raw", (0, 0) + im.size, offset, (rawmode, stride, 1))], - ) - - # -- helper for multi-page save -- - if "_debug_multipage" in encoderinfo: - # just to access o32 and o16 (using correct byte order) - setattr(im, "_debug_multipage", ifd) - - -class AppendingTiffWriter(io.BytesIO): - fieldSizes = [ - 0, # None - 1, # byte - 1, # ascii - 2, # short - 4, # long - 8, # rational - 1, # sbyte - 1, # undefined - 2, # sshort - 4, # slong - 8, # srational - 4, # float - 8, # double - 4, # ifd - 2, # unicode - 4, # complex - 8, # long8 - ] - - Tags = { - 273, # StripOffsets - 288, # FreeOffsets - 324, # TileOffsets - 519, # JPEGQTables - 520, # JPEGDCTables - 521, # JPEGACTables - } - - def __init__(self, fn: StrOrBytesPath | IO[bytes], new: bool = False) -> None: - self.f: IO[bytes] - if is_path(fn): - self.name = fn - self.close_fp = True - try: - self.f = open(fn, "w+b" if new else "r+b") - except OSError: - self.f = open(fn, "w+b") - else: - self.f = cast(IO[bytes], fn) - self.close_fp = False - self.beginning = self.f.tell() - self.setup() - - def setup(self) -> None: - # Reset everything. 
- self.f.seek(self.beginning, os.SEEK_SET) - - self.whereToWriteNewIFDOffset: int | None = None - self.offsetOfNewPage = 0 - - self.IIMM = iimm = self.f.read(4) - self._bigtiff = b"\x2b" in iimm - if not iimm: - # empty file - first page - self.isFirst = True - return - - self.isFirst = False - if iimm not in PREFIXES: - msg = "Invalid TIFF file header" - raise RuntimeError(msg) - - self.setEndian("<" if iimm.startswith(II) else ">") - - if self._bigtiff: - self.f.seek(4, os.SEEK_CUR) - self.skipIFDs() - self.goToEnd() - - def finalize(self) -> None: - if self.isFirst: - return - - # fix offsets - self.f.seek(self.offsetOfNewPage) - - iimm = self.f.read(4) - if not iimm: - # Make it easy to finish a frame without committing to a new one. - return - - if iimm != self.IIMM: - msg = "IIMM of new page doesn't match IIMM of first page" - raise RuntimeError(msg) - - if self._bigtiff: - self.f.seek(4, os.SEEK_CUR) - ifd_offset = self._read(8 if self._bigtiff else 4) - ifd_offset += self.offsetOfNewPage - assert self.whereToWriteNewIFDOffset is not None - self.f.seek(self.whereToWriteNewIFDOffset) - self._write(ifd_offset, 8 if self._bigtiff else 4) - self.f.seek(ifd_offset) - self.fixIFD() - - def newFrame(self) -> None: - # Call this to finish a frame. - self.finalize() - self.setup() - - def __enter__(self) -> AppendingTiffWriter: - return self - - def __exit__(self, *args: object) -> None: - if self.close_fp: - self.close() - - def tell(self) -> int: - return self.f.tell() - self.offsetOfNewPage - - def seek(self, offset: int, whence: int = io.SEEK_SET) -> int: - """ - :param offset: Distance to seek. - :param whence: Whether the distance is relative to the start, - end or current position. - :returns: The resulting position, relative to the start. 
- """ - if whence == os.SEEK_SET: - offset += self.offsetOfNewPage - - self.f.seek(offset, whence) - return self.tell() - - def goToEnd(self) -> None: - self.f.seek(0, os.SEEK_END) - pos = self.f.tell() - - # pad to 16 byte boundary - pad_bytes = 16 - pos % 16 - if 0 < pad_bytes < 16: - self.f.write(bytes(pad_bytes)) - self.offsetOfNewPage = self.f.tell() - - def setEndian(self, endian: str) -> None: - self.endian = endian - self.longFmt = f"{self.endian}L" - self.shortFmt = f"{self.endian}H" - self.tagFormat = f"{self.endian}HH" + ("Q" if self._bigtiff else "L") - - def skipIFDs(self) -> None: - while True: - ifd_offset = self._read(8 if self._bigtiff else 4) - if ifd_offset == 0: - self.whereToWriteNewIFDOffset = self.f.tell() - ( - 8 if self._bigtiff else 4 - ) - break - - self.f.seek(ifd_offset) - num_tags = self._read(8 if self._bigtiff else 2) - self.f.seek(num_tags * (20 if self._bigtiff else 12), os.SEEK_CUR) - - def write(self, data: Buffer, /) -> int: - return self.f.write(data) - - def _fmt(self, field_size: int) -> str: - try: - return {2: "H", 4: "L", 8: "Q"}[field_size] - except KeyError: - msg = "offset is not supported" - raise RuntimeError(msg) - - def _read(self, field_size: int) -> int: - (value,) = struct.unpack( - self.endian + self._fmt(field_size), self.f.read(field_size) - ) - return value - - def readShort(self) -> int: - return self._read(2) - - def readLong(self) -> int: - return self._read(4) - - @staticmethod - def _verify_bytes_written(bytes_written: int | None, expected: int) -> None: - if bytes_written is not None and bytes_written != expected: - msg = f"wrote only {bytes_written} bytes but wanted {expected}" - raise RuntimeError(msg) - - def _rewriteLast( - self, value: int, field_size: int, new_field_size: int = 0 - ) -> None: - self.f.seek(-field_size, os.SEEK_CUR) - if not new_field_size: - new_field_size = field_size - bytes_written = self.f.write( - struct.pack(self.endian + self._fmt(new_field_size), value) - ) - 
self._verify_bytes_written(bytes_written, new_field_size) - - def rewriteLastShortToLong(self, value: int) -> None: - self._rewriteLast(value, 2, 4) - - def rewriteLastShort(self, value: int) -> None: - return self._rewriteLast(value, 2) - - def rewriteLastLong(self, value: int) -> None: - return self._rewriteLast(value, 4) - - def _write(self, value: int, field_size: int) -> None: - bytes_written = self.f.write( - struct.pack(self.endian + self._fmt(field_size), value) - ) - self._verify_bytes_written(bytes_written, field_size) - - def writeShort(self, value: int) -> None: - self._write(value, 2) - - def writeLong(self, value: int) -> None: - self._write(value, 4) - - def close(self) -> None: - self.finalize() - if self.close_fp: - self.f.close() - - def fixIFD(self) -> None: - num_tags = self._read(8 if self._bigtiff else 2) - - for i in range(num_tags): - tag, field_type, count = struct.unpack( - self.tagFormat, self.f.read(12 if self._bigtiff else 8) - ) - - field_size = self.fieldSizes[field_type] - total_size = field_size * count - fmt_size = 8 if self._bigtiff else 4 - is_local = total_size <= fmt_size - if not is_local: - offset = self._read(fmt_size) + self.offsetOfNewPage - self._rewriteLast(offset, fmt_size) - - if tag in self.Tags: - cur_pos = self.f.tell() - - logger.debug( - "fixIFD: %s (%d) - type: %s (%d) - type size: %d - count: %d", - TiffTags.lookup(tag).name, - tag, - TYPES.get(field_type, "unknown"), - field_type, - field_size, - count, - ) - - if is_local: - self._fixOffsets(count, field_size) - self.f.seek(cur_pos + fmt_size) - else: - self.f.seek(offset) - self._fixOffsets(count, field_size) - self.f.seek(cur_pos) - - elif is_local: - # skip the locally stored value that is not an offset - self.f.seek(fmt_size, os.SEEK_CUR) - - def _fixOffsets(self, count: int, field_size: int) -> None: - for i in range(count): - offset = self._read(field_size) - offset += self.offsetOfNewPage - - new_field_size = 0 - if self._bigtiff and field_size in (2, 
4) and offset >= 2**32: - # offset is now too large - we must convert long to long8 - new_field_size = 8 - elif field_size == 2 and offset >= 2**16: - # offset is now too large - we must convert short to long - new_field_size = 4 - if new_field_size: - if count != 1: - msg = "not implemented" - raise RuntimeError(msg) # XXX TODO - - # simple case - the offset is just one and therefore it is - # local (not referenced with another offset) - self._rewriteLast(offset, field_size, new_field_size) - # Move back past the new offset, past 'count', and before 'field_type' - rewind = -new_field_size - 4 - 2 - self.f.seek(rewind, os.SEEK_CUR) - self.writeShort(new_field_size) # rewrite the type - self.f.seek(2 - rewind, os.SEEK_CUR) - else: - self._rewriteLast(offset, field_size) - - def fixOffsets( - self, count: int, isShort: bool = False, isLong: bool = False - ) -> None: - if isShort: - field_size = 2 - elif isLong: - field_size = 4 - else: - field_size = 0 - return self._fixOffsets(count, field_size) - - -def _save_all(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - append_images = list(im.encoderinfo.get("append_images", [])) - if not hasattr(im, "n_frames") and not append_images: - return _save(im, fp, filename) - - cur_idx = im.tell() - try: - with AppendingTiffWriter(fp) as tf: - for ims in [im] + append_images: - encoderinfo = ims._attach_default_encoderinfo(im) - if not hasattr(ims, "encoderconfig"): - ims.encoderconfig = () - nfr = getattr(ims, "n_frames", 1) - - for idx in range(nfr): - ims.seek(idx) - ims.load() - _save(ims, tf, filename) - tf.newFrame() - ims.encoderinfo = encoderinfo - finally: - im.seek(cur_idx) - - -# -# -------------------------------------------------------------------- -# Register - -Image.register_open(TiffImageFile.format, TiffImageFile, _accept) -Image.register_save(TiffImageFile.format, _save) -Image.register_save_all(TiffImageFile.format, _save_all) - -Image.register_extensions(TiffImageFile.format, [".tif", 
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/TiffTags.py b/pptx-env/lib/python3.12/site-packages/PIL/TiffTags.py
deleted file mode 100644
index 761aa3f6..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/TiffTags.py
+++ /dev/null
@@ -1,567 +0,0 @@
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/WalImageFile.py b/pptx-env/lib/python3.12/site-packages/PIL/WalImageFile.py
deleted file mode 100644
index 5494f62e..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/WalImageFile.py
+++ /dev/null
@@ -1,126 +0,0 @@
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/WebPImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/WebPImagePlugin.py
deleted file mode 100644
index 2847fed2..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/WebPImagePlugin.py
+++ /dev/null
@@ -1,322 +0,0 @@
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/WmfImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/WmfImagePlugin.py
deleted file mode 100644
index de714d33..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/WmfImagePlugin.py
+++ /dev/null
@@ -1,186 +0,0 @@
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/XVThumbImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/XVThumbImagePlugin.py
deleted file mode 100644
index cde28388..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/XVThumbImagePlugin.py
+++ /dev/null
@@ -1,83 +0,0 @@
diff --git a/pptx-env/lib/python3.12/site-packages/PIL/XbmImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/XbmImagePlugin.py
deleted file mode 100644
index 1e57aa16..00000000
--- a/pptx-env/lib/python3.12/site-packages/PIL/XbmImagePlugin.py
+++ /dev/null
@@ -1,98 +0,0 @@
- - -class XbmImageFile(ImageFile.ImageFile): - format = "XBM" - format_description = "X11 Bitmap" - - def _open(self) -> None: - assert self.fp is not None - - m = xbm_head.match(self.fp.read(512)) - - if not m: - msg = "not a XBM file" - raise SyntaxError(msg) - - xsize = int(m.group("width")) - ysize = int(m.group("height")) - - if m.group("hotspot"): - self.info["hotspot"] = (int(m.group("xhot")), int(m.group("yhot"))) - - self._mode = "1" - self._size = xsize, ysize - - self.tile = [ImageFile._Tile("xbm", (0, 0) + self.size, m.end())] - - -def _save(im: Image.Image, fp: IO[bytes], filename: str | bytes) -> None: - if im.mode != "1": - msg = f"cannot write mode {im.mode} as XBM" - raise OSError(msg) - - fp.write(f"#define im_width {im.size[0]}\n".encode("ascii")) - fp.write(f"#define im_height {im.size[1]}\n".encode("ascii")) - - hotspot = im.encoderinfo.get("hotspot") - if hotspot: - fp.write(f"#define im_x_hot {hotspot[0]}\n".encode("ascii")) - fp.write(f"#define im_y_hot {hotspot[1]}\n".encode("ascii")) - - fp.write(b"static char im_bits[] = {\n") - - ImageFile._save(im, fp, [ImageFile._Tile("xbm", (0, 0) + im.size)]) - - fp.write(b"};\n") - - -Image.register_open(XbmImageFile.format, XbmImageFile, _accept) -Image.register_save(XbmImageFile.format, _save) - -Image.register_extension(XbmImageFile.format, ".xbm") - -Image.register_mime(XbmImageFile.format, "image/xbm") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/XpmImagePlugin.py b/pptx-env/lib/python3.12/site-packages/PIL/XpmImagePlugin.py deleted file mode 100644 index 3be240fb..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/XpmImagePlugin.py +++ /dev/null @@ -1,157 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# XPM File handling -# -# History: -# 1996-12-29 fl Created -# 2001-02-17 fl Use 're' instead of 'regex' (Python 2.1) (0.7) -# -# Copyright (c) Secret Labs AB 1997-2001. -# Copyright (c) Fredrik Lundh 1996-2001. 
-# -# See the README file for information on usage and redistribution. -# -from __future__ import annotations - -import re - -from . import Image, ImageFile, ImagePalette -from ._binary import o8 - -# XPM header -xpm_head = re.compile(b'"([0-9]*) ([0-9]*) ([0-9]*) ([0-9]*)') - - -def _accept(prefix: bytes) -> bool: - return prefix.startswith(b"/* XPM */") - - -## -# Image plugin for X11 pixel maps. - - -class XpmImageFile(ImageFile.ImageFile): - format = "XPM" - format_description = "X11 Pixel Map" - - def _open(self) -> None: - assert self.fp is not None - if not _accept(self.fp.read(9)): - msg = "not an XPM file" - raise SyntaxError(msg) - - # skip forward to next string - while True: - line = self.fp.readline() - if not line: - msg = "broken XPM file" - raise SyntaxError(msg) - m = xpm_head.match(line) - if m: - break - - self._size = int(m.group(1)), int(m.group(2)) - - palette_length = int(m.group(3)) - bpp = int(m.group(4)) - - # - # load palette description - - palette = {} - - for _ in range(palette_length): - line = self.fp.readline().rstrip() - - c = line[1 : bpp + 1] - s = line[bpp + 1 : -2].split() - - for i in range(0, len(s), 2): - if s[i] == b"c": - # process colour key - rgb = s[i + 1] - if rgb == b"None": - self.info["transparency"] = c - elif rgb.startswith(b"#"): - rgb_int = int(rgb[1:], 16) - palette[c] = ( - o8((rgb_int >> 16) & 255) - + o8((rgb_int >> 8) & 255) - + o8(rgb_int & 255) - ) - else: - # unknown colour - msg = "cannot read this XPM file" - raise ValueError(msg) - break - - else: - # missing colour key - msg = "cannot read this XPM file" - raise ValueError(msg) - - args: tuple[int, dict[bytes, bytes] | tuple[bytes, ...]] - if palette_length > 256: - self._mode = "RGB" - args = (bpp, palette) - else: - self._mode = "P" - self.palette = ImagePalette.raw("RGB", b"".join(palette.values())) - args = (bpp, tuple(palette.keys())) - - self.tile = [ImageFile._Tile("xpm", (0, 0) + self.size, self.fp.tell(), args)] - - def load_read(self, 
read_bytes: int) -> bytes: - # - # load all image data in one chunk - - xsize, ysize = self.size - - assert self.fp is not None - s = [self.fp.readline()[1 : xsize + 1].ljust(xsize) for i in range(ysize)] - - return b"".join(s) - - -class XpmDecoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer: bytes | Image.SupportsArrayInterface) -> tuple[int, int]: - assert self.fd is not None - - data = bytearray() - bpp, palette = self.args - dest_length = self.state.xsize * self.state.ysize - if self.mode == "RGB": - dest_length *= 3 - pixel_header = False - while len(data) < dest_length: - line = self.fd.readline() - if not line: - break - if line.rstrip() == b"/* pixels */" and not pixel_header: - pixel_header = True - continue - line = b'"'.join(line.split(b'"')[1:-1]) - for i in range(0, len(line), bpp): - key = line[i : i + bpp] - if self.mode == "RGB": - data += palette[key] - else: - data += o8(palette.index(key)) - self.set_as_raw(bytes(data)) - return -1, 0 - - -# -# Registry - - -Image.register_open(XpmImageFile.format, XpmImageFile, _accept) -Image.register_decoder("xpm", XpmDecoder) - -Image.register_extension(XpmImageFile.format, ".xpm") - -Image.register_mime(XpmImageFile.format, "image/xpm") diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__init__.py b/pptx-env/lib/python3.12/site-packages/PIL/__init__.py deleted file mode 100644 index 6e4c23f8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/__init__.py +++ /dev/null @@ -1,87 +0,0 @@ -"""Pillow (Fork of the Python Imaging Library) - -Pillow is the friendly PIL fork by Jeffrey A. Clark and contributors. - https://github.com/python-pillow/Pillow/ - -Pillow is forked from PIL 1.1.7. - -PIL is the Python Imaging Library by Fredrik Lundh and contributors. -Copyright (c) 1999 by Secret Labs AB. - -Use PIL.__version__ for this Pillow version. - -;-) -""" - -from __future__ import annotations - -from . import _version - -# VERSION was removed in Pillow 6.0.0. 
-# PILLOW_VERSION was removed in Pillow 9.0.0. -# Use __version__ instead. -__version__ = _version.__version__ -del _version - - -_plugins = [ - "AvifImagePlugin", - "BlpImagePlugin", - "BmpImagePlugin", - "BufrStubImagePlugin", - "CurImagePlugin", - "DcxImagePlugin", - "DdsImagePlugin", - "EpsImagePlugin", - "FitsImagePlugin", - "FliImagePlugin", - "FpxImagePlugin", - "FtexImagePlugin", - "GbrImagePlugin", - "GifImagePlugin", - "GribStubImagePlugin", - "Hdf5StubImagePlugin", - "IcnsImagePlugin", - "IcoImagePlugin", - "ImImagePlugin", - "ImtImagePlugin", - "IptcImagePlugin", - "JpegImagePlugin", - "Jpeg2KImagePlugin", - "McIdasImagePlugin", - "MicImagePlugin", - "MpegImagePlugin", - "MpoImagePlugin", - "MspImagePlugin", - "PalmImagePlugin", - "PcdImagePlugin", - "PcxImagePlugin", - "PdfImagePlugin", - "PixarImagePlugin", - "PngImagePlugin", - "PpmImagePlugin", - "PsdImagePlugin", - "QoiImagePlugin", - "SgiImagePlugin", - "SpiderImagePlugin", - "SunImagePlugin", - "TgaImagePlugin", - "TiffImagePlugin", - "WebPImagePlugin", - "WmfImagePlugin", - "XbmImagePlugin", - "XpmImagePlugin", - "XVThumbImagePlugin", -] - - -class UnidentifiedImageError(OSError): - """ - Raised in :py:meth:`PIL.Image.open` if an image cannot be opened and identified. - - If a PNG image raises this error, setting :data:`.ImageFile.LOAD_TRUNCATED_IMAGES` - to true may allow the image to be opened after all. The setting will ignore missing - data and checksum failures. 
- """ - - pass diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__main__.py b/pptx-env/lib/python3.12/site-packages/PIL/__main__.py deleted file mode 100644 index 043156e8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/__main__.py +++ /dev/null @@ -1,7 +0,0 @@ -from __future__ import annotations - -import sys - -from .features import pilinfo - -pilinfo(supported_formats="--report" not in sys.argv) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/AvifImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/AvifImagePlugin.cpython-312.pyc deleted file mode 100644 index 9d9319d8..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/AvifImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BdfFontFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BdfFontFile.cpython-312.pyc deleted file mode 100644 index ba9cb7aa..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BdfFontFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BlpImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BlpImagePlugin.cpython-312.pyc deleted file mode 100644 index d3238c9c..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BlpImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BmpImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BmpImagePlugin.cpython-312.pyc deleted file mode 100644 index b197036c..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BmpImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BufrStubImagePlugin.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BufrStubImagePlugin.cpython-312.pyc deleted file mode 100644 index ab4644e0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/BufrStubImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ContainerIO.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ContainerIO.cpython-312.pyc deleted file mode 100644 index e7ac0b85..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ContainerIO.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/CurImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/CurImagePlugin.cpython-312.pyc deleted file mode 100644 index 37aa7a82..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/CurImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DcxImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DcxImagePlugin.cpython-312.pyc deleted file mode 100644 index 470b6265..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DcxImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DdsImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DdsImagePlugin.cpython-312.pyc deleted file mode 100644 index 68ad1aa3..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/DdsImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/EpsImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/EpsImagePlugin.cpython-312.pyc deleted file mode 100644 index f6329fb3..00000000 Binary files 
a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/EpsImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ExifTags.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ExifTags.cpython-312.pyc deleted file mode 100644 index 282acd95..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ExifTags.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FitsImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FitsImagePlugin.cpython-312.pyc deleted file mode 100644 index f8be5be9..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FitsImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FliImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FliImagePlugin.cpython-312.pyc deleted file mode 100644 index 277999e0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FliImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FontFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FontFile.cpython-312.pyc deleted file mode 100644 index cea20cb8..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FontFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FpxImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FpxImagePlugin.cpython-312.pyc deleted file mode 100644 index 490ddd04..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FpxImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FtexImagePlugin.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FtexImagePlugin.cpython-312.pyc deleted file mode 100644 index 225f5f5a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/FtexImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GbrImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GbrImagePlugin.cpython-312.pyc deleted file mode 100644 index d19756b0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GbrImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GdImageFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GdImageFile.cpython-312.pyc deleted file mode 100644 index f2561763..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GdImageFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GifImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GifImagePlugin.cpython-312.pyc deleted file mode 100644 index fe9bec26..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GifImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpGradientFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpGradientFile.cpython-312.pyc deleted file mode 100644 index 29aea2c2..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpGradientFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpPaletteFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpPaletteFile.cpython-312.pyc deleted file mode 100644 index bc191b6a..00000000 Binary files 
a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GimpPaletteFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GribStubImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GribStubImagePlugin.cpython-312.pyc deleted file mode 100644 index 418a79ac..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/GribStubImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Hdf5StubImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Hdf5StubImagePlugin.cpython-312.pyc deleted file mode 100644 index 0fa6b580..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Hdf5StubImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcnsImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcnsImagePlugin.cpython-312.pyc deleted file mode 100644 index 4303cc33..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcnsImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcoImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcoImagePlugin.cpython-312.pyc deleted file mode 100644 index 42f3c8b7..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/IcoImagePlugin.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImImagePlugin.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImImagePlugin.cpython-312.pyc deleted file mode 100644 index d94c9e3a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImImagePlugin.cpython-312.pyc and /dev/null differ diff --git 
a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Image.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Image.cpython-312.pyc deleted file mode 100644 index 89195ff0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/Image.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageChops.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageChops.cpython-312.pyc deleted file mode 100644 index 37acb9ef..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageChops.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageCms.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageCms.cpython-312.pyc deleted file mode 100644 index a5d0dbbb..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageCms.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageColor.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageColor.cpython-312.pyc deleted file mode 100644 index beb8777a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageColor.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw.cpython-312.pyc deleted file mode 100644 index 4e6ea0e1..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw2.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw2.cpython-312.pyc deleted file mode 100644 index 6868b087..00000000 Binary files 
a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageDraw2.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageEnhance.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageEnhance.cpython-312.pyc deleted file mode 100644 index 5b740827..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageEnhance.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFile.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFile.cpython-312.pyc deleted file mode 100644 index 49197eff..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFile.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFilter.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFilter.cpython-312.pyc deleted file mode 100644 index 68251ddb..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFilter.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFont.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFont.cpython-312.pyc deleted file mode 100644 index 14c437e0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageFont.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageGrab.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageGrab.cpython-312.pyc deleted file mode 100644 index 38aa1bcd..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageGrab.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMath.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMath.cpython-312.pyc 
deleted file mode 100644 index ec332535..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMath.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMode.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMode.cpython-312.pyc deleted file mode 100644 index 8b64a241..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMode.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMorph.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMorph.cpython-312.pyc deleted file mode 100644 index 493847cf..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageMorph.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageOps.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageOps.cpython-312.pyc deleted file mode 100644 index 53224535..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageOps.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePalette.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePalette.cpython-312.pyc deleted file mode 100644 index 551ba3aa..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePalette.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePath.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePath.cpython-312.pyc deleted file mode 100644 index 72c4ac4f..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImagePath.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageQt.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageQt.cpython-312.pyc deleted file mode 100644 index c0d0d7a7..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageQt.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageSequence.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageSequence.cpython-312.pyc deleted file mode 100644 index bc985bf5..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageSequence.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageShow.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageShow.cpython-312.pyc deleted file mode 100644 index e88e4e77..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageShow.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageStat.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageStat.cpython-312.pyc deleted file mode 100644 index 6aa760b9..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageStat.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageText.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageText.cpython-312.pyc deleted file mode 100644 index fb66bb6e..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageText.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageTk.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageTk.cpython-312.pyc deleted file mode 100644 index 1def7b11..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PIL/__pycache__/ImageTk.cpython-312.pyc and /dev/null differ diff --git 
- """ - if not check_feature(feature): - return None - - module, flag, ver = features[feature] - - if ver is None: - return None - - return getattr(__import__(module, fromlist=[ver]), ver) - - -def get_supported_features() -> list[str]: - """ - :returns: A list of all supported features. - """ - return [f for f in features if check_feature(f)] - - -def check(feature: str) -> bool | None: - """ - :param feature: A module, codec, or feature name. - :returns: - ``True`` if the module, codec, or feature is available, - ``False`` or ``None`` otherwise. - """ - - if feature in modules: - return check_module(feature) - if feature in codecs: - return check_codec(feature) - if feature in features: - return check_feature(feature) - warnings.warn(f"Unknown feature '{feature}'.", stacklevel=2) - return False - - -def version(feature: str) -> str | None: - """ - :param feature: - The module, codec, or feature to check for. - :returns: - The version number as a string, or ``None`` if unknown or not available. - """ - if feature in modules: - return version_module(feature) - if feature in codecs: - return version_codec(feature) - if feature in features: - return version_feature(feature) - return None - - -def get_supported() -> list[str]: - """ - :returns: A list of all supported modules, features, and codecs. - """ - - ret = get_supported_modules() - ret.extend(get_supported_features()) - ret.extend(get_supported_codecs()) - return ret - - -def pilinfo(out: IO[str] | None = None, supported_formats: bool = True) -> None: - """ - Prints information about this installation of Pillow. - This function can be called with ``python3 -m PIL``. - It can also be called with ``python3 -m PIL.report`` or ``python3 -m PIL --report`` - to have "supported_formats" set to ``False``, omitting the list of all supported - image file formats. - - :param out: - The output stream to print to. Defaults to ``sys.stdout`` if ``None``. 
- :param supported_formats: - If ``True``, a list of all supported image file formats will be printed. - """ - - if out is None: - out = sys.stdout - - Image.init() - - print("-" * 68, file=out) - print(f"Pillow {PIL.__version__}", file=out) - py_version_lines = sys.version.splitlines() - print(f"Python {py_version_lines[0].strip()}", file=out) - for py_version in py_version_lines[1:]: - print(f" {py_version.strip()}", file=out) - print("-" * 68, file=out) - print(f"Python executable is {sys.executable or 'unknown'}", file=out) - if sys.prefix != sys.base_prefix: - print(f"Environment Python files loaded from {sys.prefix}", file=out) - print(f"System Python files loaded from {sys.base_prefix}", file=out) - print("-" * 68, file=out) - print( - f"Python Pillow modules loaded from {os.path.dirname(Image.__file__)}", - file=out, - ) - print( - f"Binary Pillow modules loaded from {os.path.dirname(Image.core.__file__)}", - file=out, - ) - print("-" * 68, file=out) - - for name, feature in [ - ("pil", "PIL CORE"), - ("tkinter", "TKINTER"), - ("freetype2", "FREETYPE2"), - ("littlecms2", "LITTLECMS2"), - ("webp", "WEBP"), - ("avif", "AVIF"), - ("jpg", "JPEG"), - ("jpg_2000", "OPENJPEG (JPEG2000)"), - ("zlib", "ZLIB (PNG/ZIP)"), - ("libtiff", "LIBTIFF"), - ("raqm", "RAQM (Bidirectional Text)"), - ("libimagequant", "LIBIMAGEQUANT (Quantization method)"), - ("xcb", "XCB (X protocol)"), - ]: - if check(name): - v: str | None = None - if name == "jpg": - libjpeg_turbo_version = version_feature("libjpeg_turbo") - if libjpeg_turbo_version is not None: - v = "mozjpeg" if check_feature("mozjpeg") else "libjpeg-turbo" - v += " " + libjpeg_turbo_version - if v is None: - v = version(name) - if v is not None: - version_static = name in ("pil", "jpg") - if name == "littlecms2": - # this check is also in src/_imagingcms.c:setup_module() - version_static = tuple(int(x) for x in v.split(".")) < (2, 7) - t = "compiled for" if version_static else "loaded" - if name == "zlib": - 
zlib_ng_version = version_feature("zlib_ng") - if zlib_ng_version is not None: - v += ", compiled for zlib-ng " + zlib_ng_version - elif name == "raqm": - for f in ("fribidi", "harfbuzz"): - v2 = version_feature(f) - if v2 is not None: - v += f", {f} {v2}" - print("---", feature, "support ok,", t, v, file=out) - else: - print("---", feature, "support ok", file=out) - else: - print("***", feature, "support not installed", file=out) - print("-" * 68, file=out) - - if supported_formats: - extensions = collections.defaultdict(list) - for ext, i in Image.EXTENSION.items(): - extensions[i].append(ext) - - for i in sorted(Image.ID): - line = f"{i}" - if i in Image.MIME: - line = f"{line} {Image.MIME[i]}" - print(line, file=out) - - if i in extensions: - print( - "Extensions: {}".format(", ".join(sorted(extensions[i]))), file=out - ) - - features = [] - if i in Image.OPEN: - features.append("open") - if i in Image.SAVE: - features.append("save") - if i in Image.SAVE_ALL: - features.append("save_all") - if i in Image.DECODERS: - features.append("decode") - if i in Image.ENCODERS: - features.append("encode") - - print("Features: {}".format(", ".join(features)), file=out) - print("-" * 68, file=out) diff --git a/pptx-env/lib/python3.12/site-packages/PIL/py.typed b/pptx-env/lib/python3.12/site-packages/PIL/py.typed deleted file mode 100644 index e69de29b..00000000 diff --git a/pptx-env/lib/python3.12/site-packages/PIL/report.py b/pptx-env/lib/python3.12/site-packages/PIL/report.py deleted file mode 100644 index d2815e84..00000000 --- a/pptx-env/lib/python3.12/site-packages/PIL/report.py +++ /dev/null @@ -1,5 +0,0 @@ -from __future__ import annotations - -from .features import pilinfo - -pilinfo(supported_formats=False) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__init__.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/__init__.py deleted file mode 100644 index 4154ee64..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/__init__.py +++ /dev/null @@ 
-1,41 +0,0 @@ -""" -PyPDF2 is a free and open-source pure-python PDF library capable of splitting, -merging, cropping, and transforming the pages of PDF files. It can also add -custom data, viewing options, and passwords to PDF files. PyPDF2 can retrieve -text and metadata from PDFs as well. - -You can read the full docs at https://pypdf2.readthedocs.io/. -""" - -import warnings - -from ._encryption import PasswordType -from ._merger import PdfFileMerger, PdfMerger -from ._page import PageObject, Transformation -from ._reader import DocumentInformation, PdfFileReader, PdfReader -from ._version import __version__ -from ._writer import PdfFileWriter, PdfWriter -from .pagerange import PageRange, parse_filename_page_ranges -from .papersizes import PaperSize - -warnings.warn( - message="PyPDF2 is deprecated. Please move to the pypdf library instead.", - category=DeprecationWarning, -) - -__all__ = [ - "__version__", - "PageRange", - "PaperSize", - "DocumentInformation", - "parse_filename_page_ranges", - "PdfFileMerger", # will be removed in PyPDF2 3.0.0; use PdfMerger instead - "PdfFileReader", # will be removed in PyPDF2 3.0.0; use PdfReader instead - "PdfFileWriter", # will be removed in PyPDF2 3.0.0; use PdfWriter instead - "PdfMerger", - "PdfReader", - "PdfWriter", - "Transformation", - "PageObject", - "PasswordType", -] diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 1d2486e9..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_cmap.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_cmap.cpython-312.pyc deleted file mode 100644 index 768b40b2..00000000 Binary files 
a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_cmap.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_encryption.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_encryption.cpython-312.pyc deleted file mode 100644 index 47924766..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_encryption.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_merger.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_merger.cpython-312.pyc deleted file mode 100644 index fa274425..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_merger.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_page.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_page.cpython-312.pyc deleted file mode 100644 index d32f0b10..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_page.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_protocols.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_protocols.cpython-312.pyc deleted file mode 100644 index 757c22fa..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_protocols.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_reader.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_reader.cpython-312.pyc deleted file mode 100644 index 6bb658e7..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_reader.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_security.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_security.cpython-312.pyc deleted file mode 100644 index c94de683..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_security.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_utils.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_utils.cpython-312.pyc deleted file mode 100644 index ddf82c81..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_utils.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_version.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_version.cpython-312.pyc deleted file mode 100644 index 28a6b7dc..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_version.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_writer.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_writer.cpython-312.pyc deleted file mode 100644 index f89d0f95..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/_writer.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/constants.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/constants.cpython-312.pyc deleted file mode 100644 index 2fc8c8cb..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/constants.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/errors.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/errors.cpython-312.pyc deleted file mode 100644 index d88527e6..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/errors.cpython-312.pyc and /dev/null differ diff --git 
a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/filters.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/filters.cpython-312.pyc deleted file mode 100644 index 85fc877f..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/filters.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/pagerange.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/pagerange.cpython-312.pyc deleted file mode 100644 index b519ae3d..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/pagerange.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/papersizes.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/papersizes.cpython-312.pyc deleted file mode 100644 index 2e6cda83..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/papersizes.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/types.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/types.cpython-312.pyc deleted file mode 100644 index 86a72d6f..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/types.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/xmp.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/xmp.cpython-312.pyc deleted file mode 100644 index 638900e0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/__pycache__/xmp.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_cmap.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_cmap.py deleted file mode 100644 index db082a82..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_cmap.py +++ /dev/null @@ -1,413 +0,0 @@ -import warnings -from 
binascii import unhexlify -from math import ceil -from typing import Any, Dict, List, Tuple, Union, cast - -from ._codecs import adobe_glyphs, charset_encoding -from ._utils import logger_warning -from .errors import PdfReadWarning -from .generic import DecodedStreamObject, DictionaryObject, StreamObject - - -# code freely inspired from @twiggy ; see #711 -def build_char_map( - font_name: str, space_width: float, obj: DictionaryObject -) -> Tuple[ - str, float, Union[str, Dict[int, str]], Dict, DictionaryObject -]: # font_type,space_width /2, encoding, cmap - """Determine information about a font. - - This function returns a tuple consisting of: - font sub-type, space_width/2, encoding, map character-map, font-dictionary. - The font-dictionary itself is suitable for the curious.""" - ft: DictionaryObject = obj["/Resources"]["/Font"][font_name] # type: ignore - font_type: str = cast(str, ft["/Subtype"]) - - space_code = 32 - encoding, space_code = parse_encoding(ft, space_code) - map_dict, space_code, int_entry = parse_to_unicode(ft, space_code) - - # encoding can be either a string for decode (on 1,2 or a variable number of bytes) of a char table (for 1 byte only for me) - # if empty string, it means it is than encoding field is not present and we have to select the good encoding from cmap input data - if encoding == "": - if -1 not in map_dict or map_dict[-1] == 1: - # I have not been able to find any rule for no /Encoding nor /ToUnicode - # One example shows /Symbol,bold I consider 8 bits encoding default - encoding = "charmap" - else: - encoding = "utf-16-be" - # apply rule from PDF ref 1.7 Β§5.9.1, 1st bullet : if cmap not empty encoding should be discarded (here transformed into identity for those characters) - # if encoding is an str it is expected to be a identity translation - elif isinstance(encoding, dict): - for x in int_entry: - if x <= 255: - encoding[x] = chr(x) - try: - # override space_width with new params - space_width = 
_default_fonts_space_width[cast(str, ft["/BaseFont"])] - except Exception: - pass - # I conside the space_code is available on one byte - if isinstance(space_code, str): - try: # one byte - sp = space_code.encode("charmap")[0] - except Exception: - sp = space_code.encode("utf-16-be") - sp = sp[0] + 256 * sp[1] - else: - sp = space_code - sp_width = compute_space_width(ft, sp, space_width) - - return ( - font_type, - float(sp_width / 2), - encoding, - # https://github.com/python/mypy/issues/4374 - map_dict, - ft, - ) - - -# used when missing data, e.g. font def missing -unknown_char_map: Tuple[str, float, Union[str, Dict[int, str]], Dict[Any, Any]] = ( - "Unknown", - 9999, - dict(zip(range(256), ["οΏ½"] * 256)), - {}, -) - - -_predefined_cmap: Dict[str, str] = { - "/Identity-H": "utf-16-be", - "/Identity-V": "utf-16-be", - "/GB-EUC-H": "gbk", # TBC - "/GB-EUC-V": "gbk", # TBC - "/GBpc-EUC-H": "gb2312", # TBC - "/GBpc-EUC-V": "gb2312", # TBC -} - - -# manually extracted from http://mirrors.ctan.org/fonts/adobe/afm/Adobe-Core35_AFMs-229.tar.gz -_default_fonts_space_width: Dict[str, int] = { - "/Courrier": 600, - "/Courier-Bold": 600, - "/Courier-BoldOblique": 600, - "/Courier-Oblique": 600, - "/Helvetica": 278, - "/Helvetica-Bold": 278, - "/Helvetica-BoldOblique": 278, - "/Helvetica-Oblique": 278, - "/Helvetica-Narrow": 228, - "/Helvetica-NarrowBold": 228, - "/Helvetica-NarrowBoldOblique": 228, - "/Helvetica-NarrowOblique": 228, - "/Times-Roman": 250, - "/Times-Bold": 250, - "/Times-BoldItalic": 250, - "/Times-Italic": 250, - "/Symbol": 250, - "/ZapfDingbats": 278, -} - - -def parse_encoding( - ft: DictionaryObject, space_code: int -) -> Tuple[Union[str, Dict[int, str]], int]: - encoding: Union[str, List[str], Dict[int, str]] = [] - if "/Encoding" not in ft: - try: - if "/BaseFont" in ft and cast(str, ft["/BaseFont"]) in charset_encoding: - encoding = dict( - zip(range(256), charset_encoding[cast(str, ft["/BaseFont"])]) - ) - else: - encoding = "charmap" - return 
encoding, _default_fonts_space_width[cast(str, ft["/BaseFont"])] - except Exception: - if cast(str, ft["/Subtype"]) == "/Type1": - return "charmap", space_code - else: - return "", space_code - enc: Union(str, DictionaryObject) = ft["/Encoding"].get_object() # type: ignore - if isinstance(enc, str): - try: - # allready done : enc = NameObject.unnumber(enc.encode()).decode() # for #xx decoding - if enc in charset_encoding: - encoding = charset_encoding[enc].copy() - elif enc in _predefined_cmap: - encoding = _predefined_cmap[enc] - else: - raise Exception("not found") - except Exception: - warnings.warn( - f"Advanced encoding {enc} not implemented yet", - PdfReadWarning, - ) - encoding = enc - elif isinstance(enc, DictionaryObject) and "/BaseEncoding" in enc: - try: - encoding = charset_encoding[cast(str, enc["/BaseEncoding"])].copy() - except Exception: - warnings.warn( - f"Advanced encoding {encoding} not implemented yet", - PdfReadWarning, - ) - encoding = charset_encoding["/StandardCoding"].copy() - else: - encoding = charset_encoding["/StandardCoding"].copy() - if "/Differences" in enc: - x: int = 0 - o: Union[int, str] - for o in cast(DictionaryObject, cast(DictionaryObject, enc)["/Differences"]): - if isinstance(o, int): - x = o - else: # isinstance(o,str): - try: - encoding[x] = adobe_glyphs[o] # type: ignore - except Exception: - encoding[x] = o # type: ignore - if o == " ": - space_code = x - x += 1 - if isinstance(encoding, list): - encoding = dict(zip(range(256), encoding)) - return encoding, space_code - - -def parse_to_unicode( - ft: DictionaryObject, space_code: int -) -> Tuple[Dict[Any, Any], int, List[int]]: - # will store all translation code - # and map_dict[-1] we will have the number of bytes to convert - map_dict: Dict[Any, Any] = {} - - # will provide the list of cmap keys as int to correct encoding - int_entry: List[int] = [] - - if "/ToUnicode" not in ft: - return {}, space_code, [] - process_rg: bool = False - process_char: bool = False - 
multiline_rg: Union[ - None, Tuple[int, int] - ] = None # tuple = (current_char, remaining size) ; cf #1285 for example of file - cm = prepare_cm(ft) - for l in cm.split(b"\n"): - process_rg, process_char, multiline_rg = process_cm_line( - l.strip(b" "), process_rg, process_char, multiline_rg, map_dict, int_entry - ) - - for a, value in map_dict.items(): - if value == " ": - space_code = a - return map_dict, space_code, int_entry - - -def prepare_cm(ft: DictionaryObject) -> bytes: - tu = ft["/ToUnicode"] - cm: bytes - if isinstance(tu, StreamObject): - cm = cast(DecodedStreamObject, ft["/ToUnicode"]).get_data() - elif isinstance(tu, str) and tu.startswith("/Identity"): - cm = b"beginbfrange\n<0000> <0001> <0000>\nendbfrange" # the full range 0000-FFFF will be processed - if isinstance(cm, str): - cm = cm.encode() - # we need to prepare cm before due to missing return line in pdf printed to pdf from word - cm = ( - cm.strip() - .replace(b"beginbfchar", b"\nbeginbfchar\n") - .replace(b"endbfchar", b"\nendbfchar\n") - .replace(b"beginbfrange", b"\nbeginbfrange\n") - .replace(b"endbfrange", b"\nendbfrange\n") - .replace(b"<<", b"\n{\n") # text between << and >> not used but - .replace(b">>", b"\n}\n") # some solution to find it back - ) - ll = cm.split(b"<") - for i in range(len(ll)): - j = ll[i].find(b">") - if j >= 0: - if j == 0: - # string is empty: stash a placeholder here (see below) - # see https://github.com/py-pdf/PyPDF2/issues/1111 - content = b"." 
- else: - content = ll[i][:j].replace(b" ", b"") - ll[i] = content + b" " + ll[i][j + 1 :] - cm = ( - (b" ".join(ll)) - .replace(b"[", b" [ ") - .replace(b"]", b" ]\n ") - .replace(b"\r", b"\n") - ) - return cm - - -def process_cm_line( - l: bytes, - process_rg: bool, - process_char: bool, - multiline_rg: Union[None, Tuple[int, int]], - map_dict: Dict[Any, Any], - int_entry: List[int], -) -> Tuple[bool, bool, Union[None, Tuple[int, int]]]: - if l in (b"", b" ") or l[0] == 37: # 37 = % - return process_rg, process_char, multiline_rg - if b"beginbfrange" in l: - process_rg = True - elif b"endbfrange" in l: - process_rg = False - elif b"beginbfchar" in l: - process_char = True - elif b"endbfchar" in l: - process_char = False - elif process_rg: - multiline_rg = parse_bfrange(l, map_dict, int_entry, multiline_rg) - elif process_char: - parse_bfchar(l, map_dict, int_entry) - return process_rg, process_char, multiline_rg - - -def parse_bfrange( - l: bytes, - map_dict: Dict[Any, Any], - int_entry: List[int], - multiline_rg: Union[None, Tuple[int, int]], -) -> Union[None, Tuple[int, int]]: - lst = [x for x in l.split(b" ") if x] - closure_found = False - nbi = max(len(lst[0]), len(lst[1])) - map_dict[-1] = ceil(nbi / 2) - fmt = b"%%0%dX" % (map_dict[-1] * 2) - if multiline_rg is not None: - a = multiline_rg[0] # a, b not in the current line - b = multiline_rg[1] - for sq in lst[1:]: - if sq == b"]": - closure_found = True - break - map_dict[ - unhexlify(fmt % a).decode( - "charmap" if map_dict[-1] == 1 else "utf-16-be", - "surrogatepass", - ) - ] = unhexlify(sq).decode("utf-16-be", "surrogatepass") - int_entry.append(a) - a += 1 - else: - a = int(lst[0], 16) - b = int(lst[1], 16) - if lst[2] == b"[": - for sq in lst[3:]: - if sq == b"]": - closure_found = True - break - map_dict[ - unhexlify(fmt % a).decode( - "charmap" if map_dict[-1] == 1 else "utf-16-be", - "surrogatepass", - ) - ] = unhexlify(sq).decode("utf-16-be", "surrogatepass") - int_entry.append(a) - a += 1 - 
else: # case without list - c = int(lst[2], 16) - fmt2 = b"%%0%dX" % max(4, len(lst[2])) - closure_found = True - while a <= b: - map_dict[ - unhexlify(fmt % a).decode( - "charmap" if map_dict[-1] == 1 else "utf-16-be", - "surrogatepass", - ) - ] = unhexlify(fmt2 % c).decode("utf-16-be", "surrogatepass") - int_entry.append(a) - a += 1 - c += 1 - return None if closure_found else (a, b) - - -def parse_bfchar(l: bytes, map_dict: Dict[Any, Any], int_entry: List[int]) -> None: - lst = [x for x in l.split(b" ") if x] - map_dict[-1] = len(lst[0]) // 2 - while len(lst) > 1: - map_to = "" - # placeholder (see above) means empty string - if lst[1] != b".": - map_to = unhexlify(lst[1]).decode( - "charmap" if len(lst[1]) < 4 else "utf-16-be", "surrogatepass" - ) # join is here as some cases where the code was split - map_dict[ - unhexlify(lst[0]).decode( - "charmap" if map_dict[-1] == 1 else "utf-16-be", "surrogatepass" - ) - ] = map_to - int_entry.append(int(lst[0], 16)) - lst = lst[2:] - - -def compute_space_width( - ft: DictionaryObject, space_code: int, space_width: float -) -> float: - sp_width: float = space_width * 2 # default value - w = [] - w1 = {} - st: int = 0 - if "/DescendantFonts" in ft: # ft["/Subtype"].startswith("/CIDFontType"): - ft1 = ft["/DescendantFonts"][0].get_object() # type: ignore - try: - w1[-1] = cast(float, ft1["/DW"]) - except Exception: - w1[-1] = 1000.0 - if "/W" in ft1: - w = list(ft1["/W"]) - else: - w = [] - while len(w) > 0: - st = w[0] - second = w[1] - if isinstance(second, int): - for x in range(st, second): - w1[x] = w[2] - w = w[3:] - elif isinstance(second, list): - for y in second: - w1[st] = y - st += 1 - w = w[2:] - else: - logger_warning( - "unknown widths : \n" + (ft1["/W"]).__repr__(), - __name__, - ) - break - try: - sp_width = w1[space_code] - except Exception: - sp_width = ( - w1[-1] / 2.0 - ) # if using default we consider space will be only half size - elif "/Widths" in ft: - w = list(ft["/Widths"]) # type: ignore - try: - 
st = cast(int, ft["/FirstChar"])
-            en: int = cast(int, ft["/LastChar"])
-            if st > space_code or en < space_code:
-                raise Exception("Not in range")
-            if w[space_code - st] == 0:
-                raise Exception("null width")
-            sp_width = w[space_code - st]
-        except Exception:
-            if "/FontDescriptor" in ft and "/MissingWidth" in cast(
-                DictionaryObject, ft["/FontDescriptor"]
-            ):
-                sp_width = ft["/FontDescriptor"]["/MissingWidth"]  # type: ignore
-            else:
-                # will consider width of char as avg(width)/2
-                m = 0
-                cpt = 0
-                for x in w:
-                    if x > 0:
-                        m += x
-                        cpt += 1
-                sp_width = m / max(1, cpt) / 2
-    return sp_width
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__init__.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__init__.py
deleted file mode 100644
index 7e056181..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__init__.py
+++ /dev/null
@@ -1,63 +0,0 @@
-from typing import Dict, List
-
-from .adobe_glyphs import adobe_glyphs
-from .pdfdoc import _pdfdoc_encoding
-from .std import _std_encoding
-from .symbol import _symbol_encoding
-from .zapfding import _zapfding_encoding
-
-
-def fill_from_encoding(enc: str) -> List[str]:
-    lst: List[str] = []
-    for x in range(256):
-        try:
-            lst += (bytes((x,)).decode(enc),)
-        except Exception:
-            lst += (chr(x),)
-    return lst
-
-
-def rev_encoding(enc: List[str]) -> Dict[str, int]:
-    rev: Dict[str, int] = {}
-    for i in range(256):
-        char = enc[i]
-        if char == "\u0000":
-            continue
-        assert char not in rev, (
-            str(char) + " at " + str(i) + " already at " + str(rev[char])
-        )
-        rev[char] = i
-    return rev
-
-
-_win_encoding = fill_from_encoding("cp1252")
-_mac_encoding = fill_from_encoding("mac_roman")
-
-
-_win_encoding_rev: Dict[str, int] = rev_encoding(_win_encoding)
-_mac_encoding_rev: Dict[str, int] = rev_encoding(_mac_encoding)
-_symbol_encoding_rev: Dict[str, int] = rev_encoding(_symbol_encoding)
-_zapfding_encoding_rev: Dict[str, int] = rev_encoding(_zapfding_encoding)
-_pdfdoc_encoding_rev: Dict[str, int] = rev_encoding(_pdfdoc_encoding)
-
-
-charset_encoding: Dict[str, List[str]] = {
-    "/StandardCoding": _std_encoding,
-    "/WinAnsiEncoding": _win_encoding,
-    "/MacRomanEncoding": _mac_encoding,
-    "/PDFDocEncoding": _pdfdoc_encoding,
-    "/Symbol": _symbol_encoding,
-    "/ZapfDingbats": _zapfding_encoding,
-}
-
-__all__ = [
-    "adobe_glyphs",
-    "_std_encoding",
-    "_symbol_encoding",
-    "_zapfding_encoding",
-    "_pdfdoc_encoding",
-    "_pdfdoc_encoding_rev",
-    "_win_encoding",
-    "_mac_encoding",
-    "charset_encoding",
-]
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/__init__.cpython-312.pyc
deleted file mode 100644
index f7730057..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/__init__.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/adobe_glyphs.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/adobe_glyphs.cpython-312.pyc
deleted file mode 100644
index 0f5c8e65..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/adobe_glyphs.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/pdfdoc.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/pdfdoc.cpython-312.pyc
deleted file mode 100644
index b39d5b2f..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/pdfdoc.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/std.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/std.cpython-312.pyc
deleted file mode 100644
index 18a94047..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/std.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/symbol.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/symbol.cpython-312.pyc
deleted file mode 100644
index 832e3afa..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/symbol.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/zapfding.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/zapfding.cpython-312.pyc
deleted file mode 100644
index 7894e98e..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/__pycache__/zapfding.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/adobe_glyphs.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/adobe_glyphs.py
deleted file mode 100644
index 6d8f7fbd..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/adobe_glyphs.py
+++ /dev/null
@@ -1,13437 +0,0 @@
-# https://raw.githubusercontent.com/adobe-type-tools/agl-aglfn/master/glyphlist.txt
-
-# converted manually to python
-# Extended with data from GlyphNameFormatter:
-# https://github.com/LettError/glyphNameFormatter
-
-# -----------------------------------------------------------
-# Copyright 2002-2019 Adobe (http://www.adobe.com/).
-#
-# Redistribution and use in source and binary forms, with or
-# without modification, are permitted provided that the
-# following conditions are met:
-#
-# Redistributions of source code must retain the above
-# copyright notice, this list of conditions and the following
-# disclaimer.
-#
-# Redistributions in binary form must reproduce the above
-# copyright notice, this list of conditions and the following
-# disclaimer in the documentation and/or other materials
-# provided with the distribution.
-#
-# Neither the name of Adobe nor the names of its contributors
-# may be used to endorse or promote products derived from this
-# software without specific prior written permission.
-#
-# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND
-# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES,
-# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
-# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
-# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR
-# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
-# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
-# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
-# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
-# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
-# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
-# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-# -----------------------------------------------------------
-# Name: Adobe Glyph List
-# Table version: 2.0
-# Date: September 20, 2002
-# URL: https://github.com/adobe-type-tools/agl-aglfn
-#
-# Format: two semicolon-delimited fields:
-# (1) glyph name--upper/lowercase letters and digits
-# (2) Unicode scalar value--four uppercase hexadecimal digits
-#
-adobe_glyphs = {
-    "/.notdef": "\u0000",
-    "/A": "\u0041",
-    "/AA": "\uA732",
-    "/AE": "\u00C6",
[remainder of the 13,437-line auto-generated glyph-name-to-Unicode mapping in this deleted vendored file omitted]
"/Yuslittleiotifiedcyrillic": "\u0468", - "/Z": "\u005A", - "/Zaarmenian": "\u0536", - "/Zacute": "\u0179", - "/Zcaron": "\u017D", - "/Zcaronsmall": "\uF6FF", - "/Zcircle": "\u24CF", - "/Zcircleblack": "\u1F169", - "/Zcircumflex": "\u1E90", - "/Zdblstruck": "\u2124", - "/Zdescender": "\u2C6B", - "/Zdot": "\u017B", - "/Zdotaccent": "\u017B", - "/Zdotbelow": "\u1E92", - "/Zecyr": "\u0417", - "/Zecyrillic": "\u0417", - "/Zedescendercyrillic": "\u0498", - "/Zedieresiscyr": "\u04DE", - "/Zedieresiscyrillic": "\u04DE", - "/Zeta": "\u0396", - "/Zetailcyr": "\u0498", - "/Zfraktur": "\u2128", - "/Zhearmenian": "\u053A", - "/Zhebrevecyr": "\u04C1", - "/Zhebrevecyrillic": "\u04C1", - "/Zhecyr": "\u0416", - "/Zhecyrillic": "\u0416", - "/Zhedescendercyrillic": "\u0496", - "/Zhedieresiscyr": "\u04DC", - "/Zhedieresiscyrillic": "\u04DC", - "/Zhetailcyr": "\u0496", - "/Zhook": "\u0224", - "/Zjekomicyr": "\u0504", - "/Zlinebelow": "\u1E94", - "/Zmonospace": "\uFF3A", - "/Zparens": "\u1F129", - "/Zsmall": "\uF77A", - "/Zsquare": "\u1F149", - "/Zsquareblack": "\u1F189", - "/Zstroke": "\u01B5", - "/Zswashtail": "\u2C7F", - "/a": "\u0061", - "/a.inferior": "\u2090", - "/aHonRAA": "\u0613", - "/aa": "\uA733", - "/aabengali": "\u0986", - "/aacute": "\u00E1", - "/aadeva": "\u0906", - "/aagujarati": "\u0A86", - "/aagurmukhi": "\u0A06", - "/aamatragurmukhi": "\u0A3E", - "/aarusquare": "\u3303", - "/aavowelsignbengali": "\u09BE", - "/aavowelsigndeva": "\u093E", - "/aavowelsigngujarati": "\u0ABE", - "/abbreviationmarkarmenian": "\u055F", - "/abbreviationsigndeva": "\u0970", - "/abengali": "\u0985", - "/abopomofo": "\u311A", - "/abreve": "\u0103", - "/abreveacute": "\u1EAF", - "/abrevecyr": "\u04D1", - "/abrevecyrillic": "\u04D1", - "/abrevedotbelow": "\u1EB7", - "/abrevegrave": "\u1EB1", - "/abrevehoi": "\u1EB3", - "/abrevehookabove": "\u1EB3", - "/abrevetilde": "\u1EB5", - "/absquareblack": "\u1F18E", - "/acaron": "\u01CE", - "/accountof": "\u2100", - "/accurrent": "\u23E6", - "/acircle": 
"\u24D0", - "/acirclekatakana": "\u32D0", - "/acircumflex": "\u00E2", - "/acircumflexacute": "\u1EA5", - "/acircumflexdotbelow": "\u1EAD", - "/acircumflexgrave": "\u1EA7", - "/acircumflexhoi": "\u1EA9", - "/acircumflexhookabove": "\u1EA9", - "/acircumflextilde": "\u1EAB", - "/activatearabicformshaping": "\u206D", - "/activatesymmetricswapping": "\u206B", - "/acute": "\u00B4", - "/acutebelowcmb": "\u0317", - "/acutecmb": "\u0301", - "/acutecomb": "\u0301", - "/acutedblmiddlemod": "\u02F6", - "/acutedeva": "\u0954", - "/acutelowmod": "\u02CF", - "/acutemod": "\u02CA", - "/acutetonecmb": "\u0341", - "/acyr": "\u0430", - "/acyrillic": "\u0430", - "/adblgrave": "\u0201", - "/addakgurmukhi": "\u0A71", - "/addressedsubject": "\u2101", - "/adegadegpada": "\uA9CB", - "/adegpada": "\uA9CA", - "/adeva": "\u0905", - "/adieresis": "\u00E4", - "/adieresiscyr": "\u04D3", - "/adieresiscyrillic": "\u04D3", - "/adieresismacron": "\u01DF", - "/adishakti": "\u262C", - "/admissionTickets": "\u1F39F", - "/adot": "\u0227", - "/adotbelow": "\u1EA1", - "/adotmacron": "\u01E1", - "/ae": "\u00E6", - "/aeacute": "\u01FD", - "/aekorean": "\u3150", - "/aemacron": "\u01E3", - "/aerialTramway": "\u1F6A1", - "/afghani": "\u060B", - "/afii00208": "\u2015", - "/afii08941": "\u20A4", - "/afii10017": "\u0410", - "/afii10018": "\u0411", - "/afii10019": "\u0412", - "/afii10020": "\u0413", - "/afii10021": "\u0414", - "/afii10022": "\u0415", - "/afii10023": "\u0401", - "/afii10024": "\u0416", - "/afii10025": "\u0417", - "/afii10026": "\u0418", - "/afii10027": "\u0419", - "/afii10028": "\u041A", - "/afii10029": "\u041B", - "/afii10030": "\u041C", - "/afii10031": "\u041D", - "/afii10032": "\u041E", - "/afii10033": "\u041F", - "/afii10034": "\u0420", - "/afii10035": "\u0421", - "/afii10036": "\u0422", - "/afii10037": "\u0423", - "/afii10038": "\u0424", - "/afii10039": "\u0425", - "/afii10040": "\u0426", - "/afii10041": "\u0427", - "/afii10042": "\u0428", - "/afii10043": "\u0429", - "/afii10044": "\u042A", - 
"/afii10045": "\u042B", - "/afii10046": "\u042C", - "/afii10047": "\u042D", - "/afii10048": "\u042E", - "/afii10049": "\u042F", - "/afii10050": "\u0490", - "/afii10051": "\u0402", - "/afii10052": "\u0403", - "/afii10053": "\u0404", - "/afii10054": "\u0405", - "/afii10055": "\u0406", - "/afii10056": "\u0407", - "/afii10057": "\u0408", - "/afii10058": "\u0409", - "/afii10059": "\u040A", - "/afii10060": "\u040B", - "/afii10061": "\u040C", - "/afii10062": "\u040E", - "/afii10063": "\uF6C4", - "/afii10064": "\uF6C5", - "/afii10065": "\u0430", - "/afii10066": "\u0431", - "/afii10067": "\u0432", - "/afii10068": "\u0433", - "/afii10069": "\u0434", - "/afii10070": "\u0435", - "/afii10071": "\u0451", - "/afii10072": "\u0436", - "/afii10073": "\u0437", - "/afii10074": "\u0438", - "/afii10075": "\u0439", - "/afii10076": "\u043A", - "/afii10077": "\u043B", - "/afii10078": "\u043C", - "/afii10079": "\u043D", - "/afii10080": "\u043E", - "/afii10081": "\u043F", - "/afii10082": "\u0440", - "/afii10083": "\u0441", - "/afii10084": "\u0442", - "/afii10085": "\u0443", - "/afii10086": "\u0444", - "/afii10087": "\u0445", - "/afii10088": "\u0446", - "/afii10089": "\u0447", - "/afii10090": "\u0448", - "/afii10091": "\u0449", - "/afii10092": "\u044A", - "/afii10093": "\u044B", - "/afii10094": "\u044C", - "/afii10095": "\u044D", - "/afii10096": "\u044E", - "/afii10097": "\u044F", - "/afii10098": "\u0491", - "/afii10099": "\u0452", - "/afii10100": "\u0453", - "/afii10101": "\u0454", - "/afii10102": "\u0455", - "/afii10103": "\u0456", - "/afii10104": "\u0457", - "/afii10105": "\u0458", - "/afii10106": "\u0459", - "/afii10107": "\u045A", - "/afii10108": "\u045B", - "/afii10109": "\u045C", - "/afii10110": "\u045E", - "/afii10145": "\u040F", - "/afii10146": "\u0462", - "/afii10147": "\u0472", - "/afii10148": "\u0474", - "/afii10192": "\uF6C6", - "/afii10193": "\u045F", - "/afii10194": "\u0463", - "/afii10195": "\u0473", - "/afii10196": "\u0475", - "/afii10831": "\uF6C7", - "/afii10832": "\uF6C8", 
- "/afii10846": "\u04D9", - "/afii299": "\u200E", - "/afii300": "\u200F", - "/afii301": "\u200D", - "/afii57381": "\u066A", - "/afii57388": "\u060C", - "/afii57392": "\u0660", - "/afii57393": "\u0661", - "/afii57394": "\u0662", - "/afii57395": "\u0663", - "/afii57396": "\u0664", - "/afii57397": "\u0665", - "/afii57398": "\u0666", - "/afii57399": "\u0667", - "/afii57400": "\u0668", - "/afii57401": "\u0669", - "/afii57403": "\u061B", - "/afii57407": "\u061F", - "/afii57409": "\u0621", - "/afii57410": "\u0622", - "/afii57411": "\u0623", - "/afii57412": "\u0624", - "/afii57413": "\u0625", - "/afii57414": "\u0626", - "/afii57415": "\u0627", - "/afii57416": "\u0628", - "/afii57417": "\u0629", - "/afii57418": "\u062A", - "/afii57419": "\u062B", - "/afii57420": "\u062C", - "/afii57421": "\u062D", - "/afii57422": "\u062E", - "/afii57423": "\u062F", - "/afii57424": "\u0630", - "/afii57425": "\u0631", - "/afii57426": "\u0632", - "/afii57427": "\u0633", - "/afii57428": "\u0634", - "/afii57429": "\u0635", - "/afii57430": "\u0636", - "/afii57431": "\u0637", - "/afii57432": "\u0638", - "/afii57433": "\u0639", - "/afii57434": "\u063A", - "/afii57440": "\u0640", - "/afii57441": "\u0641", - "/afii57442": "\u0642", - "/afii57443": "\u0643", - "/afii57444": "\u0644", - "/afii57445": "\u0645", - "/afii57446": "\u0646", - "/afii57448": "\u0648", - "/afii57449": "\u0649", - "/afii57450": "\u064A", - "/afii57451": "\u064B", - "/afii57452": "\u064C", - "/afii57453": "\u064D", - "/afii57454": "\u064E", - "/afii57455": "\u064F", - "/afii57456": "\u0650", - "/afii57457": "\u0651", - "/afii57458": "\u0652", - "/afii57470": "\u0647", - "/afii57505": "\u06A4", - "/afii57506": "\u067E", - "/afii57507": "\u0686", - "/afii57508": "\u0698", - "/afii57509": "\u06AF", - "/afii57511": "\u0679", - "/afii57512": "\u0688", - "/afii57513": "\u0691", - "/afii57514": "\u06BA", - "/afii57519": "\u06D2", - "/afii57534": "\u06D5", - "/afii57636": "\u20AA", - "/afii57645": "\u05BE", - "/afii57658": "\u05C3", - 
"/afii57664": "\u05D0", - "/afii57665": "\u05D1", - "/afii57666": "\u05D2", - "/afii57667": "\u05D3", - "/afii57668": "\u05D4", - "/afii57669": "\u05D5", - "/afii57670": "\u05D6", - "/afii57671": "\u05D7", - "/afii57672": "\u05D8", - "/afii57673": "\u05D9", - "/afii57674": "\u05DA", - "/afii57675": "\u05DB", - "/afii57676": "\u05DC", - "/afii57677": "\u05DD", - "/afii57678": "\u05DE", - "/afii57679": "\u05DF", - "/afii57680": "\u05E0", - "/afii57681": "\u05E1", - "/afii57682": "\u05E2", - "/afii57683": "\u05E3", - "/afii57684": "\u05E4", - "/afii57685": "\u05E5", - "/afii57686": "\u05E6", - "/afii57687": "\u05E7", - "/afii57688": "\u05E8", - "/afii57689": "\u05E9", - "/afii57690": "\u05EA", - "/afii57694": "\uFB2A", - "/afii57695": "\uFB2B", - "/afii57700": "\uFB4B", - "/afii57705": "\uFB1F", - "/afii57716": "\u05F0", - "/afii57717": "\u05F1", - "/afii57718": "\u05F2", - "/afii57723": "\uFB35", - "/afii57793": "\u05B4", - "/afii57794": "\u05B5", - "/afii57795": "\u05B6", - "/afii57796": "\u05BB", - "/afii57797": "\u05B8", - "/afii57798": "\u05B7", - "/afii57799": "\u05B0", - "/afii57800": "\u05B2", - "/afii57801": "\u05B1", - "/afii57802": "\u05B3", - "/afii57803": "\u05C2", - "/afii57804": "\u05C1", - "/afii57806": "\u05B9", - "/afii57807": "\u05BC", - "/afii57839": "\u05BD", - "/afii57841": "\u05BF", - "/afii57842": "\u05C0", - "/afii57929": "\u02BC", - "/afii61248": "\u2105", - "/afii61289": "\u2113", - "/afii61352": "\u2116", - "/afii61573": "\u202C", - "/afii61574": "\u202D", - "/afii61575": "\u202E", - "/afii61664": "\u200C", - "/afii63167": "\u066D", - "/afii64937": "\u02BD", - "/agrave": "\u00E0", - "/agravedbl": "\u0201", - "/agujarati": "\u0A85", - "/agurmukhi": "\u0A05", - "/ahiragana": "\u3042", - "/ahoi": "\u1EA3", - "/ahookabove": "\u1EA3", - "/aibengali": "\u0990", - "/aibopomofo": "\u311E", - "/aideva": "\u0910", - "/aiecyr": "\u04D5", - "/aiecyrillic": "\u04D5", - "/aigujarati": "\u0A90", - "/aigurmukhi": "\u0A10", - "/aimatragurmukhi": "\u0A48", - 
"/ain.fina": "\uFECA", - "/ain.init": "\uFECB", - "/ain.init_alefmaksura.fina": "\uFCF7", - "/ain.init_jeem.fina": "\uFC29", - "/ain.init_jeem.medi": "\uFCBA", - "/ain.init_jeem.medi_meem.medi": "\uFDC4", - "/ain.init_meem.fina": "\uFC2A", - "/ain.init_meem.medi": "\uFCBB", - "/ain.init_meem.medi_meem.medi": "\uFD77", - "/ain.init_yeh.fina": "\uFCF8", - "/ain.isol": "\uFEC9", - "/ain.medi": "\uFECC", - "/ain.medi_alefmaksura.fina": "\uFD13", - "/ain.medi_jeem.medi_meem.fina": "\uFD75", - "/ain.medi_meem.medi_alefmaksura.fina": "\uFD78", - "/ain.medi_meem.medi_meem.fina": "\uFD76", - "/ain.medi_meem.medi_yeh.fina": "\uFDB6", - "/ain.medi_yeh.fina": "\uFD14", - "/ainThreeDotsDownAbove": "\u075E", - "/ainTwoDotsAbove": "\u075D", - "/ainTwoDotsVerticallyAbove": "\u075F", - "/ainarabic": "\u0639", - "/ainfinalarabic": "\uFECA", - "/aininitialarabic": "\uFECB", - "/ainmedialarabic": "\uFECC", - "/ainthreedotsabove": "\u06A0", - "/ainvertedbreve": "\u0203", - "/airplaneArriving": "\u1F6EC", - "/airplaneDeparture": "\u1F6EB", - "/aivowelsignbengali": "\u09C8", - "/aivowelsigndeva": "\u0948", - "/aivowelsigngujarati": "\u0AC8", - "/akatakana": "\u30A2", - "/akatakanahalfwidth": "\uFF71", - "/akorean": "\u314F", - "/aktieselskab": "\u214D", - "/alarmclock": "\u23F0", - "/alef": "\u05D0", - "/alef.fina": "\uFE8E", - "/alef.init_fathatan.fina": "\uFD3D", - "/alef.isol": "\uFE8D", - "/alef.medi_fathatan.fina": "\uFD3C", - "/alef:hb": "\u05D0", - "/alefDigitThreeAbove": "\u0774", - "/alefDigitTwoAbove": "\u0773", - "/alefLamYehabove": "\u0616", - "/alefabove": "\u0670", - "/alefarabic": "\u0627", - "/alefdageshhebrew": "\uFB30", - "/aleffinalarabic": "\uFE8E", - "/alefhamza": "\u0623", - "/alefhamza.fina": "\uFE84", - "/alefhamza.isol": "\uFE83", - "/alefhamzaabovearabic": "\u0623", - "/alefhamzaabovefinalarabic": "\uFE84", - "/alefhamzabelow": "\u0625", - "/alefhamzabelow.fina": "\uFE88", - "/alefhamzabelow.isol": "\uFE87", - "/alefhamzabelowarabic": "\u0625", - 
"/alefhamzabelowfinalarabic": "\uFE88", - "/alefhebrew": "\u05D0", - "/alefhighhamza": "\u0675", - "/aleflamedhebrew": "\uFB4F", - "/alefmadda": "\u0622", - "/alefmadda.fina": "\uFE82", - "/alefmadda.isol": "\uFE81", - "/alefmaddaabovearabic": "\u0622", - "/alefmaddaabovefinalarabic": "\uFE82", - "/alefmaksura": "\u0649", - "/alefmaksura.fina": "\uFEF0", - "/alefmaksura.init_superscriptalef.fina": "\uFC5D", - "/alefmaksura.isol": "\uFEEF", - "/alefmaksura.medi_superscriptalef.fina": "\uFC90", - "/alefmaksuraarabic": "\u0649", - "/alefmaksurafinalarabic": "\uFEF0", - "/alefmaksurainitialarabic": "\uFEF3", - "/alefmaksuramedialarabic": "\uFEF4", - "/alefpatahhebrew": "\uFB2E", - "/alefqamatshebrew": "\uFB2F", - "/alefwasla": "\u0671", - "/alefwasla.fina": "\uFB51", - "/alefwasla.isol": "\uFB50", - "/alefwavyhamza": "\u0672", - "/alefwavyhamzabelow": "\u0673", - "/alefwide:hb": "\uFB21", - "/alefwithmapiq:hb": "\uFB30", - "/alefwithpatah:hb": "\uFB2E", - "/alefwithqamats:hb": "\uFB2F", - "/alembic": "\u2697", - "/aleph": "\u2135", - "/alienMonster": "\u1F47E", - "/allaroundprofile": "\u232E", - "/allequal": "\u224C", - "/allianceideographiccircled": "\u32AF", - "/allianceideographicparen": "\u323F", - "/almostequalorequal": "\u224A", - "/alpha": "\u03B1", - "/alphaacute": "\u1F71", - "/alphaacuteiotasub": "\u1FB4", - "/alphaasper": "\u1F01", - "/alphaasperacute": "\u1F05", - "/alphaasperacuteiotasub": "\u1F85", - "/alphaaspergrave": "\u1F03", - "/alphaaspergraveiotasub": "\u1F83", - "/alphaasperiotasub": "\u1F81", - "/alphaaspertilde": "\u1F07", - "/alphaaspertildeiotasub": "\u1F87", - "/alphabreve": "\u1FB0", - "/alphafunc": "\u237A", - "/alphagrave": "\u1F70", - "/alphagraveiotasub": "\u1FB2", - "/alphaiotasub": "\u1FB3", - "/alphalenis": "\u1F00", - "/alphalenisacute": "\u1F04", - "/alphalenisacuteiotasub": "\u1F84", - "/alphalenisgrave": "\u1F02", - "/alphalenisgraveiotasub": "\u1F82", - "/alphalenisiotasub": "\u1F80", - "/alphalenistilde": "\u1F06", - 
"/alphalenistildeiotasub": "\u1F86", - "/alphatilde": "\u1FB6", - "/alphatildeiotasub": "\u1FB7", - "/alphatonos": "\u03AC", - "/alphaturned": "\u0252", - "/alphaunderlinefunc": "\u2376", - "/alphawithmacron": "\u1FB1", - "/alternateonewayleftwaytraffic": "\u26D5", - "/alternative": "\u2387", - "/amacron": "\u0101", - "/ambulance": "\u1F691", - "/americanFootball": "\u1F3C8", - "/amfullwidth": "\u33C2", - "/amonospace": "\uFF41", - "/amountofcheck": "\u2447", - "/ampersand": "\u0026", - "/ampersandSindhi": "\u06FD", - "/ampersandmonospace": "\uFF06", - "/ampersandsmall": "\uF726", - "/ampersandturned": "\u214B", - "/amphora": "\u1F3FA", - "/amsquare": "\u33C2", - "/anbopomofo": "\u3122", - "/anchor": "\u2693", - "/ancoradown": "\u2E14", - "/ancoraup": "\u2E15", - "/andappada": "\uA9C3", - "/angbopomofo": "\u3124", - "/anger": "\u1F4A2", - "/angkhankhuthai": "\u0E5A", - "/angle": "\u2220", - "/anglearcright": "\u22BE", - "/anglebracketleft": "\u3008", - "/anglebracketleftvertical": "\uFE3F", - "/anglebracketright": "\u3009", - "/anglebracketrightvertical": "\uFE40", - "/angledottedright": "\u2E16", - "/angleleft": "\u2329", - "/anglemarkerdottedsubstitutionright": "\u2E01", - "/anglemarkersubstitutionright": "\u2E00", - "/angleright": "\u232A", - "/anglezigzagarrowdownright": "\u237C", - "/angryFace": "\u1F620", - "/angstrom": "\u212B", - "/anguishedFace": "\u1F627", - "/ankh": "\u2625", - "/anoteleia": "\u0387", - "/anpeasquare": "\u3302", - "/ant": "\u1F41C", - "/antennaBars": "\u1F4F6", - "/anticlockwiseDownwardsAndUpwardsOpenCircleArrows": "\u1F504", - "/anudattadeva": "\u0952", - "/anusvarabengali": "\u0982", - "/anusvaradeva": "\u0902", - "/anusvaragujarati": "\u0A82", - "/ao": "\uA735", - "/aogonek": "\u0105", - "/aovermfullwidth": "\u33DF", - "/apaatosquare": "\u3300", - "/aparen": "\u249C", - "/aparenthesized": "\u249C", - "/apostrophearmenian": "\u055A", - "/apostrophedblmod": "\u02EE", - "/apostrophemod": "\u02BC", - "/apple": "\uF8FF", - "/approaches": 
"\u2250", - "/approacheslimit": "\u2250", - "/approxequal": "\u2248", - "/approxequalorimage": "\u2252", - "/approximatelybutnotactuallyequal": "\u2246", - "/approximatelyequal": "\u2245", - "/approximatelyequalorimage": "\u2252", - "/apriltelegraph": "\u32C3", - "/aquarius": "\u2652", - "/ar:ae": "\u06D5", - "/ar:ain": "\u0639", - "/ar:alef": "\u0627", - "/ar:comma": "\u060C", - "/ar:cuberoot": "\u0606", - "/ar:decimalseparator": "\u066B", - "/ar:e": "\u06D0", - "/ar:eight": "\u0668", - "/ar:feh": "\u0641", - "/ar:five": "\u0665", - "/ar:four": "\u0664", - "/ar:fourthroot": "\u0607", - "/ar:kaf": "\u0643", - "/ar:ng": "\u06AD", - "/ar:nine": "\u0669", - "/ar:numbersign": "\u0600", - "/ar:oe": "\u06C6", - "/ar:one": "\u0661", - "/ar:peh": "\u067E", - "/ar:percent": "\u066A", - "/ar:perthousand": "\u060A", - "/ar:question": "\u061F", - "/ar:reh": "\u0631", - "/ar:semicolon": "\u061B", - "/ar:seven": "\u0667", - "/ar:shadda": "\u0651", - "/ar:six": "\u0666", - "/ar:sukun": "\u0652", - "/ar:three": "\u0663", - "/ar:two": "\u0662", - "/ar:u": "\u06C7", - "/ar:ve": "\u06CB", - "/ar:yu": "\u06C8", - "/ar:zero": "\u0660", - "/araeaekorean": "\u318E", - "/araeakorean": "\u318D", - "/arc": "\u2312", - "/archaicmepigraphic": "\uA7FF", - "/aries": "\u2648", - "/arighthalfring": "\u1E9A", - "/aring": "\u00E5", - "/aringacute": "\u01FB", - "/aringbelow": "\u1E01", - "/armn:Ayb": "\u0531", - "/armn:Ben": "\u0532", - "/armn:Ca": "\u053E", - "/armn:Cha": "\u0549", - "/armn:Cheh": "\u0543", - "/armn:Co": "\u0551", - "/armn:DRAMSIGN": "\u058F", - "/armn:Da": "\u0534", - "/armn:Ech": "\u0535", - "/armn:Eh": "\u0537", - "/armn:Et": "\u0538", - "/armn:Feh": "\u0556", - "/armn:Ghad": "\u0542", - "/armn:Gim": "\u0533", - "/armn:Ho": "\u0540", - "/armn:Ini": "\u053B", - "/armn:Ja": "\u0541", - "/armn:Jheh": "\u054B", - "/armn:Keh": "\u0554", - "/armn:Ken": "\u053F", - "/armn:Liwn": "\u053C", - "/armn:Men": "\u0544", - "/armn:Now": "\u0546", - "/armn:Oh": "\u0555", - "/armn:Peh": "\u054A", 
- "/armn:Piwr": "\u0553", - "/armn:Ra": "\u054C", - "/armn:Reh": "\u0550", - "/armn:Seh": "\u054D", - "/armn:Sha": "\u0547", - "/armn:Tiwn": "\u054F", - "/armn:To": "\u0539", - "/armn:Vew": "\u054E", - "/armn:Vo": "\u0548", - "/armn:Xeh": "\u053D", - "/armn:Yi": "\u0545", - "/armn:Yiwn": "\u0552", - "/armn:Za": "\u0536", - "/armn:Zhe": "\u053A", - "/armn:abbreviationmark": "\u055F", - "/armn:apostrophe": "\u055A", - "/armn:ayb": "\u0561", - "/armn:ben": "\u0562", - "/armn:ca": "\u056E", - "/armn:cha": "\u0579", - "/armn:cheh": "\u0573", - "/armn:co": "\u0581", - "/armn:comma": "\u055D", - "/armn:da": "\u0564", - "/armn:ech": "\u0565", - "/armn:ech_yiwn": "\u0587", - "/armn:eh": "\u0567", - "/armn:emphasismark": "\u055B", - "/armn:et": "\u0568", - "/armn:exclam": "\u055C", - "/armn:feh": "\u0586", - "/armn:ghad": "\u0572", - "/armn:gim": "\u0563", - "/armn:ho": "\u0570", - "/armn:hyphen": "\u058A", - "/armn:ini": "\u056B", - "/armn:ja": "\u0571", - "/armn:jheh": "\u057B", - "/armn:keh": "\u0584", - "/armn:ken": "\u056F", - "/armn:leftfacingeternitysign": "\u058E", - "/armn:liwn": "\u056C", - "/armn:men": "\u0574", - "/armn:men_ech": "\uFB14", - "/armn:men_ini": "\uFB15", - "/armn:men_now": "\uFB13", - "/armn:men_xeh": "\uFB17", - "/armn:now": "\u0576", - "/armn:oh": "\u0585", - "/armn:peh": "\u057A", - "/armn:period": "\u0589", - "/armn:piwr": "\u0583", - "/armn:question": "\u055E", - "/armn:ra": "\u057C", - "/armn:reh": "\u0580", - "/armn:rightfacingeternitysign": "\u058D", - "/armn:ringhalfleft": "\u0559", - "/armn:seh": "\u057D", - "/armn:sha": "\u0577", - "/armn:tiwn": "\u057F", - "/armn:to": "\u0569", - "/armn:vew": "\u057E", - "/armn:vew_now": "\uFB16", - "/armn:vo": "\u0578", - "/armn:xeh": "\u056D", - "/armn:yi": "\u0575", - "/armn:yiwn": "\u0582", - "/armn:za": "\u0566", - "/armn:zhe": "\u056A", - "/arrowNE": "\u2197", - "/arrowNW": "\u2196", - "/arrowSE": "\u2198", - "/arrowSW": "\u2199", - "/arrowanticlockwiseopencircle": "\u21BA", - 
"/arrowanticlockwisesemicircle": "\u21B6", - "/arrowboth": "\u2194", - "/arrowclockwiseopencircle": "\u21BB", - "/arrowclockwisesemicircle": "\u21B7", - "/arrowdashdown": "\u21E3", - "/arrowdashleft": "\u21E0", - "/arrowdashright": "\u21E2", - "/arrowdashup": "\u21E1", - "/arrowdblboth": "\u21D4", - "/arrowdbldown": "\u21D3", - "/arrowdblleft": "\u21D0", - "/arrowdblright": "\u21D2", - "/arrowdblup": "\u21D1", - "/arrowdown": "\u2193", - "/arrowdowndashed": "\u21E3", - "/arrowdownfrombar": "\u21A7", - "/arrowdownleft": "\u2199", - "/arrowdownright": "\u2198", - "/arrowdowntwoheaded": "\u21A1", - "/arrowdownwhite": "\u21E9", - "/arrowdownzigzag": "\u21AF", - "/arrowheaddown": "\u2304", - "/arrowheaddownlowmod": "\u02EF", - "/arrowheaddownmod": "\u02C5", - "/arrowheadleftlowmod": "\u02F1", - "/arrowheadleftmod": "\u02C2", - "/arrowheadrightlowmod": "\u02F2", - "/arrowheadrightmod": "\u02C3", - "/arrowheadtwobarsuphorizontal": "\u2324", - "/arrowheadup": "\u2303", - "/arrowheaduplowmod": "\u02F0", - "/arrowheadupmod": "\u02C4", - "/arrowhorizex": "\uF8E7", - "/arrowleft": "\u2190", - "/arrowleftdashed": "\u21E0", - "/arrowleftdbl": "\u21D0", - "/arrowleftdblstroke": "\u21CD", - "/arrowleftdowncorner": "\u21B5", - "/arrowleftdowntip": "\u21B2", - "/arrowleftfrombar": "\u21A4", - "/arrowlefthook": "\u21A9", - "/arrowleftloop": "\u21AB", - "/arrowleftlowmod": "\u02FF", - "/arrowleftoverright": "\u21C6", - "/arrowleftoverrighttobar": "\u21B9", - "/arrowleftright": "\u2194", - "/arrowleftrightstroke": "\u21AE", - "/arrowleftrightwave": "\u21AD", - "/arrowleftsquiggle": "\u21DC", - "/arrowleftstroke": "\u219A", - "/arrowlefttail": "\u21A2", - "/arrowlefttobar": "\u21E4", - "/arrowlefttwoheaded": "\u219E", - "/arrowleftuptip": "\u21B0", - "/arrowleftwave": "\u219C", - "/arrowleftwhite": "\u21E6", - "/arrowlongNWtobar": "\u21B8", - "/arrowright": "\u2192", - "/arrowrightdashed": "\u21E2", - "/arrowrightdblstroke": "\u21CF", - "/arrowrightdowncorner": "\u21B4", - 
"/arrowrightdowntip": "\u21B3", - "/arrowrightfrombar": "\u21A6", - "/arrowrightheavy": "\u279E", - "/arrowrighthook": "\u21AA", - "/arrowrightloop": "\u21AC", - "/arrowrightoverleft": "\u21C4", - "/arrowrightsmallcircle": "\u21F4", - "/arrowrightsquiggle": "\u21DD", - "/arrowrightstroke": "\u219B", - "/arrowrighttail": "\u21A3", - "/arrowrighttobar": "\u21E5", - "/arrowrighttwoheaded": "\u21A0", - "/arrowrightwave": "\u219D", - "/arrowrightwhite": "\u21E8", - "/arrowspaireddown": "\u21CA", - "/arrowspairedleft": "\u21C7", - "/arrowspairedright": "\u21C9", - "/arrowspairedup": "\u21C8", - "/arrowtableft": "\u21E4", - "/arrowtabright": "\u21E5", - "/arrowup": "\u2191", - "/arrowupdashed": "\u21E1", - "/arrowupdn": "\u2195", - "/arrowupdnbse": "\u21A8", - "/arrowupdown": "\u2195", - "/arrowupdownbase": "\u21A8", - "/arrowupdownwithbase": "\u21A8", - "/arrowupfrombar": "\u21A5", - "/arrowupleft": "\u2196", - "/arrowupleftofdown": "\u21C5", - "/arrowupright": "\u2197", - "/arrowuprighttip": "\u21B1", - "/arrowuptwoheaded": "\u219F", - "/arrowupwhite": "\u21E7", - "/arrowvertex": "\uF8E6", - "/articulatedLorry": "\u1F69B", - "/artistPalette": "\u1F3A8", - "/aruhuasquare": "\u3301", - "/asciicircum": "\u005E", - "/asciicircummonospace": "\uFF3E", - "/asciitilde": "\u007E", - "/asciitildemonospace": "\uFF5E", - "/ascript": "\u0251", - "/ascriptturned": "\u0252", - "/asmallhiragana": "\u3041", - "/asmallkatakana": "\u30A1", - "/asmallkatakanahalfwidth": "\uFF67", - "/asper": "\u1FFE", - "/asperacute": "\u1FDE", - "/aspergrave": "\u1FDD", - "/aspertilde": "\u1FDF", - "/assertion": "\u22A6", - "/asterisk": "\u002A", - "/asteriskaltonearabic": "\u066D", - "/asteriskarabic": "\u066D", - "/asteriskmath": "\u2217", - "/asteriskmonospace": "\uFF0A", - "/asterisksmall": "\uFE61", - "/asterism": "\u2042", - "/astonishedFace": "\u1F632", - "/astroke": "\u2C65", - "/astronomicaluranus": "\u26E2", - "/asuperior": "\uF6E9", - "/asympticallyequal": "\u2243", - "/asymptoticallyequal": 
"\u2243", - "/at": "\u0040", - "/athleticShoe": "\u1F45F", - "/atilde": "\u00E3", - "/atmonospace": "\uFF20", - "/atnachHafukh:hb": "\u05A2", - "/atom": "\u269B", - "/atsmall": "\uFE6B", - "/attentionideographiccircled": "\u329F", - "/aturned": "\u0250", - "/au": "\uA737", - "/aubengali": "\u0994", - "/aubergine": "\u1F346", - "/aubopomofo": "\u3120", - "/audeva": "\u0914", - "/aufullwidth": "\u3373", - "/augujarati": "\u0A94", - "/augurmukhi": "\u0A14", - "/augusttelegraph": "\u32C7", - "/aulengthmarkbengali": "\u09D7", - "/aumatragurmukhi": "\u0A4C", - "/austral": "\u20B3", - "/automatedTellerMachine": "\u1F3E7", - "/automobile": "\u1F697", - "/auvowelsignbengali": "\u09CC", - "/auvowelsigndeva": "\u094C", - "/auvowelsigngujarati": "\u0ACC", - "/av": "\uA739", - "/avagrahadeva": "\u093D", - "/avhorizontalbar": "\uA73B", - "/ay": "\uA73D", - "/aybarmenian": "\u0561", - "/ayin": "\u05E2", - "/ayin:hb": "\u05E2", - "/ayinalt:hb": "\uFB20", - "/ayinaltonehebrew": "\uFB20", - "/ayinhebrew": "\u05E2", - "/azla:hb": "\u059C", - "/b": "\u0062", - "/baarerusquare": "\u332D", - "/babengali": "\u09AC", - "/babyAngel": "\u1F47C", - "/babyBottle": "\u1F37C", - "/babyChick": "\u1F424", - "/backLeftwardsArrowAbove": "\u1F519", - "/backOfEnvelope": "\u1F582", - "/backslash": "\u005C", - "/backslashbarfunc": "\u2340", - "/backslashdbl": "\u244A", - "/backslashmonospace": "\uFF3C", - "/bactrianCamel": "\u1F42B", - "/badeva": "\u092C", - "/badmintonRacquetAndShuttlecock": "\u1F3F8", - "/bagdelimitersshapeleft": "\u27C5", - "/bagdelimitersshaperight": "\u27C6", - "/baggageClaim": "\u1F6C4", - "/bagujarati": "\u0AAC", - "/bagurmukhi": "\u0A2C", - "/bahiragana": "\u3070", - "/bahtthai": "\u0E3F", - "/bakatakana": "\u30D0", - "/balloon": "\u1F388", - "/ballotBoldScriptX": "\u1F5F6", - "/ballotBoxBallot": "\u1F5F3", - "/ballotBoxBoldCheck": "\u1F5F9", - "/ballotBoxBoldScriptX": "\u1F5F7", - "/ballotBoxScriptX": "\u1F5F5", - "/ballotScriptX": "\u1F5F4", - "/bamurda": "\uA9A8", - 
"/banana": "\u1F34C", - "/bank": "\u1F3E6", - "/banknoteDollarSign": "\u1F4B5", - "/banknoteEuroSign": "\u1F4B6", - "/banknotePoundSign": "\u1F4B7", - "/banknoteYenSign": "\u1F4B4", - "/bar": "\u007C", - "/barChart": "\u1F4CA", - "/barberPole": "\u1F488", - "/barfullwidth": "\u3374", - "/barmonospace": "\uFF5C", - "/barquillverticalleft": "\u2E20", - "/barquillverticalright": "\u2E21", - "/baseball": "\u26BE", - "/basketballAndHoop": "\u1F3C0", - "/bath": "\u1F6C0", - "/bathtub": "\u1F6C1", - "/battery": "\u1F50B", - "/bbopomofo": "\u3105", - "/bcircle": "\u24D1", - "/bdot": "\u1E03", - "/bdotaccent": "\u1E03", - "/bdotbelow": "\u1E05", - "/beachUmbrella": "\u1F3D6", - "/beamedAscendingMusicalNotes": "\u1F39C", - "/beamedDescendingMusicalNotes": "\u1F39D", - "/beamedeighthnotes": "\u266B", - "/beamedsixteenthnotes": "\u266C", - "/beamfunc": "\u2336", - "/bearFace": "\u1F43B", - "/beatingHeart": "\u1F493", - "/because": "\u2235", - "/becyr": "\u0431", - "/becyrillic": "\u0431", - "/bed": "\u1F6CF", - "/beeh": "\u067B", - "/beeh.fina": "\uFB53", - "/beeh.init": "\uFB54", - "/beeh.isol": "\uFB52", - "/beeh.medi": "\uFB55", - "/beerMug": "\u1F37A", - "/beetasquare": "\u333C", - "/beh": "\u0628", - "/beh.fina": "\uFE90", - "/beh.init": "\uFE91", - "/beh.init_alefmaksura.fina": "\uFC09", - "/beh.init_hah.fina": "\uFC06", - "/beh.init_hah.medi": "\uFC9D", - "/beh.init_heh.medi": "\uFCA0", - "/beh.init_jeem.fina": "\uFC05", - "/beh.init_jeem.medi": "\uFC9C", - "/beh.init_khah.fina": "\uFC07", - "/beh.init_khah.medi": "\uFC9E", - "/beh.init_meem.fina": "\uFC08", - "/beh.init_meem.medi": "\uFC9F", - "/beh.init_yeh.fina": "\uFC0A", - "/beh.isol": "\uFE8F", - "/beh.medi": "\uFE92", - "/beh.medi_alefmaksura.fina": "\uFC6E", - "/beh.medi_hah.medi_yeh.fina": "\uFDC2", - "/beh.medi_heh.medi": "\uFCE2", - "/beh.medi_khah.medi_yeh.fina": "\uFD9E", - "/beh.medi_meem.fina": "\uFC6C", - "/beh.medi_meem.medi": "\uFCE1", - "/beh.medi_noon.fina": "\uFC6D", - "/beh.medi_reh.fina": 
"\uFC6A", - "/beh.medi_yeh.fina": "\uFC6F", - "/beh.medi_zain.fina": "\uFC6B", - "/behDotBelowThreeDotsAbove": "\u0751", - "/behInvertedSmallVBelow": "\u0755", - "/behSmallV": "\u0756", - "/behThreeDotsHorizontallyBelow": "\u0750", - "/behThreeDotsUpBelow": "\u0752", - "/behThreeDotsUpBelowTwoDotsAbove": "\u0753", - "/behTwoDotsBelowDotAbove": "\u0754", - "/beharabic": "\u0628", - "/beheh": "\u0680", - "/beheh.fina": "\uFB5B", - "/beheh.init": "\uFB5C", - "/beheh.isol": "\uFB5A", - "/beheh.medi": "\uFB5D", - "/behfinalarabic": "\uFE90", - "/behinitialarabic": "\uFE91", - "/behiragana": "\u3079", - "/behmedialarabic": "\uFE92", - "/behmeeminitialarabic": "\uFC9F", - "/behmeemisolatedarabic": "\uFC08", - "/behnoonfinalarabic": "\uFC6D", - "/bekatakana": "\u30D9", - "/bellCancellationStroke": "\u1F515", - "/bellhopBell": "\u1F6CE", - "/beltbuckle": "\u2444", - "/benarmenian": "\u0562", - "/beng:a": "\u0985", - "/beng:aa": "\u0986", - "/beng:aasign": "\u09BE", - "/beng:abbreviationsign": "\u09FD", - "/beng:ai": "\u0990", - "/beng:aisign": "\u09C8", - "/beng:anji": "\u0980", - "/beng:anusvara": "\u0982", - "/beng:au": "\u0994", - "/beng:aulengthmark": "\u09D7", - "/beng:ausign": "\u09CC", - "/beng:avagraha": "\u09BD", - "/beng:ba": "\u09AC", - "/beng:bha": "\u09AD", - "/beng:ca": "\u099A", - "/beng:candrabindu": "\u0981", - "/beng:cha": "\u099B", - "/beng:currencyoneless": "\u09F8", - "/beng:da": "\u09A6", - "/beng:dda": "\u09A1", - "/beng:ddha": "\u09A2", - "/beng:dha": "\u09A7", - "/beng:e": "\u098F", - "/beng:eight": "\u09EE", - "/beng:esign": "\u09C7", - "/beng:five": "\u09EB", - "/beng:four": "\u09EA", - "/beng:fourcurrencynumerator": "\u09F7", - "/beng:ga": "\u0997", - "/beng:gandamark": "\u09FB", - "/beng:gha": "\u0998", - "/beng:ha": "\u09B9", - "/beng:i": "\u0987", - "/beng:ii": "\u0988", - "/beng:iisign": "\u09C0", - "/beng:isign": "\u09BF", - "/beng:isshar": "\u09FA", - "/beng:ja": "\u099C", - "/beng:jha": "\u099D", - "/beng:ka": "\u0995", - "/beng:kha": 
"\u0996", - "/beng:khandata": "\u09CE", - "/beng:la": "\u09B2", - "/beng:llvocal": "\u09E1", - "/beng:llvocalsign": "\u09E3", - "/beng:lvocal": "\u098C", - "/beng:lvocalsign": "\u09E2", - "/beng:ma": "\u09AE", - "/beng:na": "\u09A8", - "/beng:nga": "\u0999", - "/beng:nine": "\u09EF", - "/beng:nna": "\u09A3", - "/beng:nukta": "\u09BC", - "/beng:nya": "\u099E", - "/beng:o": "\u0993", - "/beng:one": "\u09E7", - "/beng:onecurrencynumerator": "\u09F4", - "/beng:osign": "\u09CB", - "/beng:pa": "\u09AA", - "/beng:pha": "\u09AB", - "/beng:ra": "\u09B0", - "/beng:ralowdiagonal": "\u09F1", - "/beng:ramiddiagonal": "\u09F0", - "/beng:rha": "\u09DD", - "/beng:rra": "\u09DC", - "/beng:rrvocal": "\u09E0", - "/beng:rrvocalsign": "\u09C4", - "/beng:rupee": "\u09F3", - "/beng:rupeemark": "\u09F2", - "/beng:rvocal": "\u098B", - "/beng:rvocalsign": "\u09C3", - "/beng:sa": "\u09B8", - "/beng:seven": "\u09ED", - "/beng:sha": "\u09B6", - "/beng:six": "\u09EC", - "/beng:sixteencurrencydenominator": "\u09F9", - "/beng:ssa": "\u09B7", - "/beng:ta": "\u09A4", - "/beng:tha": "\u09A5", - "/beng:three": "\u09E9", - "/beng:threecurrencynumerator": "\u09F6", - "/beng:tta": "\u099F", - "/beng:ttha": "\u09A0", - "/beng:two": "\u09E8", - "/beng:twocurrencynumerator": "\u09F5", - "/beng:u": "\u0989", - "/beng:usign": "\u09C1", - "/beng:uu": "\u098A", - "/beng:uusign": "\u09C2", - "/beng:vedicanusvara": "\u09FC", - "/beng:virama": "\u09CD", - "/beng:visarga": "\u0983", - "/beng:ya": "\u09AF", - "/beng:yya": "\u09DF", - "/beng:zero": "\u09E6", - "/bentoBox": "\u1F371", - "/benzenering": "\u232C", - "/benzeneringcircle": "\u23E3", - "/bet": "\u05D1", - "/bet:hb": "\u05D1", - "/beta": "\u03B2", - "/betasymbol": "\u03D0", - "/betasymbolgreek": "\u03D0", - "/betdagesh": "\uFB31", - "/betdageshhebrew": "\uFB31", - "/bethebrew": "\u05D1", - "/betrafehebrew": "\uFB4C", - "/between": "\u226C", - "/betwithdagesh:hb": "\uFB31", - "/betwithrafe:hb": "\uFB4C", - "/bflourish": "\uA797", - "/bhabengali": "\u09AD", 
- "/bhadeva": "\u092D", - "/bhagujarati": "\u0AAD", - "/bhagurmukhi": "\u0A2D", - "/bhook": "\u0253", - "/bicycle": "\u1F6B2", - "/bicyclist": "\u1F6B4", - "/bihiragana": "\u3073", - "/bikatakana": "\u30D3", - "/bikini": "\u1F459", - "/bilabialclick": "\u0298", - "/billiards": "\u1F3B1", - "/bindigurmukhi": "\u0A02", - "/biohazard": "\u2623", - "/bird": "\u1F426", - "/birthdayCake": "\u1F382", - "/birusquare": "\u3331", - "/bishopblack": "\u265D", - "/bishopwhite": "\u2657", - "/bitcoin": "\u20BF", - "/blackDownPointingBackhandIndex": "\u1F5A3", - "/blackDroplet": "\u1F322", - "/blackFolder": "\u1F5BF", - "/blackHardShellFloppyDisk": "\u1F5AA", - "/blackHeart": "\u1F5A4", - "/blackLeftPointingBackhandIndex": "\u1F59C", - "/blackPennant": "\u1F3F2", - "/blackPushpin": "\u1F588", - "/blackRightPointingBackhandIndex": "\u1F59D", - "/blackRosette": "\u1F3F6", - "/blackSkullAndCrossbones": "\u1F571", - "/blackSquareButton": "\u1F532", - "/blackTouchtoneTelephone": "\u1F57F", - "/blackUpPointingBackhandIndex": "\u1F5A2", - "/blackcircle": "\u25CF", - "/blackcircleforrecord": "\u23FA", - "/blackdiamond": "\u25C6", - "/blackdownpointingtriangle": "\u25BC", - "/blackforstopsquare": "\u23F9", - "/blackleftpointingpointer": "\u25C4", - "/blackleftpointingtriangle": "\u25C0", - "/blacklenticularbracketleft": "\u3010", - "/blacklenticularbracketleftvertical": "\uFE3B", - "/blacklenticularbracketright": "\u3011", - "/blacklenticularbracketrightvertical": "\uFE3C", - "/blacklowerlefttriangle": "\u25E3", - "/blacklowerrighttriangle": "\u25E2", - "/blackmediumpointingtriangledown": "\u23F7", - "/blackmediumpointingtriangleleft": "\u23F4", - "/blackmediumpointingtriangleright": "\u23F5", - "/blackmediumpointingtriangleup": "\u23F6", - "/blackpointingdoubletrianglebarverticalleft": "\u23EE", - "/blackpointingdoubletrianglebarverticalright": "\u23ED", - "/blackpointingdoubletriangledown": "\u23EC", - "/blackpointingdoubletriangleleft": "\u23EA", - "/blackpointingdoubletriangleright": 
"\u23E9", - "/blackpointingdoubletriangleup": "\u23EB", - "/blackpointingtriangledoublebarverticalright": "\u23EF", - "/blackrectangle": "\u25AC", - "/blackrightpointingpointer": "\u25BA", - "/blackrightpointingtriangle": "\u25B6", - "/blacksmallsquare": "\u25AA", - "/blacksmilingface": "\u263B", - "/blacksquare": "\u25A0", - "/blackstar": "\u2605", - "/blackupperlefttriangle": "\u25E4", - "/blackupperrighttriangle": "\u25E5", - "/blackuppointingsmalltriangle": "\u25B4", - "/blackuppointingtriangle": "\u25B2", - "/blackwardsbulletleft": "\u204C", - "/blackwardsbulletright": "\u204D", - "/blank": "\u2423", - "/blinebelow": "\u1E07", - "/block": "\u2588", - "/blossom": "\u1F33C", - "/blowfish": "\u1F421", - "/blueBook": "\u1F4D8", - "/blueHeart": "\u1F499", - "/bmonospace": "\uFF42", - "/boar": "\u1F417", - "/board": "\u2328", - "/bobaimaithai": "\u0E1A", - "/bohiragana": "\u307C", - "/bokatakana": "\u30DC", - "/bomb": "\u1F4A3", - "/book": "\u1F56E", - "/bookmark": "\u1F516", - "/bookmarkTabs": "\u1F4D1", - "/books": "\u1F4DA", - "/bopo:a": "\u311A", - "/bopo:ai": "\u311E", - "/bopo:an": "\u3122", - "/bopo:ang": "\u3124", - "/bopo:au": "\u3120", - "/bopo:b": "\u3105", - "/bopo:c": "\u3118", - "/bopo:ch": "\u3114", - "/bopo:d": "\u3109", - "/bopo:e": "\u311C", - "/bopo:eh": "\u311D", - "/bopo:ei": "\u311F", - "/bopo:en": "\u3123", - "/bopo:eng": "\u3125", - "/bopo:er": "\u3126", - "/bopo:f": "\u3108", - "/bopo:g": "\u310D", - "/bopo:gn": "\u312C", - "/bopo:h": "\u310F", - "/bopo:i": "\u3127", - "/bopo:ih": "\u312D", - "/bopo:iu": "\u3129", - "/bopo:j": "\u3110", - "/bopo:k": "\u310E", - "/bopo:l": "\u310C", - "/bopo:m": "\u3107", - "/bopo:n": "\u310B", - "/bopo:ng": "\u312B", - "/bopo:o": "\u311B", - "/bopo:ou": "\u3121", - "/bopo:owithdotabove": "\u312E", - "/bopo:p": "\u3106", - "/bopo:q": "\u3111", - "/bopo:r": "\u3116", - "/bopo:s": "\u3119", - "/bopo:sh": "\u3115", - "/bopo:t": "\u310A", - "/bopo:u": "\u3128", - "/bopo:v": "\u312A", - "/bopo:x": "\u3112", - 
"/bopo:z": "\u3117", - "/bopo:zh": "\u3113", - "/borutosquare": "\u333E", - "/bottlePoppingCork": "\u1F37E", - "/bouquet": "\u1F490", - "/bouquetOfFlowers": "\u1F395", - "/bowAndArrow": "\u1F3F9", - "/bowlOfHygieia": "\u1F54F", - "/bowling": "\u1F3B3", - "/boxlineverticalleft": "\u23B8", - "/boxlineverticalright": "\u23B9", - "/boy": "\u1F466", - "/boys": "\u1F6C9", - "/bparen": "\u249D", - "/bparenthesized": "\u249D", - "/bqfullwidth": "\u33C3", - "/bqsquare": "\u33C3", - "/braceex": "\uF8F4", - "/braceleft": "\u007B", - "/braceleftbt": "\uF8F3", - "/braceleftmid": "\uF8F2", - "/braceleftmonospace": "\uFF5B", - "/braceleftsmall": "\uFE5B", - "/bracelefttp": "\uF8F1", - "/braceleftvertical": "\uFE37", - "/braceright": "\u007D", - "/bracerightbt": "\uF8FE", - "/bracerightmid": "\uF8FD", - "/bracerightmonospace": "\uFF5D", - "/bracerightsmall": "\uFE5C", - "/bracerighttp": "\uF8FC", - "/bracerightvertical": "\uFE38", - "/bracketangledblleft": "\u27EA", - "/bracketangledblright": "\u27EB", - "/bracketangleleft": "\u27E8", - "/bracketangleright": "\u27E9", - "/bracketbottomcurly": "\u23DF", - "/bracketbottomsquare": "\u23B5", - "/bracketcornerupleftsquare": "\u23A1", - "/bracketcorneruprightsquare": "\u23A4", - "/bracketdottedsubstitutionleft": "\u2E04", - "/bracketdottedsubstitutionright": "\u2E05", - "/bracketextensioncurly": "\u23AA", - "/bracketextensionleftsquare": "\u23A2", - "/bracketextensionrightsquare": "\u23A5", - "/brackethalfbottomleft": "\u2E24", - "/brackethalfbottomright": "\u2E25", - "/brackethalftopleft": "\u2E22", - "/brackethalftopright": "\u2E23", - "/brackethookupleftcurly": "\u23A7", - "/brackethookuprightcurly": "\u23AB", - "/bracketleft": "\u005B", - "/bracketleftbt": "\uF8F0", - "/bracketleftex": "\uF8EF", - "/bracketleftmonospace": "\uFF3B", - "/bracketleftsquarequill": "\u2045", - "/bracketlefttp": "\uF8EE", - "/bracketlowercornerleftsquare": "\u23A3", - "/bracketlowercornerrightsquare": "\u23A6", - "/bracketlowerhookleftcurly": "\u23A9", - 
"/bracketlowerhookrightcurly": "\u23AD", - "/bracketmiddlepieceleftcurly": "\u23A8", - "/bracketmiddlepiecerightcurly": "\u23AC", - "/bracketoverbrackettopbottomsquare": "\u23B6", - "/bracketparaphraselowleft": "\u2E1C", - "/bracketparaphraselowright": "\u2E1D", - "/bracketraisedleft": "\u2E0C", - "/bracketraisedright": "\u2E0D", - "/bracketright": "\u005D", - "/bracketrightbt": "\uF8FB", - "/bracketrightex": "\uF8FA", - "/bracketrightmonospace": "\uFF3D", - "/bracketrightsquarequill": "\u2046", - "/bracketrighttp": "\uF8F9", - "/bracketsectionupleftlowerrightcurly": "\u23B0", - "/bracketsectionuprightlowerleftcurly": "\u23B1", - "/bracketshellbottom": "\u23E1", - "/bracketshelltop": "\u23E0", - "/bracketshellwhiteleft": "\u27EC", - "/bracketshellwhiteright": "\u27ED", - "/bracketsubstitutionleft": "\u2E02", - "/bracketsubstitutionright": "\u2E03", - "/brackettopcurly": "\u23DE", - "/brackettopsquare": "\u23B4", - "/brackettranspositionleft": "\u2E09", - "/brackettranspositionright": "\u2E0A", - "/bracketwhitesquareleft": "\u27E6", - "/bracketwhitesquareright": "\u27E7", - "/branchbankidentification": "\u2446", - "/bread": "\u1F35E", - "/breve": "\u02D8", - "/brevebelowcmb": "\u032E", - "/brevecmb": "\u0306", - "/breveinvertedbelowcmb": "\u032F", - "/breveinvertedcmb": "\u0311", - "/breveinverteddoublecmb": "\u0361", - "/brevemetrical": "\u23D1", - "/brideVeil": "\u1F470", - "/bridgeAtNight": "\u1F309", - "/bridgebelowcmb": "\u032A", - "/bridgeinvertedbelowcmb": "\u033A", - "/briefcase": "\u1F4BC", - "/brll:blank": "\u2800", - "/brokenHeart": "\u1F494", - "/brokenbar": "\u00A6", - "/brokencirclenorthwestarrow": "\u238B", - "/bstroke": "\u0180", - "/bsuperior": "\uF6EA", - "/btopbar": "\u0183", - "/bug": "\u1F41B", - "/buhiragana": "\u3076", - "/buildingConstruction": "\u1F3D7", - "/bukatakana": "\u30D6", - "/bullet": "\u2022", - "/bulletinverse": "\u25D8", - "/bulletoperator": "\u2219", - "/bullhorn": "\u1F56B", - "/bullhornSoundWaves": "\u1F56C", - "/bullseye": 
"\u25CE", - "/burrito": "\u1F32F", - "/bus": "\u1F68C", - "/busStop": "\u1F68F", - "/bussyerusquare": "\u3334", - "/bustInSilhouette": "\u1F464", - "/bustsInSilhouette": "\u1F465", - "/c": "\u0063", - "/caarmenian": "\u056E", - "/cabengali": "\u099A", - "/cactus": "\u1F335", - "/cacute": "\u0107", - "/cadauna": "\u2106", - "/cadeva": "\u091A", - "/caduceus": "\u2624", - "/cagujarati": "\u0A9A", - "/cagurmukhi": "\u0A1A", - "/cakraconsonant": "\uA9BF", - "/calendar": "\u1F4C5", - "/calfullwidth": "\u3388", - "/callideographicparen": "\u323A", - "/calsquare": "\u3388", - "/camera": "\u1F4F7", - "/cameraFlash": "\u1F4F8", - "/camping": "\u1F3D5", - "/camurda": "\uA996", - "/cancellationX": "\u1F5D9", - "/cancer": "\u264B", - "/candle": "\u1F56F", - "/candrabindubengali": "\u0981", - "/candrabinducmb": "\u0310", - "/candrabindudeva": "\u0901", - "/candrabindugujarati": "\u0A81", - "/candy": "\u1F36C", - "/canoe": "\u1F6F6", - "/capitulum": "\u2E3F", - "/capricorn": "\u2651", - "/capslock": "\u21EA", - "/cardFileBox": "\u1F5C3", - "/cardIndex": "\u1F4C7", - "/cardIndexDividers": "\u1F5C2", - "/careof": "\u2105", - "/caret": "\u2038", - "/caretinsertionpoint": "\u2041", - "/carettildedownfunc": "\u2371", - "/carettildeupfunc": "\u2372", - "/caron": "\u02C7", - "/caronbelowcmb": "\u032C", - "/caroncmb": "\u030C", - "/carouselHorse": "\u1F3A0", - "/carpStreamer": "\u1F38F", - "/carriagereturn": "\u21B5", - "/carsliding": "\u26D0", - "/castle": "\u26EB", - "/cat": "\u1F408", - "/catFace": "\u1F431", - "/catFaceWithTearsOfJoy": "\u1F639", - "/catFaceWithWrySmile": "\u1F63C", - "/caution": "\u2621", - "/cbar": "\uA793", - "/cbopomofo": "\u3118", - "/ccaron": "\u010D", - "/ccedilla": "\u00E7", - "/ccedillaacute": "\u1E09", - "/ccfullwidth": "\u33C4", - "/ccircle": "\u24D2", - "/ccircumflex": "\u0109", - "/ccurl": "\u0255", - "/cdfullwidth": "\u33C5", - "/cdot": "\u010B", - "/cdotaccent": "\u010B", - "/cdotreversed": "\uA73F", - "/cdsquare": "\u33C5", - "/cecak": "\uA981", - 
"/cecaktelu": "\uA9B3", - "/cedi": "\u20B5", - "/cedilla": "\u00B8", - "/cedillacmb": "\u0327", - "/ceilingleft": "\u2308", - "/ceilingright": "\u2309", - "/celticCross": "\u1F548", - "/cent": "\u00A2", - "/centigrade": "\u2103", - "/centinferior": "\uF6DF", - "/centmonospace": "\uFFE0", - "/centoldstyle": "\uF7A2", - "/centreddotwhitediamond": "\u27D0", - "/centreideographiccircled": "\u32A5", - "/centreline": "\u2104", - "/centrelineverticalsquarewhite": "\u2385", - "/centsuperior": "\uF6E0", - "/ceres": "\u26B3", - "/chaarmenian": "\u0579", - "/chabengali": "\u099B", - "/chadeva": "\u091B", - "/chagujarati": "\u0A9B", - "/chagurmukhi": "\u0A1B", - "/chains": "\u26D3", - "/chair": "\u2441", - "/chamkocircle": "\u327C", - "/charactertie": "\u2040", - "/chartDownwardsTrend": "\u1F4C9", - "/chartUpwardsTrend": "\u1F4C8", - "/chartUpwardsTrendAndYenSign": "\u1F4B9", - "/chbopomofo": "\u3114", - "/cheabkhasiancyrillic": "\u04BD", - "/cheabkhcyr": "\u04BD", - "/cheabkhtailcyr": "\u04BF", - "/checkbox": "\u2610", - "/checkboxchecked": "\u2611", - "/checkboxx": "\u2612", - "/checkmark": "\u2713", - "/checyr": "\u0447", - "/checyrillic": "\u0447", - "/chedescenderabkhasiancyrillic": "\u04BF", - "/chedescendercyrillic": "\u04B7", - "/chedieresiscyr": "\u04F5", - "/chedieresiscyrillic": "\u04F5", - "/cheeringMegaphone": "\u1F4E3", - "/cheharmenian": "\u0573", - "/chekhakascyr": "\u04CC", - "/chekhakassiancyrillic": "\u04CC", - "/chequeredFlag": "\u1F3C1", - "/cherries": "\u1F352", - "/cherryBlossom": "\u1F338", - "/chestnut": "\u1F330", - "/chetailcyr": "\u04B7", - "/chevertcyr": "\u04B9", - "/cheverticalstrokecyrillic": "\u04B9", - "/chi": "\u03C7", - "/chicken": "\u1F414", - "/chieuchacirclekorean": "\u3277", - "/chieuchaparenkorean": "\u3217", - "/chieuchcirclekorean": "\u3269", - "/chieuchkorean": "\u314A", - "/chieuchparenkorean": "\u3209", - "/childrenCrossing": "\u1F6B8", - "/chipmunk": "\u1F43F", - "/chirho": "\u2627", - "/chiron": "\u26B7", - "/chochangthai": 
"\u0E0A", - "/chochanthai": "\u0E08", - "/chochingthai": "\u0E09", - "/chochoethai": "\u0E0C", - "/chocolateBar": "\u1F36B", - "/chook": "\u0188", - "/christmasTree": "\u1F384", - "/church": "\u26EA", - "/cieucacirclekorean": "\u3276", - "/cieucaparenkorean": "\u3216", - "/cieuccirclekorean": "\u3268", - "/cieuckorean": "\u3148", - "/cieucparenkorean": "\u3208", - "/cieucuparenkorean": "\u321C", - "/cinema": "\u1F3A6", - "/circle": "\u25CB", - "/circleallbutupperquadrantleftblack": "\u25D5", - "/circlebackslashfunc": "\u2349", - "/circleblack": "\u25CF", - "/circledCrossPommee": "\u1F540", - "/circledInformationSource": "\u1F6C8", - "/circledasteriskoperator": "\u229B", - "/circledbarnotchhorizontal": "\u2389", - "/circledcrossinglanes": "\u26D2", - "/circleddash": "\u229D", - "/circleddivisionslash": "\u2298", - "/circleddotoperator": "\u2299", - "/circledequals": "\u229C", - "/circlediaeresisfunc": "\u2365", - "/circledminus": "\u2296", - "/circledot": "\u2299", - "/circledotrightwhite": "\u2686", - "/circledotted": "\u25CC", - "/circledringoperator": "\u229A", - "/circledtriangledown": "\u238A", - "/circlehalfleftblack": "\u25D0", - "/circlehalfrightblack": "\u25D1", - "/circleinversewhite": "\u25D9", - "/circlejotfunc": "\u233E", - "/circlelowerhalfblack": "\u25D2", - "/circlelowerquadrantleftwhite": "\u25F5", - "/circlelowerquadrantrightwhite": "\u25F6", - "/circlemultiply": "\u2297", - "/circleot": "\u2299", - "/circleplus": "\u2295", - "/circlepostalmark": "\u3036", - "/circlestarfunc": "\u235F", - "/circlestilefunc": "\u233D", - "/circlestroketwodotsaboveheavy": "\u26E3", - "/circletwodotsblackwhite": "\u2689", - "/circletwodotswhite": "\u2687", - "/circleunderlinefunc": "\u235C", - "/circleupperhalfblack": "\u25D3", - "/circleupperquadrantleftwhite": "\u25F4", - "/circleupperquadrantrightblack": "\u25D4", - "/circleupperquadrantrightwhite": "\u25F7", - "/circleverticalfill": "\u25CD", - "/circlewhite": "\u25CB", - "/circlewhitedotrightblack": "\u2688", - 
"/circlewithlefthalfblack": "\u25D0", - "/circlewithrighthalfblack": "\u25D1", - "/circumflex": "\u02C6", - "/circumflexbelowcmb": "\u032D", - "/circumflexcmb": "\u0302", - "/circumflexlow": "\uA788", - "/circusTent": "\u1F3AA", - "/cityscape": "\u1F3D9", - "/cityscapeAtDusk": "\u1F306", - "/cjk:ideographiccomma": "\u3001", - "/cjk:tortoiseshellbracketleft": "\u3014", - "/cjk:tortoiseshellbracketright": "\u3015", - "/clamshellMobilePhone": "\u1F581", - "/clapperBoard": "\u1F3AC", - "/clappingHandsSign": "\u1F44F", - "/classicalBuilding": "\u1F3DB", - "/clear": "\u2327", - "/clearscreen": "\u239A", - "/clickalveolar": "\u01C2", - "/clickbilabial": "\u0298", - "/clickdental": "\u01C0", - "/clicklateral": "\u01C1", - "/clickretroflex": "\u01C3", - "/clinkingBeerMugs": "\u1F37B", - "/clipboard": "\u1F4CB", - "/clockFaceEight-thirty": "\u1F563", - "/clockFaceEightOclock": "\u1F557", - "/clockFaceEleven-thirty": "\u1F566", - "/clockFaceElevenOclock": "\u1F55A", - "/clockFaceFive-thirty": "\u1F560", - "/clockFaceFiveOclock": "\u1F554", - "/clockFaceFour-thirty": "\u1F55F", - "/clockFaceFourOclock": "\u1F553", - "/clockFaceNine-thirty": "\u1F564", - "/clockFaceNineOclock": "\u1F558", - "/clockFaceOne-thirty": "\u1F55C", - "/clockFaceOneOclock": "\u1F550", - "/clockFaceSeven-thirty": "\u1F562", - "/clockFaceSevenOclock": "\u1F556", - "/clockFaceSix-thirty": "\u1F561", - "/clockFaceSixOclock": "\u1F555", - "/clockFaceTen-thirty": "\u1F565", - "/clockFaceTenOclock": "\u1F559", - "/clockFaceThree-thirty": "\u1F55E", - "/clockFaceThreeOclock": "\u1F552", - "/clockFaceTwelve-thirty": "\u1F567", - "/clockFaceTwelveOclock": "\u1F55B", - "/clockFaceTwo-thirty": "\u1F55D", - "/clockFaceTwoOclock": "\u1F551", - "/clockwiseDownwardsAndUpwardsOpenCircleArrows": "\u1F503", - "/clockwiseRightAndLeftSemicircleArrows": "\u1F5D8", - "/clockwiseRightwardsAndLeftwardsOpenCircleArrows": "\u1F501", - "/clockwiseRightwardsAndLeftwardsOpenCircleArrowsCircledOneOverlay": "\u1F502", - 
"/closedBook": "\u1F4D5", - "/closedLockKey": "\u1F510", - "/closedMailboxLoweredFlag": "\u1F4EA", - "/closedMailboxRaisedFlag": "\u1F4EB", - "/closedUmbrella": "\u1F302", - "/closedentryleft": "\u26DC", - "/closeup": "\u2050", - "/cloud": "\u2601", - "/cloudLightning": "\u1F329", - "/cloudRain": "\u1F327", - "/cloudSnow": "\u1F328", - "/cloudTornado": "\u1F32A", - "/clsquare": "\u1F191", - "/club": "\u2663", - "/clubblack": "\u2663", - "/clubsuitblack": "\u2663", - "/clubsuitwhite": "\u2667", - "/clubwhite": "\u2667", - "/cm2fullwidth": "\u33A0", - "/cm3fullwidth": "\u33A4", - "/cmb:a": "\u0363", - "/cmb:aaboveflat": "\u1DD3", - "/cmb:aboveogonek": "\u1DCE", - "/cmb:acute": "\u0301", - "/cmb:acutebelow": "\u0317", - "/cmb:acutegraveacute": "\u1DC9", - "/cmb:acutemacron": "\u1DC7", - "/cmb:acutetone": "\u0341", - "/cmb:adieresis": "\u1DF2", - "/cmb:ae": "\u1DD4", - "/cmb:almostequalabove": "\u034C", - "/cmb:almostequaltobelow": "\u1DFD", - "/cmb:alpha": "\u1DE7", - "/cmb:ao": "\u1DD5", - "/cmb:arrowheadleftbelow": "\u0354", - "/cmb:arrowheadrightabove": "\u0350", - "/cmb:arrowheadrightarrowheadupbelow": "\u0356", - "/cmb:arrowheadrightbelow": "\u0355", - "/cmb:arrowleftrightbelow": "\u034D", - "/cmb:arrowrightdoublebelow": "\u0362", - "/cmb:arrowupbelow": "\u034E", - "/cmb:asteriskbelow": "\u0359", - "/cmb:av": "\u1DD6", - "/cmb:b": "\u1DE8", - "/cmb:belowbreve": "\u032E", - "/cmb:beta": "\u1DE9", - "/cmb:breve": "\u0306", - "/cmb:brevemacron": "\u1DCB", - "/cmb:bridgeabove": "\u0346", - "/cmb:bridgebelow": "\u032A", - "/cmb:c": "\u0368", - "/cmb:candrabindu": "\u0310", - "/cmb:caron": "\u030C", - "/cmb:caronbelow": "\u032C", - "/cmb:ccedilla": "\u1DD7", - "/cmb:cedilla": "\u0327", - "/cmb:circumflex": "\u0302", - "/cmb:circumflexbelow": "\u032D", - "/cmb:commaaccentbelow": "\u0326", - "/cmb:commaturnedabove": "\u0312", - "/cmb:d": "\u0369", - "/cmb:dblarchinvertedbelow": "\u032B", - "/cmb:dbloverline": "\u033F", - "/cmb:dblverticallineabove": "\u030E", - 
"/cmb:dblverticallinebelow": "\u0348", - "/cmb:deletionmark": "\u1DFB", - "/cmb:dialytikatonos": "\u0344", - "/cmb:dieresis": "\u0308", - "/cmb:dieresisbelow": "\u0324", - "/cmb:dotaboveleft": "\u1DF8", - "/cmb:dotaccent": "\u0307", - "/cmb:dotbelowcomb": "\u0323", - "/cmb:dotrightabove": "\u0358", - "/cmb:dottedacute": "\u1DC1", - "/cmb:dottedgrave": "\u1DC0", - "/cmb:doubleabovecircumflex": "\u1DCD", - "/cmb:doublebelowbreve": "\u035C", - "/cmb:doublebreve": "\u035D", - "/cmb:doubleinvertedbelowbreve": "\u1DFC", - "/cmb:doubleringbelow": "\u035A", - "/cmb:downtackbelow": "\u031E", - "/cmb:e": "\u0364", - "/cmb:equalbelow": "\u0347", - "/cmb:esh": "\u1DEF", - "/cmb:eth": "\u1DD9", - "/cmb:f": "\u1DEB", - "/cmb:fermata": "\u0352", - "/cmb:g": "\u1DDA", - "/cmb:graphemejoiner": "\u034F", - "/cmb:grave": "\u0300", - "/cmb:graveacutegrave": "\u1DC8", - "/cmb:gravebelow": "\u0316", - "/cmb:gravedouble": "\u030F", - "/cmb:gravemacron": "\u1DC5", - "/cmb:gravetone": "\u0340", - "/cmb:gsmall": "\u1DDB", - "/cmb:h": "\u036A", - "/cmb:halfleftringabove": "\u0351", - "/cmb:halfleftringbelow": "\u031C", - "/cmb:halfrightringabove": "\u0357", - "/cmb:halfrightringbelow": "\u0339", - "/cmb:homotheticabove": "\u034B", - "/cmb:hookabove": "\u0309", - "/cmb:horn": "\u031B", - "/cmb:hungarumlaut": "\u030B", - "/cmb:i": "\u0365", - "/cmb:insulard": "\u1DD8", - "/cmb:invertedbelowbreve": "\u032F", - "/cmb:invertedbreve": "\u0311", - "/cmb:invertedbridgebelow": "\u033A", - "/cmb:inverteddoublebreve": "\u0361", - "/cmb:iotasub": "\u0345", - "/cmb:isbelow": "\u1DD0", - "/cmb:k": "\u1DDC", - "/cmb:kavykaaboveleft": "\u1DF7", - "/cmb:kavykaaboveright": "\u1DF6", - "/cmb:koronis": "\u0343", - "/cmb:l": "\u1DDD", - "/cmb:leftangleabove": "\u031A", - "/cmb:leftanglebelow": "\u0349", - "/cmb:leftarrowheadabove": "\u1DFE", - "/cmb:lefttackbelow": "\u0318", - "/cmb:lineverticalabove": "\u030D", - "/cmb:lineverticalbelow": "\u0329", - "/cmb:longs": "\u1DE5", - "/cmb:lowline": "\u0332", - 
"/cmb:lowlinedouble": "\u0333", - "/cmb:lsmall": "\u1DDE", - "/cmb:lwithdoublemiddletilde": "\u1DEC", - "/cmb:m": "\u036B", - "/cmb:macron": "\u0304", - "/cmb:macronacute": "\u1DC4", - "/cmb:macronbelow": "\u0331", - "/cmb:macronbreve": "\u1DCC", - "/cmb:macrondouble": "\u035E", - "/cmb:macrondoublebelow": "\u035F", - "/cmb:macrongrave": "\u1DC6", - "/cmb:minusbelow": "\u0320", - "/cmb:msmall": "\u1DDF", - "/cmb:n": "\u1DE0", - "/cmb:nottildeabove": "\u034A", - "/cmb:nsmall": "\u1DE1", - "/cmb:o": "\u0366", - "/cmb:odieresis": "\u1DF3", - "/cmb:ogonek": "\u0328", - "/cmb:overlaystrokelong": "\u0336", - "/cmb:overlaystrokeshort": "\u0335", - "/cmb:overline": "\u0305", - "/cmb:owithlightcentralizationstroke": "\u1DED", - "/cmb:p": "\u1DEE", - "/cmb:palatalizedhookbelow": "\u0321", - "/cmb:perispomeni": "\u0342", - "/cmb:plusbelow": "\u031F", - "/cmb:r": "\u036C", - "/cmb:rbelow": "\u1DCA", - "/cmb:retroflexhookbelow": "\u0322", - "/cmb:reversedcommaabove": "\u0314", - "/cmb:rightarrowheadanddownarrowheadbelow": "\u1DFF", - "/cmb:righttackbelow": "\u0319", - "/cmb:ringabove": "\u030A", - "/cmb:ringbelow": "\u0325", - "/cmb:rrotunda": "\u1DE3", - "/cmb:rsmall": "\u1DE2", - "/cmb:s": "\u1DE4", - "/cmb:schwa": "\u1DEA", - "/cmb:seagullbelow": "\u033C", - "/cmb:snakebelow": "\u1DC2", - "/cmb:soliduslongoverlay": "\u0338", - "/cmb:solidusshortoverlay": "\u0337", - "/cmb:squarebelow": "\u033B", - "/cmb:suspensionmark": "\u1DC3", - "/cmb:t": "\u036D", - "/cmb:tilde": "\u0303", - "/cmb:tildebelow": "\u0330", - "/cmb:tildedouble": "\u0360", - "/cmb:tildeoverlay": "\u0334", - "/cmb:tildevertical": "\u033E", - "/cmb:turnedabove": "\u0313", - "/cmb:turnedcommaabove": "\u0315", - "/cmb:u": "\u0367", - "/cmb:udieresis": "\u1DF4", - "/cmb:uptackabove": "\u1DF5", - "/cmb:uptackbelow": "\u031D", - "/cmb:urabove": "\u1DD1", - "/cmb:usabove": "\u1DD2", - "/cmb:uwithlightcentralizationstroke": "\u1DF0", - "/cmb:v": "\u036E", - "/cmb:w": "\u1DF1", - "/cmb:wideinvertedbridgebelow": 
"\u1DF9", - "/cmb:x": "\u036F", - "/cmb:xabove": "\u033D", - "/cmb:xbelow": "\u0353", - "/cmb:z": "\u1DE6", - "/cmb:zigzagabove": "\u035B", - "/cmb:zigzagbelow": "\u1DCF", - "/cmcubedsquare": "\u33A4", - "/cmfullwidth": "\u339D", - "/cmonospace": "\uFF43", - "/cmsquaredsquare": "\u33A0", - "/cntr:acknowledge": "\u2406", - "/cntr:backspace": "\u2408", - "/cntr:bell": "\u2407", - "/cntr:blank": "\u2422", - "/cntr:cancel": "\u2418", - "/cntr:carriagereturn": "\u240D", - "/cntr:datalinkescape": "\u2410", - "/cntr:delete": "\u2421", - "/cntr:deleteformtwo": "\u2425", - "/cntr:devicecontrolfour": "\u2414", - "/cntr:devicecontrolone": "\u2411", - "/cntr:devicecontrolthree": "\u2413", - "/cntr:devicecontroltwo": "\u2412", - "/cntr:endofmedium": "\u2419", - "/cntr:endoftext": "\u2403", - "/cntr:endoftransmission": "\u2404", - "/cntr:endoftransmissionblock": "\u2417", - "/cntr:enquiry": "\u2405", - "/cntr:escape": "\u241B", - "/cntr:fileseparator": "\u241C", - "/cntr:formfeed": "\u240C", - "/cntr:groupseparator": "\u241D", - "/cntr:horizontaltab": "\u2409", - "/cntr:linefeed": "\u240A", - "/cntr:negativeacknowledge": "\u2415", - "/cntr:newline": "\u2424", - "/cntr:null": "\u2400", - "/cntr:openbox": "\u2423", - "/cntr:recordseparator": "\u241E", - "/cntr:shiftin": "\u240F", - "/cntr:shiftout": "\u240E", - "/cntr:space": "\u2420", - "/cntr:startofheading": "\u2401", - "/cntr:startoftext": "\u2402", - "/cntr:substitute": "\u241A", - "/cntr:substituteformtwo": "\u2426", - "/cntr:synchronousidle": "\u2416", - "/cntr:unitseparator": "\u241F", - "/cntr:verticaltab": "\u240B", - "/coarmenian": "\u0581", - "/cocktailGlass": "\u1F378", - "/coffin": "\u26B0", - "/cofullwidth": "\u33C7", - "/collision": "\u1F4A5", - "/colon": "\u003A", - "/colonequals": "\u2254", - "/colonmod": "\uA789", - "/colonmonetary": "\u20A1", - "/colonmonospace": "\uFF1A", - "/colonraisedmod": "\u02F8", - "/colonsign": "\u20A1", - "/colonsmall": "\uFE55", - "/colontriangularhalfmod": "\u02D1", - 
"/colontriangularmod": "\u02D0", - "/comet": "\u2604", - "/comma": "\u002C", - "/commaabovecmb": "\u0313", - "/commaaboverightcmb": "\u0315", - "/commaaccent": "\uF6C3", - "/commaarabic": "\u060C", - "/commaarmenian": "\u055D", - "/commabarfunc": "\u236A", - "/commainferior": "\uF6E1", - "/commamonospace": "\uFF0C", - "/commaraised": "\u2E34", - "/commareversed": "\u2E41", - "/commareversedabovecmb": "\u0314", - "/commareversedmod": "\u02BD", - "/commasmall": "\uFE50", - "/commasuperior": "\uF6E2", - "/commaturnedabovecmb": "\u0312", - "/commaturnedmod": "\u02BB", - "/commercialat": "\uFE6B", - "/commercialminussign": "\u2052", - "/compass": "\u263C", - "/complement": "\u2201", - "/composition": "\u2384", - "/compression": "\u1F5DC", - "/con": "\uA76F", - "/confettiBall": "\u1F38A", - "/confoundedFace": "\u1F616", - "/confusedFace": "\u1F615", - "/congratulationideographiccircled": "\u3297", - "/congratulationideographicparen": "\u3237", - "/congruent": "\u2245", - "/conicaltaper": "\u2332", - "/conjunction": "\u260C", - "/consquareupblack": "\u26FE", - "/constructionSign": "\u1F6A7", - "/constructionWorker": "\u1F477", - "/containsasmembersmall": "\u220D", - "/containsasnormalsubgroorequalup": "\u22B5", - "/containsasnormalsubgroup": "\u22B3", - "/containslonghorizontalstroke": "\u22FA", - "/containsoverbar": "\u22FD", - "/containsoverbarsmall": "\u22FE", - "/containssmallverticalbarhorizontalstroke": "\u22FC", - "/containsverticalbarhorizontalstroke": "\u22FB", - "/continuousunderline": "\u2381", - "/contourintegral": "\u222E", - "/control": "\u2303", - "/controlACK": "\u0006", - "/controlBEL": "\u0007", - "/controlBS": "\u0008", - "/controlCAN": "\u0018", - "/controlCR": "\u000D", - "/controlDC1": "\u0011", - "/controlDC2": "\u0012", - "/controlDC3": "\u0013", - "/controlDC4": "\u0014", - "/controlDEL": "\u007F", - "/controlDLE": "\u0010", - "/controlEM": "\u0019", - "/controlENQ": "\u0005", - "/controlEOT": "\u0004", - "/controlESC": "\u001B", - "/controlETB": 
"\u0017", - "/controlETX": "\u0003", - "/controlFF": "\u000C", - "/controlFS": "\u001C", - "/controlGS": "\u001D", - "/controlHT": "\u0009", - "/controlKnobs": "\u1F39B", - "/controlLF": "\u000A", - "/controlNAK": "\u0015", - "/controlRS": "\u001E", - "/controlSI": "\u000F", - "/controlSO": "\u000E", - "/controlSOT": "\u0002", - "/controlSTX": "\u0001", - "/controlSUB": "\u001A", - "/controlSYN": "\u0016", - "/controlUS": "\u001F", - "/controlVT": "\u000B", - "/convavediamondwhite": "\u27E1", - "/convenienceStore": "\u1F3EA", - "/cookedRice": "\u1F35A", - "/cookie": "\u1F36A", - "/cooking": "\u1F373", - "/coolsquare": "\u1F192", - "/coproductarray": "\u2210", - "/copyideographiccircled": "\u32A2", - "/copyright": "\u00A9", - "/copyrightsans": "\uF8E9", - "/copyrightserif": "\uF6D9", - "/cornerbottomleft": "\u231E", - "/cornerbottomright": "\u231F", - "/cornerbracketleft": "\u300C", - "/cornerbracketlefthalfwidth": "\uFF62", - "/cornerbracketleftvertical": "\uFE41", - "/cornerbracketright": "\u300D", - "/cornerbracketrighthalfwidth": "\uFF63", - "/cornerbracketrightvertical": "\uFE42", - "/cornerdotupleft": "\u27D4", - "/cornertopleft": "\u231C", - "/cornertopright": "\u231D", - "/coroniseditorial": "\u2E0E", - "/corporationsquare": "\u337F", - "/correctideographiccircled": "\u32A3", - "/corresponds": "\u2258", - "/cosquare": "\u33C7", - "/couchAndLamp": "\u1F6CB", - "/counterbore": "\u2334", - "/countersink": "\u2335", - "/coupleHeart": "\u1F491", - "/coverkgfullwidth": "\u33C6", - "/coverkgsquare": "\u33C6", - "/cow": "\u1F404", - "/cowFace": "\u1F42E", - "/cpalatalhook": "\uA794", - "/cparen": "\u249E", - "/cparenthesized": "\u249E", - "/creditCard": "\u1F4B3", - "/crescentMoon": "\u1F319", - "/creversed": "\u2184", - "/cricketBatAndBall": "\u1F3CF", - "/crocodile": "\u1F40A", - "/cropbottomleft": "\u230D", - "/cropbottomright": "\u230C", - "/croptopleft": "\u230F", - "/croptopright": "\u230E", - "/crossPommee": "\u1F542", - "/crossPommeeHalf-circleBelow": 
"\u1F541", - "/crossedFlags": "\u1F38C", - "/crossedswords": "\u2694", - "/crossinglanes": "\u26CC", - "/crossmod": "\u02DF", - "/crossofjerusalem": "\u2629", - "/crossoflorraine": "\u2628", - "/crossonshieldblack": "\u26E8", - "/crown": "\u1F451", - "/crrn:rupee": "\u20A8", - "/cruzeiro": "\u20A2", - "/cryingCatFace": "\u1F63F", - "/cryingFace": "\u1F622", - "/crystalBall": "\u1F52E", - "/cstretched": "\u0297", - "/cstroke": "\u023C", - "/cuatrillo": "\uA72D", - "/cuatrillocomma": "\uA72F", - "/curlyand": "\u22CF", - "/curlylogicaland": "\u22CF", - "/curlylogicalor": "\u22CE", - "/curlyor": "\u22CE", - "/currency": "\u00A4", - "/currencyExchange": "\u1F4B1", - "/curryAndRice": "\u1F35B", - "/custard": "\u1F36E", - "/customeraccountnumber": "\u2449", - "/customs": "\u1F6C3", - "/cyclone": "\u1F300", - "/cylindricity": "\u232D", - "/cyrBreve": "\uF6D1", - "/cyrFlex": "\uF6D2", - "/cyrbreve": "\uF6D4", - "/cyrflex": "\uF6D5", - "/d": "\u0064", - "/daarmenian": "\u0564", - "/daasusquare": "\u3324", - "/dabengali": "\u09A6", - "/dad": "\u0636", - "/dad.fina": "\uFEBE", - "/dad.init": "\uFEBF", - "/dad.init_alefmaksura.fina": "\uFD07", - "/dad.init_hah.fina": "\uFC23", - "/dad.init_hah.medi": "\uFCB5", - "/dad.init_jeem.fina": "\uFC22", - "/dad.init_jeem.medi": "\uFCB4", - "/dad.init_khah.fina": "\uFC24", - "/dad.init_khah.medi": "\uFCB6", - "/dad.init_khah.medi_meem.medi": "\uFD70", - "/dad.init_meem.fina": "\uFC25", - "/dad.init_meem.medi": "\uFCB7", - "/dad.init_reh.fina": "\uFD10", - "/dad.init_yeh.fina": "\uFD08", - "/dad.isol": "\uFEBD", - "/dad.medi": "\uFEC0", - "/dad.medi_alefmaksura.fina": "\uFD23", - "/dad.medi_hah.medi_alefmaksura.fina": "\uFD6E", - "/dad.medi_hah.medi_yeh.fina": "\uFDAB", - "/dad.medi_khah.medi_meem.fina": "\uFD6F", - "/dad.medi_reh.fina": "\uFD2C", - "/dad.medi_yeh.fina": "\uFD24", - "/dadarabic": "\u0636", - "/daddotbelow": "\u06FB", - "/dadeva": "\u0926", - "/dadfinalarabic": "\uFEBE", - "/dadinitialarabic": "\uFEBF", - 
"/dadmedialarabic": "\uFEC0", - "/dafullwidth": "\u3372", - "/dagesh": "\u05BC", - "/dagesh:hb": "\u05BC", - "/dageshhebrew": "\u05BC", - "/dagger": "\u2020", - "/daggerKnife": "\u1F5E1", - "/daggerdbl": "\u2021", - "/daggerwithguardleft": "\u2E36", - "/daggerwithguardright": "\u2E37", - "/dagujarati": "\u0AA6", - "/dagurmukhi": "\u0A26", - "/dahal": "\u068C", - "/dahal.fina": "\uFB85", - "/dahal.isol": "\uFB84", - "/dahiragana": "\u3060", - "/dakatakana": "\u30C0", - "/dal": "\u062F", - "/dal.fina": "\uFEAA", - "/dal.isol": "\uFEA9", - "/dalInvertedSmallVBelow": "\u075A", - "/dalTwoDotsVerticallyBelowSmallTah": "\u0759", - "/dalarabic": "\u062F", - "/daldotbelow": "\u068A", - "/daldotbelowtahsmall": "\u068B", - "/daldownthreedotsabove": "\u068F", - "/dalet": "\u05D3", - "/dalet:hb": "\u05D3", - "/daletdagesh": "\uFB33", - "/daletdageshhebrew": "\uFB33", - "/dalethatafpatah": "\u05D3", - "/dalethatafpatahhebrew": "\u05D3", - "/dalethatafsegol": "\u05D3", - "/dalethatafsegolhebrew": "\u05D3", - "/dalethebrew": "\u05D3", - "/dalethiriq": "\u05D3", - "/dalethiriqhebrew": "\u05D3", - "/daletholam": "\u05D3", - "/daletholamhebrew": "\u05D3", - "/daletpatah": "\u05D3", - "/daletpatahhebrew": "\u05D3", - "/daletqamats": "\u05D3", - "/daletqamatshebrew": "\u05D3", - "/daletqubuts": "\u05D3", - "/daletqubutshebrew": "\u05D3", - "/daletsegol": "\u05D3", - "/daletsegolhebrew": "\u05D3", - "/daletsheva": "\u05D3", - "/daletshevahebrew": "\u05D3", - "/dalettsere": "\u05D3", - "/dalettserehebrew": "\u05D3", - "/daletwide:hb": "\uFB22", - "/daletwithdagesh:hb": "\uFB33", - "/dalfinalarabic": "\uFEAA", - "/dalfourdotsabove": "\u0690", - "/dalinvertedV": "\u06EE", - "/dalring": "\u0689", - "/damahaprana": "\uA9A3", - "/damma": "\u064F", - "/dammaIsol": "\uFE78", - "/dammaMedi": "\uFE79", - "/dammaarabic": "\u064F", - "/dammalowarabic": "\u064F", - "/dammareversed": "\u065D", - "/dammasmall": "\u0619", - "/dammatan": "\u064C", - "/dammatanIsol": "\uFE72", - "/dammatanaltonearabic": 
"\u064C", - "/dammatanarabic": "\u064C", - "/dancer": "\u1F483", - "/danda": "\u0964", - "/dango": "\u1F361", - "/darga:hb": "\u05A7", - "/dargahebrew": "\u05A7", - "/dargalefthebrew": "\u05A7", - "/darkShade": "\u2593", - "/darkSunglasses": "\u1F576", - "/dashwithupturnleft": "\u2E43", - "/dasiacmbcyr": "\u0485", - "/dasiapneumatacyrilliccmb": "\u0485", - "/dateseparator": "\u060D", - "/dayeighteentelegraph": "\u33F1", - "/dayeighttelegraph": "\u33E7", - "/dayeleventelegraph": "\u33EA", - "/dayfifteentelegraph": "\u33EE", - "/dayfivetelegraph": "\u33E4", - "/dayfourteentelegraph": "\u33ED", - "/dayfourtelegraph": "\u33E3", - "/daynineteentelegraph": "\u33F2", - "/dayninetelegraph": "\u33E8", - "/dayonetelegraph": "\u33E0", - "/dayseventeentelegraph": "\u33F0", - "/dayseventelegraph": "\u33E6", - "/daysixteentelegraph": "\u33EF", - "/daysixtelegraph": "\u33E5", - "/daytentelegraph": "\u33E9", - "/daythirteentelegraph": "\u33EC", - "/daythirtyonetelegraph": "\u33FE", - "/daythirtytelegraph": "\u33FD", - "/daythreetelegraph": "\u33E2", - "/daytwelvetelegraph": "\u33EB", - "/daytwentyeighttelegraph": "\u33FB", - "/daytwentyfivetelegraph": "\u33F8", - "/daytwentyfourtelegraph": "\u33F7", - "/daytwentyninetelegraph": "\u33FC", - "/daytwentyonetelegraph": "\u33F4", - "/daytwentyseventelegraph": "\u33FA", - "/daytwentysixtelegraph": "\u33F9", - "/daytwentytelegraph": "\u33F3", - "/daytwentythreetelegraph": "\u33F6", - "/daytwentytwotelegraph": "\u33F5", - "/daytwotelegraph": "\u33E1", - "/dbdigraph": "\u0238", - "/dbfullwidth": "\u33C8", - "/dblGrave": "\uF6D3", - "/dblanglebracketleft": "\u300A", - "/dblanglebracketleftvertical": "\uFE3D", - "/dblanglebracketright": "\u300B", - "/dblanglebracketrightvertical": "\uFE3E", - "/dblarchinvertedbelowcmb": "\u032B", - "/dblarrowNE": "\u21D7", - "/dblarrowNW": "\u21D6", - "/dblarrowSE": "\u21D8", - "/dblarrowSW": "\u21D9", - "/dblarrowdown": "\u21D3", - "/dblarrowleft": "\u21D4", - "/dblarrowleftright": "\u21D4", - 
"/dblarrowleftrightstroke": "\u21CE", - "/dblarrowleftstroke": "\u21CD", - "/dblarrowright": "\u21D2", - "/dblarrowrightstroke": "\u21CF", - "/dblarrowup": "\u21D1", - "/dblarrowupdown": "\u21D5", - "/dbldanda": "\u0965", - "/dbldnhorz": "\u2566", - "/dbldnleft": "\u2557", - "/dbldnright": "\u2554", - "/dblgrave": "\uF6D6", - "/dblgravecmb": "\u030F", - "/dblhorz": "\u2550", - "/dblintegral": "\u222C", - "/dbllowline": "\u2017", - "/dbllowlinecmb": "\u0333", - "/dbloverlinecmb": "\u033F", - "/dblprimemod": "\u02BA", - "/dblstrokearrowdown": "\u21DF", - "/dblstrokearrowup": "\u21DE", - "/dbluphorz": "\u2569", - "/dblupleft": "\u255D", - "/dblupright": "\u255A", - "/dblvert": "\u2551", - "/dblverthorz": "\u256C", - "/dblverticalbar": "\u2016", - "/dblverticallineabovecmb": "\u030E", - "/dblvertleft": "\u2563", - "/dblvertright": "\u2560", - "/dbopomofo": "\u3109", - "/dbsquare": "\u33C8", - "/dcaron": "\u010F", - "/dcedilla": "\u1E11", - "/dchecyr": "\u052D", - "/dcircle": "\u24D3", - "/dcircumflexbelow": "\u1E13", - "/dcroat": "\u0111", - "/dcurl": "\u0221", - "/ddabengali": "\u09A1", - "/ddadeva": "\u0921", - "/ddagujarati": "\u0AA1", - "/ddagurmukhi": "\u0A21", - "/ddahal": "\u068D", - "/ddahal.fina": "\uFB83", - "/ddahal.isol": "\uFB82", - "/ddal": "\u0688", - "/ddal.fina": "\uFB89", - "/ddal.isol": "\uFB88", - "/ddalarabic": "\u0688", - "/ddalfinalarabic": "\uFB89", - "/ddamahaprana": "\uA99E", - "/ddblstruckitalic": "\u2146", - "/dddhadeva": "\u095C", - "/ddhabengali": "\u09A2", - "/ddhadeva": "\u0922", - "/ddhagujarati": "\u0AA2", - "/ddhagurmukhi": "\u0A22", - "/ddot": "\u1E0B", - "/ddotaccent": "\u1E0B", - "/ddotbelow": "\u1E0D", - "/decembertelegraph": "\u32CB", - "/deciduousTree": "\u1F333", - "/decimalexponent": "\u23E8", - "/decimalseparatorarabic": "\u066B", - "/decimalseparatorpersian": "\u066B", - "/decreaseFontSize": "\u1F5DB", - "/decyr": "\u0434", - "/decyrillic": "\u0434", - "/degree": "\u00B0", - "/degreecelsius": "\u2103", - "/degreefahrenheit": 
"\u2109", - "/dehi:hb": "\u05AD", - "/dehihebrew": "\u05AD", - "/dehiragana": "\u3067", - "/deicoptic": "\u03EF", - "/dekatakana": "\u30C7", - "/dekomicyr": "\u0501", - "/deldiaeresisfunc": "\u2362", - "/deleteleft": "\u232B", - "/deleteright": "\u2326", - "/deliveryTruck": "\u1F69A", - "/delstilefunc": "\u2352", - "/delta": "\u03B4", - "/deltaequal": "\u225C", - "/deltastilefunc": "\u234B", - "/deltaturned": "\u018D", - "/deltaunderlinefunc": "\u2359", - "/deltildefunc": "\u236B", - "/denominatorminusonenumeratorbengali": "\u09F8", - "/dentistrybottomverticalleft": "\u23CC", - "/dentistrybottomverticalright": "\u23BF", - "/dentistrycircledownhorizontal": "\u23C1", - "/dentistrycircleuphorizontal": "\u23C2", - "/dentistrycirclevertical": "\u23C0", - "/dentistrydownhorizontal": "\u23C9", - "/dentistrytopverticalleft": "\u23CB", - "/dentistrytopverticalright": "\u23BE", - "/dentistrytriangledownhorizontal": "\u23C4", - "/dentistrytriangleuphorizontal": "\u23C5", - "/dentistrytrianglevertical": "\u23C3", - "/dentistryuphorizontal": "\u23CA", - "/dentistrywavedownhorizontal": "\u23C7", - "/dentistrywaveuphorizontal": "\u23C8", - "/dentistrywavevertical": "\u23C6", - "/departmentStore": "\u1F3EC", - "/derelictHouseBuilding": "\u1F3DA", - "/desert": "\u1F3DC", - "/desertIsland": "\u1F3DD", - "/desisquare": "\u3325", - "/desktopComputer": "\u1F5A5", - "/desktopWindow": "\u1F5D4", - "/deva:a": "\u0905", - "/deva:aa": "\u0906", - "/deva:aasign": "\u093E", - "/deva:abbreviation": "\u0970", - "/deva:acandra": "\u0972", - "/deva:acute": "\u0954", - "/deva:ai": "\u0910", - "/deva:aisign": "\u0948", - "/deva:anudatta": "\u0952", - "/deva:anusvara": "\u0902", - "/deva:ashort": "\u0904", - "/deva:au": "\u0914", - "/deva:ausign": "\u094C", - "/deva:avagraha": "\u093D", - "/deva:aw": "\u0975", - "/deva:awsign": "\u094F", - "/deva:ba": "\u092C", - "/deva:bba": "\u097F", - "/deva:bha": "\u092D", - "/deva:ca": "\u091A", - "/deva:candrabindu": "\u0901", - "/deva:candrabinduinverted": 
"\u0900", - "/deva:cha": "\u091B", - "/deva:da": "\u0926", - "/deva:danda": "\u0964", - "/deva:dbldanda": "\u0965", - "/deva:dda": "\u0921", - "/deva:ddda": "\u097E", - "/deva:dddha": "\u095C", - "/deva:ddha": "\u0922", - "/deva:dha": "\u0927", - "/deva:dothigh": "\u0971", - "/deva:e": "\u090F", - "/deva:ecandra": "\u090D", - "/deva:eight": "\u096E", - "/deva:eshort": "\u090E", - "/deva:esign": "\u0947", - "/deva:esigncandra": "\u0945", - "/deva:esignprishthamatra": "\u094E", - "/deva:esignshort": "\u0946", - "/deva:fa": "\u095E", - "/deva:five": "\u096B", - "/deva:four": "\u096A", - "/deva:ga": "\u0917", - "/deva:gga": "\u097B", - "/deva:gha": "\u0918", - "/deva:ghha": "\u095A", - "/deva:glottalstop": "\u097D", - "/deva:grave": "\u0953", - "/deva:ha": "\u0939", - "/deva:i": "\u0907", - "/deva:ii": "\u0908", - "/deva:iisign": "\u0940", - "/deva:isign": "\u093F", - "/deva:ja": "\u091C", - "/deva:jha": "\u091D", - "/deva:jja": "\u097C", - "/deva:ka": "\u0915", - "/deva:kha": "\u0916", - "/deva:khha": "\u0959", - "/deva:la": "\u0932", - "/deva:lla": "\u0933", - "/deva:llla": "\u0934", - "/deva:llvocal": "\u0961", - "/deva:llvocalsign": "\u0963", - "/deva:lvocal": "\u090C", - "/deva:lvocalsign": "\u0962", - "/deva:ma": "\u092E", - "/deva:marwaridda": "\u0978", - "/deva:na": "\u0928", - "/deva:nga": "\u0919", - "/deva:nine": "\u096F", - "/deva:nna": "\u0923", - "/deva:nnna": "\u0929", - "/deva:nukta": "\u093C", - "/deva:nya": "\u091E", - "/deva:o": "\u0913", - "/deva:ocandra": "\u0911", - "/deva:oe": "\u0973", - "/deva:oesign": "\u093A", - "/deva:om": "\u0950", - "/deva:one": "\u0967", - "/deva:ooe": "\u0974", - "/deva:ooesign": "\u093B", - "/deva:oshort": "\u0912", - "/deva:osign": "\u094B", - "/deva:osigncandra": "\u0949", - "/deva:osignshort": "\u094A", - "/deva:pa": "\u092A", - "/deva:pha": "\u092B", - "/deva:qa": "\u0958", - "/deva:ra": "\u0930", - "/deva:rha": "\u095D", - "/deva:rra": "\u0931", - "/deva:rrvocal": "\u0960", - "/deva:rrvocalsign": "\u0944", - 
"/deva:rvocal": "\u090B", - "/deva:rvocalsign": "\u0943", - "/deva:sa": "\u0938", - "/deva:seven": "\u096D", - "/deva:sha": "\u0936", - "/deva:signelongcandra": "\u0955", - "/deva:six": "\u096C", - "/deva:ssa": "\u0937", - "/deva:ta": "\u0924", - "/deva:tha": "\u0925", - "/deva:three": "\u0969", - "/deva:tta": "\u091F", - "/deva:ttha": "\u0920", - "/deva:two": "\u0968", - "/deva:u": "\u0909", - "/deva:udatta": "\u0951", - "/deva:ue": "\u0976", - "/deva:uesign": "\u0956", - "/deva:usign": "\u0941", - "/deva:uu": "\u090A", - "/deva:uue": "\u0977", - "/deva:uuesign": "\u0957", - "/deva:uusign": "\u0942", - "/deva:va": "\u0935", - "/deva:virama": "\u094D", - "/deva:visarga": "\u0903", - "/deva:ya": "\u092F", - "/deva:yaheavy": "\u097A", - "/deva:yya": "\u095F", - "/deva:za": "\u095B", - "/deva:zero": "\u0966", - "/deva:zha": "\u0979", - "/dezh": "\u02A4", - "/dfemaledbl": "\u26A2", - "/dhabengali": "\u09A7", - "/dhadeva": "\u0927", - "/dhagujarati": "\u0AA7", - "/dhagurmukhi": "\u0A27", - "/dhook": "\u0257", - "/diaeresisgreaterfunc": "\u2369", - "/dialytikatonos": "\u0385", - "/dialytikatonoscmb": "\u0344", - "/diametersign": "\u2300", - "/diamond": "\u2666", - "/diamondShapeADotInside": "\u1F4A0", - "/diamondinsquarewhite": "\u26CB", - "/diamondoperator": "\u22C4", - "/diamondsuitwhite": "\u2662", - "/diamondunderlinefunc": "\u235A", - "/diamondwhitewithdiamondsmallblack": "\u25C8", - "/diefive": "\u2684", - "/diefour": "\u2683", - "/dieone": "\u2680", - "/dieresis": "\u00A8", - "/dieresisacute": "\uF6D7", - "/dieresisbelowcmb": "\u0324", - "/dieresiscmb": "\u0308", - "/dieresisgrave": "\uF6D8", - "/dieresistilde": "\u1FC1", - "/dieresistonos": "\u0385", - "/dieselLocomotive": "\u1F6F2", - "/diesix": "\u2685", - "/diethree": "\u2682", - "/dietwo": "\u2681", - "/differencebetween": "\u224F", - "/digamma": "\u03DD", - "/digammapamphylian": "\u0377", - "/digramgreateryang": "\u268C", - "/digramgreateryin": "\u268F", - "/digramlesseryang": "\u268E", - "/digramlesseryin": 
"\u268D", - "/dihiragana": "\u3062", - "/dikatakana": "\u30C2", - "/dimensionorigin": "\u2331", - "/dingbatSAns-serifzerocircle": "\u1F10B", - "/dingbatSAns-serifzerocircleblack": "\u1F10C", - "/dinsular": "\uA77A", - "/directHit": "\u1F3AF", - "/directcurrentformtwo": "\u2393", - "/dirgamurevowel": "\uA9BB", - "/disabledcar": "\u26CD", - "/disappointedButRelievedFace": "\u1F625", - "/disappointedFace": "\u1F61E", - "/discontinuousunderline": "\u2382", - "/dittomark": "\u3003", - "/divide": "\u00F7", - "/divides": "\u2223", - "/divisionslash": "\u2215", - "/divisiontimes": "\u22C7", - "/divorce": "\u26AE", - "/dizzy": "\u1F4AB", - "/dizzyFace": "\u1F635", - "/djecyr": "\u0452", - "/djecyrillic": "\u0452", - "/djekomicyr": "\u0503", - "/dkshade": "\u2593", - "/dlfullwidth": "\u3397", - "/dlinebelow": "\u1E0F", - "/dlogicalorsquare": "\u27CF", - "/dlogicalsquare": "\u27CE", - "/dlsquare": "\u3397", - "/dm2fullwidth": "\u3378", - "/dm3fullwidth": "\u3379", - "/dmacron": "\u0111", - "/dmaledbl": "\u26A3", - "/dmfullwidth": "\u3377", - "/dmonospace": "\uFF44", - "/dnblock": "\u2584", - "/dndblhorzsng": "\u2565", - "/dndblleftsng": "\u2556", - "/dndblrightsng": "\u2553", - "/dngb:airplane": "\u2708", - "/dngb:arrowfeatheredblackNE": "\u27B6", - "/dngb:arrowfeatheredblackSE": "\u27B4", - "/dngb:arrowfeatheredblackheavyNE": "\u27B9", - "/dngb:arrowfeatheredblackheavySE": "\u27B7", - "/dngb:arrowheadrightblack": "\u27A4", - "/dngb:arrowheadrightthreeDbottomlight": "\u27A3", - "/dngb:arrowheadrightthreeDtoplight": "\u27A2", - "/dngb:arrowheavyNE": "\u279A", - "/dngb:arrowheavySE": "\u2798", - "/dngb:arrowrightbacktiltedshadowedwhite": "\u27AB", - "/dngb:arrowrightblack": "\u27A1", - "/dngb:arrowrightcircledwhiteheavy": "\u27B2", - "/dngb:arrowrightcurvedownblackheavy": "\u27A5", - "/dngb:arrowrightcurveupblackheavy": "\u27A6", - "/dngb:arrowrightfeatheredblack": "\u27B5", - "/dngb:arrowrightfeatheredblackheavy": "\u27B8", - "/dngb:arrowrightfeatheredwhite": "\u27B3", - 
"/dngb:arrowrightfronttiltedshadowedwhite": "\u27AC", - "/dngb:arrowrightheavy": "\u2799", - "/dngb:arrowrightleftshadedwhite": "\u27AA", - "/dngb:arrowrightoutlinedopen": "\u27BE", - "/dngb:arrowrightpointed": "\u279B", - "/dngb:arrowrightpointedblackheavy": "\u27A8", - "/dngb:arrowrightrightshadedwhite": "\u27A9", - "/dngb:arrowrightroundheavy": "\u279C", - "/dngb:arrowrightsquatblack": "\u27A7", - "/dngb:arrowrighttriangle": "\u279D", - "/dngb:arrowrighttriangledashed": "\u279F", - "/dngb:arrowrighttriangledashedheavy": "\u27A0", - "/dngb:arrowrighttriangleheavy": "\u279E", - "/dngb:arrowrightwedge": "\u27BC", - "/dngb:arrowrightwedgeheavy": "\u27BD", - "/dngb:arrowrightwideheavy": "\u2794", - "/dngb:arrowshadowrightlowerwhiteheavy": "\u27AD", - "/dngb:arrowshadowrightnotchedlowerwhite": "\u27AF", - "/dngb:arrowshadowrightnotchedupperwhite": "\u27B1", - "/dngb:arrowshadowrightupperwhiteheavy": "\u27AE", - "/dngb:arrowteardropright": "\u27BA", - "/dngb:arrowteardroprightheavy": "\u27BB", - "/dngb:asteriskballoon": "\u2749", - "/dngb:asteriskballoonfour": "\u2723", - "/dngb:asteriskballoonheavyfour": "\u2724", - "/dngb:asteriskcentreopen": "\u2732", - "/dngb:asteriskclubfour": "\u2725", - "/dngb:asteriskheavy": "\u2731", - "/dngb:asteriskpointedsixteen": "\u273A", - "/dngb:asteriskteardrop": "\u273B", - "/dngb:asteriskteardropcentreopen": "\u273C", - "/dngb:asteriskteardropfour": "\u2722", - "/dngb:asteriskteardropheavy": "\u273D", - "/dngb:asteriskteardroppinwheelheavy": "\u2743", - "/dngb:asteriskteardroppropellereight": "\u274A", - "/dngb:asteriskteardroppropellerheavyeight": "\u274B", - "/dngb:ballotx": "\u2717", - "/dngb:ballotxheavy": "\u2718", - "/dngb:bracketleftpointedangleheavyornament": "\u2770", - "/dngb:bracketleftpointedanglemediumornament": "\u276C", - "/dngb:bracketrightpointedangleheavyornament": "\u2771", - "/dngb:bracketrightpointedanglemediumornament": "\u276D", - "/dngb:bracketshellleftlightornament": "\u2772", - 
"/dngb:bracketshellrightlightornament": "\u2773", - "/dngb:check": "\u2713", - "/dngb:checkheavy": "\u2714", - "/dngb:checkwhiteheavy": "\u2705", - "/dngb:chevronsnowflakeheavy": "\u2746", - "/dngb:circleshadowedwhite": "\u274D", - "/dngb:commaheavydoubleornament": "\u275E", - "/dngb:commaheavydoubleturnedornament": "\u275D", - "/dngb:commaheavyornament": "\u275C", - "/dngb:commaheavyturnedornament": "\u275B", - "/dngb:compasstarpointedblackeight": "\u2737", - "/dngb:compasstarpointedblackheavyeight": "\u2738", - "/dngb:cross": "\u274C", - "/dngb:crosscentreopen": "\u271B", - "/dngb:crosscentreopenheavy": "\u271C", - "/dngb:curlybracketleftmediumornament": "\u2774", - "/dngb:curlybracketrightmediumornament": "\u2775", - "/dngb:curlyloop": "\u27B0", - "/dngb:curlyloopdouble": "\u27BF", - "/dngb:curvedstemparagraphsignornament": "\u2761", - "/dngb:diamondminusxblackwhite": "\u2756", - "/dngb:divisionsignheavy": "\u2797", - "/dngb:eightnegativecircled": "\u277D", - "/dngb:eightsanscircled": "\u2787", - "/dngb:eightsansnegativecircled": "\u2791", - "/dngb:envelope": "\u2709", - "/dngb:exclamationheavy": "\u2757", - "/dngb:exclamationheavyornament": "\u2762", - "/dngb:exclamationwhiteornament": "\u2755", - "/dngb:fivenegativecircled": "\u277A", - "/dngb:fivesanscircled": "\u2784", - "/dngb:fivesansnegativecircled": "\u278E", - "/dngb:floralheart": "\u2766", - "/dngb:floralheartbulletrotated": "\u2767", - "/dngb:floretteblack": "\u273F", - "/dngb:floretteoutlinedpetalledblackeight": "\u2741", - "/dngb:florettepetalledblackwhitesix": "\u273E", - "/dngb:florettewhite": "\u2740", - "/dngb:fournegativecircled": "\u2779", - "/dngb:foursanscircled": "\u2783", - "/dngb:foursansnegativecircled": "\u278D", - "/dngb:greekcrossheavy": "\u271A", - "/dngb:greekcrossoutlined": "\u2719", - "/dngb:heartblackheavy": "\u2764", - "/dngb:heartbulletrotatedblackheavy": "\u2765", - "/dngb:heartexclamationheavyornament": "\u2763", - "/dngb:hvictory": "\u270C", - "/dngb:hwriting": "\u270D", - 
"/dngb:latincross": "\u271D", - "/dngb:latincrossoutlined": "\u271F", - "/dngb:latincrossshadowedwhite": "\u271E", - "/dngb:lowcommaheavydoubleornament": "\u2760", - "/dngb:lowcommaheavyornament": "\u275F", - "/dngb:maltesecross": "\u2720", - "/dngb:minussignheavy": "\u2796", - "/dngb:multiplicationx": "\u2715", - "/dngb:multiplicationxheavy": "\u2716", - "/dngb:nibblack": "\u2712", - "/dngb:nibwhite": "\u2711", - "/dngb:ninenegativecircled": "\u277E", - "/dngb:ninesanscircled": "\u2788", - "/dngb:ninesansnegativecircled": "\u2792", - "/dngb:onenegativecircled": "\u2776", - "/dngb:onesanscircled": "\u2780", - "/dngb:onesansnegativecircled": "\u278A", - "/dngb:parenthesisleftflattenedmediumornament": "\u276A", - "/dngb:parenthesisleftmediumornament": "\u2768", - "/dngb:parenthesisrightflattenedmediumornament": "\u276B", - "/dngb:parenthesisrightmediumornament": "\u2769", - "/dngb:pencil": "\u270F", - "/dngb:pencillowerright": "\u270E", - "/dngb:pencilupperright": "\u2710", - "/dngb:plussignheavy": "\u2795", - "/dngb:questionblackornament": "\u2753", - "/dngb:questionwhiteornament": "\u2754", - "/dngb:quotationleftpointedangleheavyornament": "\u276E", - "/dngb:quotationrightpointedangleheavyornament": "\u276F", - "/dngb:raisedfist": "\u270A", - "/dngb:raisedh": "\u270B", - "/dngb:safetyscissorsblack": "\u2700", - "/dngb:scissorsblack": "\u2702", - "/dngb:scissorslowerblade": "\u2703", - "/dngb:scissorsupperblade": "\u2701", - "/dngb:scissorswhite": "\u2704", - "/dngb:sevennegativecircled": "\u277C", - "/dngb:sevensanscircled": "\u2786", - "/dngb:sevensansnegativecircled": "\u2790", - "/dngb:sixnegativecircled": "\u277B", - "/dngb:sixsanscircled": "\u2785", - "/dngb:sixsansnegativecircled": "\u278F", - "/dngb:snowflake": "\u2744", - "/dngb:snowflaketight": "\u2745", - "/dngb:sparkle": "\u2747", - "/dngb:sparkleheavy": "\u2748", - "/dngb:sparkles": "\u2728", - "/dngb:spokedasteriskeight": "\u2733", - "/dngb:squaredcrossnegative": "\u274E", - 
"/dngb:squarelowerrightshadowedwhite": "\u2751", - "/dngb:squareshadowlowerrightwhite": "\u274F", - "/dngb:squareshadowupperrightwhite": "\u2750", - "/dngb:squareupperrightshadowedwhite": "\u2752", - "/dngb:starcentreblackwhite": "\u272C", - "/dngb:starcentreopenblack": "\u272B", - "/dngb:starcentreopenpointedcircledeight": "\u2742", - "/dngb:starcircledwhite": "\u272A", - "/dngb:starofdavid": "\u2721", - "/dngb:staroutlinedblack": "\u272D", - "/dngb:staroutlinedblackheavy": "\u272E", - "/dngb:staroutlinedstresswhite": "\u2729", - "/dngb:starpinwheel": "\u272F", - "/dngb:starpointedblackeight": "\u2734", - "/dngb:starpointedblackfour": "\u2726", - "/dngb:starpointedblacksix": "\u2736", - "/dngb:starpointedblacktwelve": "\u2739", - "/dngb:starpointedpinwheeleight": "\u2735", - "/dngb:starpointedwhitefour": "\u2727", - "/dngb:starshadowedwhite": "\u2730", - "/dngb:tapedrive": "\u2707", - "/dngb:telephonelocationsign": "\u2706", - "/dngb:tennegativecircled": "\u277F", - "/dngb:tensanscircled": "\u2789", - "/dngb:tensansnegativecircled": "\u2793", - "/dngb:threenegativecircled": "\u2778", - "/dngb:threesanscircled": "\u2782", - "/dngb:threesansnegativecircled": "\u278C", - "/dngb:twonegativecircled": "\u2777", - "/dngb:twosanscircled": "\u2781", - "/dngb:twosansnegativecircled": "\u278B", - "/dngb:verticalbarheavy": "\u275A", - "/dngb:verticalbarlight": "\u2758", - "/dngb:verticalbarmedium": "\u2759", - "/dnheavyhorzlight": "\u2530", - "/dnheavyleftlight": "\u2512", - "/dnheavyleftuplight": "\u2527", - "/dnheavyrightlight": "\u250E", - "/dnheavyrightuplight": "\u251F", - "/dnheavyuphorzlight": "\u2541", - "/dnlighthorzheavy": "\u252F", - "/dnlightleftheavy": "\u2511", - "/dnlightleftupheavy": "\u2529", - "/dnlightrightheavy": "\u250D", - "/dnlightrightupheavy": "\u2521", - "/dnlightuphorzheavy": "\u2547", - "/dnsnghorzdbl": "\u2564", - "/dnsngleftdbl": "\u2555", - "/dnsngrightdbl": "\u2552", - "/doNotLitter": "\u1F6AF", - "/dochadathai": "\u0E0E", - "/document": 
"\u1F5CE", - "/documentPicture": "\u1F5BB", - "/documentText": "\u1F5B9", - "/documentTextAndPicture": "\u1F5BA", - "/dodekthai": "\u0E14", - "/doesnotcontainasnormalsubgroorequalup": "\u22ED", - "/doesnotcontainasnormalsubgroup": "\u22EB", - "/doesnotdivide": "\u2224", - "/doesnotforce": "\u22AE", - "/doesnotprecede": "\u2280", - "/doesnotprecedeorequal": "\u22E0", - "/doesnotprove": "\u22AC", - "/doesnotsucceed": "\u2281", - "/doesnotsucceedorequal": "\u22E1", - "/dog": "\u1F415", - "/dogFace": "\u1F436", - "/dohiragana": "\u3069", - "/dokatakana": "\u30C9", - "/dollar": "\u0024", - "/dollarinferior": "\uF6E3", - "/dollarmonospace": "\uFF04", - "/dollaroldstyle": "\uF724", - "/dollarsmall": "\uFE69", - "/dollarsuperior": "\uF6E4", - "/dolphin": "\u1F42C", - "/dominohorizontal_00_00": "\u1F031", - "/dominohorizontal_00_01": "\u1F032", - "/dominohorizontal_00_02": "\u1F033", - "/dominohorizontal_00_03": "\u1F034", - "/dominohorizontal_00_04": "\u1F035", - "/dominohorizontal_00_05": "\u1F036", - "/dominohorizontal_00_06": "\u1F037", - "/dominohorizontal_01_00": "\u1F038", - "/dominohorizontal_01_01": "\u1F039", - "/dominohorizontal_01_02": "\u1F03A", - "/dominohorizontal_01_03": "\u1F03B", - "/dominohorizontal_01_04": "\u1F03C", - "/dominohorizontal_01_05": "\u1F03D", - "/dominohorizontal_01_06": "\u1F03E", - "/dominohorizontal_02_00": "\u1F03F", - "/dominohorizontal_02_01": "\u1F040", - "/dominohorizontal_02_02": "\u1F041", - "/dominohorizontal_02_03": "\u1F042", - "/dominohorizontal_02_04": "\u1F043", - "/dominohorizontal_02_05": "\u1F044", - "/dominohorizontal_02_06": "\u1F045", - "/dominohorizontal_03_00": "\u1F046", - "/dominohorizontal_03_01": "\u1F047", - "/dominohorizontal_03_02": "\u1F048", - "/dominohorizontal_03_03": "\u1F049", - "/dominohorizontal_03_04": "\u1F04A", - "/dominohorizontal_03_05": "\u1F04B", - "/dominohorizontal_03_06": "\u1F04C", - "/dominohorizontal_04_00": "\u1F04D", - "/dominohorizontal_04_01": "\u1F04E", - "/dominohorizontal_04_02": 
"\u1F04F", - "/dominohorizontal_04_03": "\u1F050", - "/dominohorizontal_04_04": "\u1F051", - "/dominohorizontal_04_05": "\u1F052", - "/dominohorizontal_04_06": "\u1F053", - "/dominohorizontal_05_00": "\u1F054", - "/dominohorizontal_05_01": "\u1F055", - "/dominohorizontal_05_02": "\u1F056", - "/dominohorizontal_05_03": "\u1F057", - "/dominohorizontal_05_04": "\u1F058", - "/dominohorizontal_05_05": "\u1F059", - "/dominohorizontal_05_06": "\u1F05A", - "/dominohorizontal_06_00": "\u1F05B", - "/dominohorizontal_06_01": "\u1F05C", - "/dominohorizontal_06_02": "\u1F05D", - "/dominohorizontal_06_03": "\u1F05E", - "/dominohorizontal_06_04": "\u1F05F", - "/dominohorizontal_06_05": "\u1F060", - "/dominohorizontal_06_06": "\u1F061", - "/dominohorizontalback": "\u1F030", - "/dominovertical_00_00": "\u1F063", - "/dominovertical_00_01": "\u1F064", - "/dominovertical_00_02": "\u1F065", - "/dominovertical_00_03": "\u1F066", - "/dominovertical_00_04": "\u1F067", - "/dominovertical_00_05": "\u1F068", - "/dominovertical_00_06": "\u1F069", - "/dominovertical_01_00": "\u1F06A", - "/dominovertical_01_01": "\u1F06B", - "/dominovertical_01_02": "\u1F06C", - "/dominovertical_01_03": "\u1F06D", - "/dominovertical_01_04": "\u1F06E", - "/dominovertical_01_05": "\u1F06F", - "/dominovertical_01_06": "\u1F070", - "/dominovertical_02_00": "\u1F071", - "/dominovertical_02_01": "\u1F072", - "/dominovertical_02_02": "\u1F073", - "/dominovertical_02_03": "\u1F074", - "/dominovertical_02_04": "\u1F075", - "/dominovertical_02_05": "\u1F076", - "/dominovertical_02_06": "\u1F077", - "/dominovertical_03_00": "\u1F078", - "/dominovertical_03_01": "\u1F079", - "/dominovertical_03_02": "\u1F07A", - "/dominovertical_03_03": "\u1F07B", - "/dominovertical_03_04": "\u1F07C", - "/dominovertical_03_05": "\u1F07D", - "/dominovertical_03_06": "\u1F07E", - "/dominovertical_04_00": "\u1F07F", - "/dominovertical_04_01": "\u1F080", - "/dominovertical_04_02": "\u1F081", - "/dominovertical_04_03": "\u1F082", - 
"/dominovertical_04_04": "\u1F083", - "/dominovertical_04_05": "\u1F084", - "/dominovertical_04_06": "\u1F085", - "/dominovertical_05_00": "\u1F086", - "/dominovertical_05_01": "\u1F087", - "/dominovertical_05_02": "\u1F088", - "/dominovertical_05_03": "\u1F089", - "/dominovertical_05_04": "\u1F08A", - "/dominovertical_05_05": "\u1F08B", - "/dominovertical_05_06": "\u1F08C", - "/dominovertical_06_00": "\u1F08D", - "/dominovertical_06_01": "\u1F08E", - "/dominovertical_06_02": "\u1F08F", - "/dominovertical_06_03": "\u1F090", - "/dominovertical_06_04": "\u1F091", - "/dominovertical_06_05": "\u1F092", - "/dominovertical_06_06": "\u1F093", - "/dominoverticalback": "\u1F062", - "/dong": "\u20AB", - "/door": "\u1F6AA", - "/dorusquare": "\u3326", - "/dot": "\u27D1", - "/dotaccent": "\u02D9", - "/dotaccentcmb": "\u0307", - "/dotbelowcmb": "\u0323", - "/dotbelowcomb": "\u0323", - "/dotkatakana": "\u30FB", - "/dotlessbeh": "\u066E", - "/dotlessfeh": "\u06A1", - "/dotlessi": "\u0131", - "/dotlessj": "\uF6BE", - "/dotlessjstroke": "\u025F", - "/dotlessjstrokehook": "\u0284", - "/dotlesskhahabove": "\u06E1", - "/dotlessqaf": "\u066F", - "/dotlower:hb": "\u05C5", - "/dotmath": "\u22C5", - "/dotminus": "\u2238", - "/dotplus": "\u2214", - "/dotraised": "\u2E33", - "/dots1": "\u2801", - "/dots12": "\u2803", - "/dots123": "\u2807", - "/dots1234": "\u280F", - "/dots12345": "\u281F", - "/dots123456": "\u283F", - "/dots1234567": "\u287F", - "/dots12345678": "\u28FF", - "/dots1234568": "\u28BF", - "/dots123457": "\u285F", - "/dots1234578": "\u28DF", - "/dots123458": "\u289F", - "/dots12346": "\u282F", - "/dots123467": "\u286F", - "/dots1234678": "\u28EF", - "/dots123468": "\u28AF", - "/dots12347": "\u284F", - "/dots123478": "\u28CF", - "/dots12348": "\u288F", - "/dots1235": "\u2817", - "/dots12356": "\u2837", - "/dots123567": "\u2877", - "/dots1235678": "\u28F7", - "/dots123568": "\u28B7", - "/dots12357": "\u2857", - "/dots123578": "\u28D7", - "/dots12358": "\u2897", - "/dots1236": 
"\u2827", - "/dots12367": "\u2867", - "/dots123678": "\u28E7", - "/dots12368": "\u28A7", - "/dots1237": "\u2847", - "/dots12378": "\u28C7", - "/dots1238": "\u2887", - "/dots124": "\u280B", - "/dots1245": "\u281B", - "/dots12456": "\u283B", - "/dots124567": "\u287B", - "/dots1245678": "\u28FB", - "/dots124568": "\u28BB", - "/dots12457": "\u285B", - "/dots124578": "\u28DB", - "/dots12458": "\u289B", - "/dots1246": "\u282B", - "/dots12467": "\u286B", - "/dots124678": "\u28EB", - "/dots12468": "\u28AB", - "/dots1247": "\u284B", - "/dots12478": "\u28CB", - "/dots1248": "\u288B", - "/dots125": "\u2813", - "/dots1256": "\u2833", - "/dots12567": "\u2873", - "/dots125678": "\u28F3", - "/dots12568": "\u28B3", - "/dots1257": "\u2853", - "/dots12578": "\u28D3", - "/dots1258": "\u2893", - "/dots126": "\u2823", - "/dots1267": "\u2863", - "/dots12678": "\u28E3", - "/dots1268": "\u28A3", - "/dots127": "\u2843", - "/dots1278": "\u28C3", - "/dots128": "\u2883", - "/dots13": "\u2805", - "/dots134": "\u280D", - "/dots1345": "\u281D", - "/dots13456": "\u283D", - "/dots134567": "\u287D", - "/dots1345678": "\u28FD", - "/dots134568": "\u28BD", - "/dots13457": "\u285D", - "/dots134578": "\u28DD", - "/dots13458": "\u289D", - "/dots1346": "\u282D", - "/dots13467": "\u286D", - "/dots134678": "\u28ED", - "/dots13468": "\u28AD", - "/dots1347": "\u284D", - "/dots13478": "\u28CD", - "/dots1348": "\u288D", - "/dots135": "\u2815", - "/dots1356": "\u2835", - "/dots13567": "\u2875", - "/dots135678": "\u28F5", - "/dots13568": "\u28B5", - "/dots1357": "\u2855", - "/dots13578": "\u28D5", - "/dots1358": "\u2895", - "/dots136": "\u2825", - "/dots1367": "\u2865", - "/dots13678": "\u28E5", - "/dots1368": "\u28A5", - "/dots137": "\u2845", - "/dots1378": "\u28C5", - "/dots138": "\u2885", - "/dots14": "\u2809", - "/dots145": "\u2819", - "/dots1456": "\u2839", - "/dots14567": "\u2879", - "/dots145678": "\u28F9", - "/dots14568": "\u28B9", - "/dots1457": "\u2859", - "/dots14578": "\u28D9", - "/dots1458": 
"\u2899", - "/dots146": "\u2829", - "/dots1467": "\u2869", - "/dots14678": "\u28E9", - "/dots1468": "\u28A9", - "/dots147": "\u2849", - "/dots1478": "\u28C9", - "/dots148": "\u2889", - "/dots15": "\u2811", - "/dots156": "\u2831", - "/dots1567": "\u2871", - "/dots15678": "\u28F1", - "/dots1568": "\u28B1", - "/dots157": "\u2851", - "/dots1578": "\u28D1", - "/dots158": "\u2891", - "/dots16": "\u2821", - "/dots167": "\u2861", - "/dots1678": "\u28E1", - "/dots168": "\u28A1", - "/dots17": "\u2841", - "/dots178": "\u28C1", - "/dots18": "\u2881", - "/dots2": "\u2802", - "/dots23": "\u2806", - "/dots234": "\u280E", - "/dots2345": "\u281E", - "/dots23456": "\u283E", - "/dots234567": "\u287E", - "/dots2345678": "\u28FE", - "/dots234568": "\u28BE", - "/dots23457": "\u285E", - "/dots234578": "\u28DE", - "/dots23458": "\u289E", - "/dots2346": "\u282E", - "/dots23467": "\u286E", - "/dots234678": "\u28EE", - "/dots23468": "\u28AE", - "/dots2347": "\u284E", - "/dots23478": "\u28CE", - "/dots2348": "\u288E", - "/dots235": "\u2816", - "/dots2356": "\u2836", - "/dots23567": "\u2876", - "/dots235678": "\u28F6", - "/dots23568": "\u28B6", - "/dots2357": "\u2856", - "/dots23578": "\u28D6", - "/dots2358": "\u2896", - "/dots236": "\u2826", - "/dots2367": "\u2866", - "/dots23678": "\u28E6", - "/dots2368": "\u28A6", - "/dots237": "\u2846", - "/dots2378": "\u28C6", - "/dots238": "\u2886", - "/dots24": "\u280A", - "/dots245": "\u281A", - "/dots2456": "\u283A", - "/dots24567": "\u287A", - "/dots245678": "\u28FA", - "/dots24568": "\u28BA", - "/dots2457": "\u285A", - "/dots24578": "\u28DA", - "/dots2458": "\u289A", - "/dots246": "\u282A", - "/dots2467": "\u286A", - "/dots24678": "\u28EA", - "/dots2468": "\u28AA", - "/dots247": "\u284A", - "/dots2478": "\u28CA", - "/dots248": "\u288A", - "/dots25": "\u2812", - "/dots256": "\u2832", - "/dots2567": "\u2872", - "/dots25678": "\u28F2", - "/dots2568": "\u28B2", - "/dots257": "\u2852", - "/dots2578": "\u28D2", - "/dots258": "\u2892", - "/dots26": 
"\u2822", - "/dots267": "\u2862", - "/dots2678": "\u28E2", - "/dots268": "\u28A2", - "/dots27": "\u2842", - "/dots278": "\u28C2", - "/dots28": "\u2882", - "/dots3": "\u2804", - "/dots34": "\u280C", - "/dots345": "\u281C", - "/dots3456": "\u283C", - "/dots34567": "\u287C", - "/dots345678": "\u28FC", - "/dots34568": "\u28BC", - "/dots3457": "\u285C", - "/dots34578": "\u28DC", - "/dots3458": "\u289C", - "/dots346": "\u282C", - "/dots3467": "\u286C", - "/dots34678": "\u28EC", - "/dots3468": "\u28AC", - "/dots347": "\u284C", - "/dots3478": "\u28CC", - "/dots348": "\u288C", - "/dots35": "\u2814", - "/dots356": "\u2834", - "/dots3567": "\u2874", - "/dots35678": "\u28F4", - "/dots3568": "\u28B4", - "/dots357": "\u2854", - "/dots3578": "\u28D4", - "/dots358": "\u2894", - "/dots36": "\u2824", - "/dots367": "\u2864", - "/dots3678": "\u28E4", - "/dots368": "\u28A4", - "/dots37": "\u2844", - "/dots378": "\u28C4", - "/dots38": "\u2884", - "/dots4": "\u2808", - "/dots45": "\u2818", - "/dots456": "\u2838", - "/dots4567": "\u2878", - "/dots45678": "\u28F8", - "/dots4568": "\u28B8", - "/dots457": "\u2858", - "/dots4578": "\u28D8", - "/dots458": "\u2898", - "/dots46": "\u2828", - "/dots467": "\u2868", - "/dots4678": "\u28E8", - "/dots468": "\u28A8", - "/dots47": "\u2848", - "/dots478": "\u28C8", - "/dots48": "\u2888", - "/dots5": "\u2810", - "/dots56": "\u2830", - "/dots567": "\u2870", - "/dots5678": "\u28F0", - "/dots568": "\u28B0", - "/dots57": "\u2850", - "/dots578": "\u28D0", - "/dots58": "\u2890", - "/dots6": "\u2820", - "/dots67": "\u2860", - "/dots678": "\u28E0", - "/dots68": "\u28A0", - "/dots7": "\u2840", - "/dots78": "\u28C0", - "/dots8": "\u2880", - "/dotsquarefour": "\u2E2C", - "/dottedcircle": "\u25CC", - "/dottedcross": "\u205C", - "/dotupper:hb": "\u05C4", - "/doublebarvertical": "\u23F8", - "/doubleyodpatah": "\uFB1F", - "/doubleyodpatahhebrew": "\uFB1F", - "/doughnut": "\u1F369", - "/doveOfPeace": "\u1F54A", - "/downtackbelowcmb": "\u031E", - "/downtackmod": 
"\u02D5", - "/downwarrowleftofuparrow": "\u21F5", - "/dparen": "\u249F", - "/dparenthesized": "\u249F", - "/drachma": "\u20AF", - "/dragon": "\u1F409", - "/dragonFace": "\u1F432", - "/draughtskingblack": "\u26C3", - "/draughtskingwhite": "\u26C1", - "/draughtsmanblack": "\u26C2", - "/draughtsmanwhite": "\u26C0", - "/dress": "\u1F457", - "/driveslow": "\u26DA", - "/dromedaryCamel": "\u1F42A", - "/droplet": "\u1F4A7", - "/dsquare": "\u1F1A5", - "/dsuperior": "\uF6EB", - "/dtail": "\u0256", - "/dtopbar": "\u018C", - "/duhiragana": "\u3065", - "/dukatakana": "\u30C5", - "/dul": "\u068E", - "/dul.fina": "\uFB87", - "/dul.isol": "\uFB86", - "/dum": "\uA771", - "/dvd": "\u1F4C0", - "/dyeh": "\u0684", - "/dyeh.fina": "\uFB73", - "/dyeh.init": "\uFB74", - "/dyeh.isol": "\uFB72", - "/dyeh.medi": "\uFB75", - "/dz": "\u01F3", - "/dzaltone": "\u02A3", - "/dzcaron": "\u01C6", - "/dzcurl": "\u02A5", - "/dzeabkhasiancyrillic": "\u04E1", - "/dzeabkhcyr": "\u04E1", - "/dzecyr": "\u0455", - "/dzecyrillic": "\u0455", - "/dzed": "\u02A3", - "/dzedcurl": "\u02A5", - "/dzhecyr": "\u045F", - "/dzhecyrillic": "\u045F", - "/dzjekomicyr": "\u0507", - "/dzzhecyr": "\u052B", - "/e": "\u0065", - "/e-mail": "\u1F4E7", - "/e.fina": "\uFBE5", - "/e.inferior": "\u2091", - "/e.init": "\uFBE6", - "/e.isol": "\uFBE4", - "/e.medi": "\uFBE7", - "/eVfullwidth": "\u32CE", - "/eacute": "\u00E9", - "/earOfMaize": "\u1F33D", - "/earOfRice": "\u1F33E", - "/earth": "\u2641", - "/earthGlobeAmericas": "\u1F30E", - "/earthGlobeAsiaAustralia": "\u1F30F", - "/earthGlobeEuropeAfrica": "\u1F30D", - "/earthground": "\u23DA", - "/earthideographiccircled": "\u328F", - "/earthideographicparen": "\u322F", - "/eastsyriaccross": "\u2671", - "/ebengali": "\u098F", - "/ebopomofo": "\u311C", - "/ebreve": "\u0115", - "/ecandradeva": "\u090D", - "/ecandragujarati": "\u0A8D", - "/ecandravowelsigndeva": "\u0945", - "/ecandravowelsigngujarati": "\u0AC5", - "/ecaron": "\u011B", - "/ecedilla": "\u0229", - "/ecedillabreve": "\u1E1D", 
- "/echarmenian": "\u0565", - "/echyiwnarmenian": "\u0587", - "/ecircle": "\u24D4", - "/ecirclekatakana": "\u32D3", - "/ecircumflex": "\u00EA", - "/ecircumflexacute": "\u1EBF", - "/ecircumflexbelow": "\u1E19", - "/ecircumflexdotbelow": "\u1EC7", - "/ecircumflexgrave": "\u1EC1", - "/ecircumflexhoi": "\u1EC3", - "/ecircumflexhookabove": "\u1EC3", - "/ecircumflextilde": "\u1EC5", - "/ecyrillic": "\u0454", - "/edblgrave": "\u0205", - "/edblstruckitalic": "\u2147", - "/edeva": "\u090F", - "/edieresis": "\u00EB", - "/edot": "\u0117", - "/edotaccent": "\u0117", - "/edotbelow": "\u1EB9", - "/eegurmukhi": "\u0A0F", - "/eekaasquare": "\u3308", - "/eematragurmukhi": "\u0A47", - "/efcyr": "\u0444", - "/efcyrillic": "\u0444", - "/egrave": "\u00E8", - "/egravedbl": "\u0205", - "/egujarati": "\u0A8F", - "/egyptain": "\uA725", - "/egyptalef": "\uA723", - "/eharmenian": "\u0567", - "/ehbopomofo": "\u311D", - "/ehiragana": "\u3048", - "/ehoi": "\u1EBB", - "/ehookabove": "\u1EBB", - "/eibopomofo": "\u311F", - "/eight": "\u0038", - "/eight.inferior": "\u2088", - "/eight.roman": "\u2167", - "/eight.romansmall": "\u2177", - "/eight.superior": "\u2078", - "/eightarabic": "\u0668", - "/eightbengali": "\u09EE", - "/eightcircle": "\u2467", - "/eightcircledbl": "\u24FC", - "/eightcircleinversesansserif": "\u2791", - "/eightcomma": "\u1F109", - "/eightdeva": "\u096E", - "/eighteencircle": "\u2471", - "/eighteencircleblack": "\u24F2", - "/eighteenparen": "\u2485", - "/eighteenparenthesized": "\u2485", - "/eighteenperiod": "\u2499", - "/eightfar": "\u06F8", - "/eightgujarati": "\u0AEE", - "/eightgurmukhi": "\u0A6E", - "/eighthackarabic": "\u0668", - "/eighthangzhou": "\u3028", - "/eighthnote": "\u266A", - "/eighthnotebeamed": "\u266B", - "/eightideographiccircled": "\u3287", - "/eightideographicparen": "\u3227", - "/eightinferior": "\u2088", - "/eightksquare": "\u1F19F", - "/eightmonospace": "\uFF18", - "/eightoldstyle": "\uF738", - "/eightparen": "\u247B", - "/eightparenthesized": "\u247B", - 
"/eightperiod": "\u248F", - "/eightpersian": "\u06F8", - "/eightroman": "\u2177", - "/eightsuperior": "\u2078", - "/eightthai": "\u0E58", - "/eightycirclesquare": "\u324F", - "/einvertedbreve": "\u0207", - "/eiotifiedcyr": "\u0465", - "/eiotifiedcyrillic": "\u0465", - "/eject": "\u23CF", - "/ekatakana": "\u30A8", - "/ekatakanahalfwidth": "\uFF74", - "/ekonkargurmukhi": "\u0A74", - "/ekorean": "\u3154", - "/elcyr": "\u043B", - "/elcyrillic": "\u043B", - "/electricLightBulb": "\u1F4A1", - "/electricPlug": "\u1F50C", - "/electricTorch": "\u1F526", - "/electricalintersection": "\u23E7", - "/electricarrow": "\u2301", - "/element": "\u2208", - "/elementdotabove": "\u22F5", - "/elementlonghorizontalstroke": "\u22F2", - "/elementopeningup": "\u27D2", - "/elementoverbar": "\u22F6", - "/elementoverbarsmall": "\u22F7", - "/elementsmall": "\u220A", - "/elementsmallverticalbarhorizontalstroke": "\u22F4", - "/elementtwoshorizontalstroke": "\u22F9", - "/elementunderbar": "\u22F8", - "/elementverticalbarhorizontalstroke": "\u22F3", - "/elephant": "\u1F418", - "/eleven.roman": "\u216A", - "/eleven.romansmall": "\u217A", - "/elevencircle": "\u246A", - "/elevencircleblack": "\u24EB", - "/elevenparen": "\u247E", - "/elevenparenthesized": "\u247E", - "/elevenperiod": "\u2492", - "/elevenroman": "\u217A", - "/elhookcyr": "\u0513", - "/ellipsis": "\u2026", - "/ellipsisdiagonaldownright": "\u22F1", - "/ellipsisdiagonalupright": "\u22F0", - "/ellipsismidhorizontal": "\u22EF", - "/ellipsisvertical": "\u22EE", - "/elmiddlehookcyr": "\u0521", - "/elsharptailcyr": "\u04C6", - "/eltailcyr": "\u052F", - "/emacron": "\u0113", - "/emacronacute": "\u1E17", - "/emacrongrave": "\u1E15", - "/emcyr": "\u043C", - "/emcyrillic": "\u043C", - "/emdash": "\u2014", - "/emdashdbl": "\u2E3A", - "/emdashtpl": "\u2E3B", - "/emdashvertical": "\uFE31", - "/emojiModifierFitzpatrickType-1-2": "\u1F3FB", - "/emojiModifierFitzpatrickType-3": "\u1F3FC", - "/emojiModifierFitzpatrickType-4": "\u1F3FD", - 
"/emojiModifierFitzpatrickType-5": "\u1F3FE", - "/emojiModifierFitzpatrickType-6": "\u1F3FF", - "/emonospace": "\uFF45", - "/emphasis": "\u2383", - "/emphasismarkarmenian": "\u055B", - "/emptyDocument": "\u1F5CB", - "/emptyNote": "\u1F5C5", - "/emptyNotePad": "\u1F5C7", - "/emptyNotePage": "\u1F5C6", - "/emptyPage": "\u1F5CC", - "/emptyPages": "\u1F5CD", - "/emptyset": "\u2205", - "/emquad": "\u2001", - "/emsharptailcyr": "\u04CE", - "/emspace": "\u2003", - "/enbopomofo": "\u3123", - "/encyr": "\u043D", - "/encyrillic": "\u043D", - "/endLeftwardsArrowAbove": "\u1F51A", - "/endash": "\u2013", - "/endashvertical": "\uFE32", - "/endescendercyrillic": "\u04A3", - "/endpro": "\u220E", - "/eng": "\u014B", - "/engbopomofo": "\u3125", - "/engecyr": "\u04A5", - "/enghecyrillic": "\u04A5", - "/enhookcyr": "\u04C8", - "/enhookcyrillic": "\u04C8", - "/enhookleftcyr": "\u0529", - "/enmiddlehookcyr": "\u0523", - "/enotch": "\u2C78", - "/enquad": "\u2000", - "/ensharptailcyr": "\u04CA", - "/enspace": "\u2002", - "/entailcyr": "\u04A3", - "/enter": "\u2386", - "/enterpriseideographiccircled": "\u32AD", - "/enterpriseideographicparen": "\u323D", - "/envelopeDownwardsArrowAbove": "\u1F4E9", - "/envelopeLightning": "\u1F584", - "/eogonek": "\u0119", - "/eokorean": "\u3153", - "/eopen": "\u025B", - "/eopenclosed": "\u029A", - "/eopenreversed": "\u025C", - "/eopenreversedclosed": "\u025E", - "/eopenreversedhook": "\u025D", - "/eparen": "\u24A0", - "/eparenthesized": "\u24A0", - "/epsilon": "\u03B5", - "/epsilonacute": "\u1F73", - "/epsilonasper": "\u1F11", - "/epsilonasperacute": "\u1F15", - "/epsilonaspergrave": "\u1F13", - "/epsilongrave": "\u1F72", - "/epsilonlenis": "\u1F10", - "/epsilonlenisacute": "\u1F14", - "/epsilonlenisgrave": "\u1F12", - "/epsilonlunatesymbol": "\u03F5", - "/epsilonreversedlunatesymbol": "\u03F6", - "/epsilontonos": "\u03AD", - "/epsilonunderlinefunc": "\u2377", - "/equal": "\u003D", - "/equal.inferior": "\u208C", - "/equal.superior": "\u207C", - 
"/equalandparallel": "\u22D5", - "/equalbydefinition": "\u225D", - "/equalmonospace": "\uFF1D", - "/equalorgreater": "\u22DD", - "/equalorless": "\u22DC", - "/equalorprecedes": "\u22DE", - "/equalorsucceeds": "\u22DF", - "/equalscolon": "\u2255", - "/equalsmall": "\uFE66", - "/equalsuperior": "\u207C", - "/equiangular": "\u225A", - "/equivalence": "\u2261", - "/equivalent": "\u224D", - "/eranameheiseisquare": "\u337B", - "/eranamemeizisquare": "\u337E", - "/eranamesyouwasquare": "\u337C", - "/eranametaisyousquare": "\u337D", - "/eraseleft": "\u232B", - "/eraseright": "\u2326", - "/erbopomofo": "\u3126", - "/ercyr": "\u0440", - "/ercyrillic": "\u0440", - "/ereversed": "\u0258", - "/ereversedcyr": "\u044D", - "/ereversedcyrillic": "\u044D", - "/ereverseddieresiscyr": "\u04ED", - "/ergfullwidth": "\u32CD", - "/ertickcyr": "\u048F", - "/escript": "\u212F", - "/escyr": "\u0441", - "/escyrillic": "\u0441", - "/esdescendercyrillic": "\u04AB", - "/esh": "\u0283", - "/eshcurl": "\u0286", - "/eshortdeva": "\u090E", - "/eshortvowelsigndeva": "\u0946", - "/eshreversedloop": "\u01AA", - "/eshsquatreversed": "\u0285", - "/esmallhiragana": "\u3047", - "/esmallkatakana": "\u30A7", - "/esmallkatakanahalfwidth": "\uFF6A", - "/estailcyr": "\u04AB", - "/estimated": "\u212E", - "/estimates": "\u2259", - "/estroke": "\u0247", - "/esukuudosquare": "\u3307", - "/esuperior": "\uF6EC", - "/et": "\uA76B", - "/eta": "\u03B7", - "/etaacute": "\u1F75", - "/etaacuteiotasub": "\u1FC4", - "/etaasper": "\u1F21", - "/etaasperacute": "\u1F25", - "/etaasperacuteiotasub": "\u1F95", - "/etaaspergrave": "\u1F23", - "/etaaspergraveiotasub": "\u1F93", - "/etaasperiotasub": "\u1F91", - "/etaaspertilde": "\u1F27", - "/etaaspertildeiotasub": "\u1F97", - "/etagrave": "\u1F74", - "/etagraveiotasub": "\u1FC2", - "/etaiotasub": "\u1FC3", - "/etalenis": "\u1F20", - "/etalenisacute": "\u1F24", - "/etalenisacuteiotasub": "\u1F94", - "/etalenisgrave": "\u1F22", - "/etalenisgraveiotasub": "\u1F92", - 
"/etalenisiotasub": "\u1F90", - "/etalenistilde": "\u1F26", - "/etalenistildeiotasub": "\u1F96", - "/etarmenian": "\u0568", - "/etatilde": "\u1FC6", - "/etatildeiotasub": "\u1FC7", - "/etatonos": "\u03AE", - "/eth": "\u00F0", - "/ethi:aaglottal": "\u12A3", - "/ethi:aglottal": "\u12A0", - "/ethi:ba": "\u1260", - "/ethi:baa": "\u1263", - "/ethi:be": "\u1265", - "/ethi:bee": "\u1264", - "/ethi:bi": "\u1262", - "/ethi:bo": "\u1266", - "/ethi:bu": "\u1261", - "/ethi:bwa": "\u1267", - "/ethi:ca": "\u1278", - "/ethi:caa": "\u127B", - "/ethi:ce": "\u127D", - "/ethi:cee": "\u127C", - "/ethi:cha": "\u1328", - "/ethi:chaa": "\u132B", - "/ethi:che": "\u132D", - "/ethi:chee": "\u132C", - "/ethi:chi": "\u132A", - "/ethi:cho": "\u132E", - "/ethi:chu": "\u1329", - "/ethi:chwa": "\u132F", - "/ethi:ci": "\u127A", - "/ethi:co": "\u127E", - "/ethi:colon": "\u1365", - "/ethi:comma": "\u1363", - "/ethi:cu": "\u1279", - "/ethi:cwa": "\u127F", - "/ethi:da": "\u12F0", - "/ethi:daa": "\u12F3", - "/ethi:dda": "\u12F8", - "/ethi:ddaa": "\u12FB", - "/ethi:dde": "\u12FD", - "/ethi:ddee": "\u12FC", - "/ethi:ddi": "\u12FA", - "/ethi:ddo": "\u12FE", - "/ethi:ddu": "\u12F9", - "/ethi:ddwa": "\u12FF", - "/ethi:de": "\u12F5", - "/ethi:dee": "\u12F4", - "/ethi:di": "\u12F2", - "/ethi:do": "\u12F6", - "/ethi:du": "\u12F1", - "/ethi:dwa": "\u12F7", - "/ethi:eeglottal": "\u12A4", - "/ethi:eglottal": "\u12A5", - "/ethi:eight": "\u1370", - "/ethi:eighty": "\u1379", - "/ethi:fa": "\u1348", - "/ethi:faa": "\u134B", - "/ethi:fe": "\u134D", - "/ethi:fee": "\u134C", - "/ethi:fi": "\u134A", - "/ethi:fifty": "\u1376", - "/ethi:five": "\u136D", - "/ethi:fo": "\u134E", - "/ethi:forty": "\u1375", - "/ethi:four": "\u136C", - "/ethi:fu": "\u1349", - "/ethi:fullstop": "\u1362", - "/ethi:fwa": "\u134F", - "/ethi:fya": "\u135A", - "/ethi:ga": "\u1308", - "/ethi:gaa": "\u130B", - "/ethi:ge": "\u130D", - "/ethi:gee": "\u130C", - "/ethi:geminationandvowellengthmarkcmb": "\u135D", - "/ethi:geminationmarkcmb": "\u135F", - 
"/ethi:gga": "\u1318", - "/ethi:ggaa": "\u131B", - "/ethi:gge": "\u131D", - "/ethi:ggee": "\u131C", - "/ethi:ggi": "\u131A", - "/ethi:ggo": "\u131E", - "/ethi:ggu": "\u1319", - "/ethi:ggwaa": "\u131F", - "/ethi:gi": "\u130A", - "/ethi:go": "\u130E", - "/ethi:goa": "\u130F", - "/ethi:gu": "\u1309", - "/ethi:gwa": "\u1310", - "/ethi:gwaa": "\u1313", - "/ethi:gwe": "\u1315", - "/ethi:gwee": "\u1314", - "/ethi:gwi": "\u1312", - "/ethi:ha": "\u1200", - "/ethi:haa": "\u1203", - "/ethi:he": "\u1205", - "/ethi:hee": "\u1204", - "/ethi:hha": "\u1210", - "/ethi:hhaa": "\u1213", - "/ethi:hhe": "\u1215", - "/ethi:hhee": "\u1214", - "/ethi:hhi": "\u1212", - "/ethi:hho": "\u1216", - "/ethi:hhu": "\u1211", - "/ethi:hhwa": "\u1217", - "/ethi:hi": "\u1202", - "/ethi:ho": "\u1206", - "/ethi:hoa": "\u1207", - "/ethi:hu": "\u1201", - "/ethi:hundred": "\u137B", - "/ethi:iglottal": "\u12A2", - "/ethi:ja": "\u1300", - "/ethi:jaa": "\u1303", - "/ethi:je": "\u1305", - "/ethi:jee": "\u1304", - "/ethi:ji": "\u1302", - "/ethi:jo": "\u1306", - "/ethi:ju": "\u1301", - "/ethi:jwa": "\u1307", - "/ethi:ka": "\u12A8", - "/ethi:kaa": "\u12AB", - "/ethi:ke": "\u12AD", - "/ethi:kee": "\u12AC", - "/ethi:ki": "\u12AA", - "/ethi:ko": "\u12AE", - "/ethi:koa": "\u12AF", - "/ethi:ku": "\u12A9", - "/ethi:kwa": "\u12B0", - "/ethi:kwaa": "\u12B3", - "/ethi:kwe": "\u12B5", - "/ethi:kwee": "\u12B4", - "/ethi:kwi": "\u12B2", - "/ethi:kxa": "\u12B8", - "/ethi:kxaa": "\u12BB", - "/ethi:kxe": "\u12BD", - "/ethi:kxee": "\u12BC", - "/ethi:kxi": "\u12BA", - "/ethi:kxo": "\u12BE", - "/ethi:kxu": "\u12B9", - "/ethi:kxwa": "\u12C0", - "/ethi:kxwaa": "\u12C3", - "/ethi:kxwe": "\u12C5", - "/ethi:kxwee": "\u12C4", - "/ethi:kxwi": "\u12C2", - "/ethi:la": "\u1208", - "/ethi:laa": "\u120B", - "/ethi:le": "\u120D", - "/ethi:lee": "\u120C", - "/ethi:li": "\u120A", - "/ethi:lo": "\u120E", - "/ethi:lu": "\u1209", - "/ethi:lwa": "\u120F", - "/ethi:ma": "\u1218", - "/ethi:maa": "\u121B", - "/ethi:me": "\u121D", - "/ethi:mee": 
"\u121C", - "/ethi:mi": "\u121A", - "/ethi:mo": "\u121E", - "/ethi:mu": "\u1219", - "/ethi:mwa": "\u121F", - "/ethi:mya": "\u1359", - "/ethi:na": "\u1290", - "/ethi:naa": "\u1293", - "/ethi:ne": "\u1295", - "/ethi:nee": "\u1294", - "/ethi:ni": "\u1292", - "/ethi:nine": "\u1371", - "/ethi:ninety": "\u137A", - "/ethi:no": "\u1296", - "/ethi:nu": "\u1291", - "/ethi:nwa": "\u1297", - "/ethi:nya": "\u1298", - "/ethi:nyaa": "\u129B", - "/ethi:nye": "\u129D", - "/ethi:nyee": "\u129C", - "/ethi:nyi": "\u129A", - "/ethi:nyo": "\u129E", - "/ethi:nyu": "\u1299", - "/ethi:nywa": "\u129F", - "/ethi:oglottal": "\u12A6", - "/ethi:one": "\u1369", - "/ethi:pa": "\u1350", - "/ethi:paa": "\u1353", - "/ethi:paragraphseparator": "\u1368", - "/ethi:pe": "\u1355", - "/ethi:pee": "\u1354", - "/ethi:pha": "\u1330", - "/ethi:phaa": "\u1333", - "/ethi:pharyngeala": "\u12D0", - "/ethi:pharyngealaa": "\u12D3", - "/ethi:pharyngeale": "\u12D5", - "/ethi:pharyngealee": "\u12D4", - "/ethi:pharyngeali": "\u12D2", - "/ethi:pharyngealo": "\u12D6", - "/ethi:pharyngealu": "\u12D1", - "/ethi:phe": "\u1335", - "/ethi:phee": "\u1334", - "/ethi:phi": "\u1332", - "/ethi:pho": "\u1336", - "/ethi:phu": "\u1331", - "/ethi:phwa": "\u1337", - "/ethi:pi": "\u1352", - "/ethi:po": "\u1356", - "/ethi:prefacecolon": "\u1366", - "/ethi:pu": "\u1351", - "/ethi:pwa": "\u1357", - "/ethi:qa": "\u1240", - "/ethi:qaa": "\u1243", - "/ethi:qe": "\u1245", - "/ethi:qee": "\u1244", - "/ethi:qha": "\u1250", - "/ethi:qhaa": "\u1253", - "/ethi:qhe": "\u1255", - "/ethi:qhee": "\u1254", - "/ethi:qhi": "\u1252", - "/ethi:qho": "\u1256", - "/ethi:qhu": "\u1251", - "/ethi:qhwa": "\u1258", - "/ethi:qhwaa": "\u125B", - "/ethi:qhwe": "\u125D", - "/ethi:qhwee": "\u125C", - "/ethi:qhwi": "\u125A", - "/ethi:qi": "\u1242", - "/ethi:qo": "\u1246", - "/ethi:qoa": "\u1247", - "/ethi:qu": "\u1241", - "/ethi:questionmark": "\u1367", - "/ethi:qwa": "\u1248", - "/ethi:qwaa": "\u124B", - "/ethi:qwe": "\u124D", - "/ethi:qwee": "\u124C", - "/ethi:qwi": 
"\u124A", - "/ethi:ra": "\u1228", - "/ethi:raa": "\u122B", - "/ethi:re": "\u122D", - "/ethi:ree": "\u122C", - "/ethi:ri": "\u122A", - "/ethi:ro": "\u122E", - "/ethi:ru": "\u1229", - "/ethi:rwa": "\u122F", - "/ethi:rya": "\u1358", - "/ethi:sa": "\u1230", - "/ethi:saa": "\u1233", - "/ethi:se": "\u1235", - "/ethi:sectionmark": "\u1360", - "/ethi:see": "\u1234", - "/ethi:semicolon": "\u1364", - "/ethi:seven": "\u136F", - "/ethi:seventy": "\u1378", - "/ethi:sha": "\u1238", - "/ethi:shaa": "\u123B", - "/ethi:she": "\u123D", - "/ethi:shee": "\u123C", - "/ethi:shi": "\u123A", - "/ethi:sho": "\u123E", - "/ethi:shu": "\u1239", - "/ethi:shwa": "\u123F", - "/ethi:si": "\u1232", - "/ethi:six": "\u136E", - "/ethi:sixty": "\u1377", - "/ethi:so": "\u1236", - "/ethi:su": "\u1231", - "/ethi:swa": "\u1237", - "/ethi:sza": "\u1220", - "/ethi:szaa": "\u1223", - "/ethi:sze": "\u1225", - "/ethi:szee": "\u1224", - "/ethi:szi": "\u1222", - "/ethi:szo": "\u1226", - "/ethi:szu": "\u1221", - "/ethi:szwa": "\u1227", - "/ethi:ta": "\u1270", - "/ethi:taa": "\u1273", - "/ethi:te": "\u1275", - "/ethi:tee": "\u1274", - "/ethi:ten": "\u1372", - "/ethi:tenthousand": "\u137C", - "/ethi:tha": "\u1320", - "/ethi:thaa": "\u1323", - "/ethi:the": "\u1325", - "/ethi:thee": "\u1324", - "/ethi:thi": "\u1322", - "/ethi:thirty": "\u1374", - "/ethi:tho": "\u1326", - "/ethi:three": "\u136B", - "/ethi:thu": "\u1321", - "/ethi:thwa": "\u1327", - "/ethi:ti": "\u1272", - "/ethi:to": "\u1276", - "/ethi:tsa": "\u1338", - "/ethi:tsaa": "\u133B", - "/ethi:tse": "\u133D", - "/ethi:tsee": "\u133C", - "/ethi:tsi": "\u133A", - "/ethi:tso": "\u133E", - "/ethi:tsu": "\u1339", - "/ethi:tswa": "\u133F", - "/ethi:tu": "\u1271", - "/ethi:twa": "\u1277", - "/ethi:twenty": "\u1373", - "/ethi:two": "\u136A", - "/ethi:tza": "\u1340", - "/ethi:tzaa": "\u1343", - "/ethi:tze": "\u1345", - "/ethi:tzee": "\u1344", - "/ethi:tzi": "\u1342", - "/ethi:tzo": "\u1346", - "/ethi:tzoa": "\u1347", - "/ethi:tzu": "\u1341", - "/ethi:uglottal": 
"\u12A1", - "/ethi:va": "\u1268", - "/ethi:vaa": "\u126B", - "/ethi:ve": "\u126D", - "/ethi:vee": "\u126C", - "/ethi:vi": "\u126A", - "/ethi:vo": "\u126E", - "/ethi:vowellengthmarkcmb": "\u135E", - "/ethi:vu": "\u1269", - "/ethi:vwa": "\u126F", - "/ethi:wa": "\u12C8", - "/ethi:waa": "\u12CB", - "/ethi:waglottal": "\u12A7", - "/ethi:we": "\u12CD", - "/ethi:wee": "\u12CC", - "/ethi:wi": "\u12CA", - "/ethi:wo": "\u12CE", - "/ethi:woa": "\u12CF", - "/ethi:wordspace": "\u1361", - "/ethi:wu": "\u12C9", - "/ethi:xa": "\u1280", - "/ethi:xaa": "\u1283", - "/ethi:xe": "\u1285", - "/ethi:xee": "\u1284", - "/ethi:xi": "\u1282", - "/ethi:xo": "\u1286", - "/ethi:xoa": "\u1287", - "/ethi:xu": "\u1281", - "/ethi:xwa": "\u1288", - "/ethi:xwaa": "\u128B", - "/ethi:xwe": "\u128D", - "/ethi:xwee": "\u128C", - "/ethi:xwi": "\u128A", - "/ethi:ya": "\u12E8", - "/ethi:yaa": "\u12EB", - "/ethi:ye": "\u12ED", - "/ethi:yee": "\u12EC", - "/ethi:yi": "\u12EA", - "/ethi:yo": "\u12EE", - "/ethi:yoa": "\u12EF", - "/ethi:yu": "\u12E9", - "/ethi:za": "\u12D8", - "/ethi:zaa": "\u12DB", - "/ethi:ze": "\u12DD", - "/ethi:zee": "\u12DC", - "/ethi:zha": "\u12E0", - "/ethi:zhaa": "\u12E3", - "/ethi:zhe": "\u12E5", - "/ethi:zhee": "\u12E4", - "/ethi:zhi": "\u12E2", - "/ethi:zho": "\u12E6", - "/ethi:zhu": "\u12E1", - "/ethi:zhwa": "\u12E7", - "/ethi:zi": "\u12DA", - "/ethi:zo": "\u12DE", - "/ethi:zu": "\u12D9", - "/ethi:zwa": "\u12DF", - "/etilde": "\u1EBD", - "/etildebelow": "\u1E1B", - "/etnahta:hb": "\u0591", - "/etnahtafoukhhebrew": "\u0591", - "/etnahtafoukhlefthebrew": "\u0591", - "/etnahtahebrew": "\u0591", - "/etnahtalefthebrew": "\u0591", - "/eturned": "\u01DD", - "/eukorean": "\u3161", - "/eukrcyr": "\u0454", - "/euler": "\u2107", - "/euro": "\u20AC", - "/euroarchaic": "\u20A0", - "/europeanCastle": "\u1F3F0", - "/europeanPostOffice": "\u1F3E4", - "/evergreenTree": "\u1F332", - "/evowelsignbengali": "\u09C7", - "/evowelsigndeva": "\u0947", - "/evowelsigngujarati": "\u0AC7", - 
"/excellentideographiccircled": "\u329D", - "/excess": "\u2239", - "/exclam": "\u0021", - "/exclamarmenian": "\u055C", - "/exclamationquestion": "\u2049", - "/exclamdbl": "\u203C", - "/exclamdown": "\u00A1", - "/exclamdownsmall": "\uF7A1", - "/exclammonospace": "\uFF01", - "/exclamsmall": "\uF721", - "/existential": "\u2203", - "/expressionlessFace": "\u1F611", - "/extraterrestrialAlien": "\u1F47D", - "/eye": "\u1F441", - "/eyeglasses": "\u1F453", - "/eyes": "\u1F440", - "/ezh": "\u0292", - "/ezhcaron": "\u01EF", - "/ezhcurl": "\u0293", - "/ezhreversed": "\u01B9", - "/ezhtail": "\u01BA", - "/f": "\u0066", - "/f_f": "\uFB00", - "/f_f_i": "\uFB03", - "/f_f_l": "\uFB04", - "/faceMassage": "\u1F486", - "/faceSavouringDeliciousFood": "\u1F60B", - "/faceScreamingInFear": "\u1F631", - "/faceThrowingAKiss": "\u1F618", - "/faceWithColdSweat": "\u1F613", - "/faceWithLookOfTriumph": "\u1F624", - "/faceWithMedicalMask": "\u1F637", - "/faceWithNoGoodGesture": "\u1F645", - "/faceWithOkGesture": "\u1F646", - "/faceWithOpenMouth": "\u1F62E", - "/faceWithOpenMouthAndColdSweat": "\u1F630", - "/faceWithRollingEyes": "\u1F644", - "/faceWithStuckOutTongue": "\u1F61B", - "/faceWithStuckOutTongueAndTightlyClosedEyes": "\u1F61D", - "/faceWithStuckOutTongueAndWinkingEye": "\u1F61C", - "/faceWithTearsOfJoy": "\u1F602", - "/faceWithoutMouth": "\u1F636", - "/facsimile": "\u213B", - "/factory": "\u1F3ED", - "/fadeva": "\u095E", - "/fagurmukhi": "\u0A5E", - "/fahrenheit": "\u2109", - "/fallenLeaf": "\u1F342", - "/fallingdiagonal": "\u27CD", - "/fallingdiagonalincircleinsquareblackwhite": "\u26DE", - "/family": "\u1F46A", - "/farsi": "\u262B", - "/farsiYehDigitFourBelow": "\u0777", - "/farsiYehDigitThreeAbove": "\u0776", - "/farsiYehDigitTwoAbove": "\u0775", - "/fatha": "\u064E", - "/fathaIsol": "\uFE76", - "/fathaMedi": "\uFE77", - "/fathaarabic": "\u064E", - "/fathalowarabic": "\u064E", - "/fathasmall": "\u0618", - "/fathatan": "\u064B", - "/fathatanIsol": "\uFE70", - "/fathatanarabic": 
"\u064B", - "/fathatwodotsdots": "\u065E", - "/fatherChristmas": "\u1F385", - "/faxIcon": "\u1F5B7", - "/faxMachine": "\u1F4E0", - "/fbopomofo": "\u3108", - "/fcircle": "\u24D5", - "/fdot": "\u1E1F", - "/fdotaccent": "\u1E1F", - "/fearfulFace": "\u1F628", - "/februarytelegraph": "\u32C1", - "/feh.fina": "\uFED2", - "/feh.init": "\uFED3", - "/feh.init_alefmaksura.fina": "\uFC31", - "/feh.init_hah.fina": "\uFC2E", - "/feh.init_hah.medi": "\uFCBF", - "/feh.init_jeem.fina": "\uFC2D", - "/feh.init_jeem.medi": "\uFCBE", - "/feh.init_khah.fina": "\uFC2F", - "/feh.init_khah.medi": "\uFCC0", - "/feh.init_khah.medi_meem.medi": "\uFD7D", - "/feh.init_meem.fina": "\uFC30", - "/feh.init_meem.medi": "\uFCC1", - "/feh.init_yeh.fina": "\uFC32", - "/feh.isol": "\uFED1", - "/feh.medi": "\uFED4", - "/feh.medi_alefmaksura.fina": "\uFC7C", - "/feh.medi_khah.medi_meem.fina": "\uFD7C", - "/feh.medi_meem.medi_yeh.fina": "\uFDC1", - "/feh.medi_yeh.fina": "\uFC7D", - "/fehThreeDotsUpBelow": "\u0761", - "/fehTwoDotsBelow": "\u0760", - "/feharabic": "\u0641", - "/feharmenian": "\u0586", - "/fehdotbelow": "\u06A3", - "/fehdotbelowright": "\u06A2", - "/fehfinalarabic": "\uFED2", - "/fehinitialarabic": "\uFED3", - "/fehmedialarabic": "\uFED4", - "/fehthreedotsbelow": "\u06A5", - "/feicoptic": "\u03E5", - "/female": "\u2640", - "/femaleideographiccircled": "\u329B", - "/feng": "\u02A9", - "/ferrisWheel": "\u1F3A1", - "/ferry": "\u26F4", - "/festivalideographicparen": "\u3240", - "/ff": "\uFB00", - "/ffi": "\uFB03", - "/ffl": "\uFB04", - "/fhook": "\u0192", - "/fi": "\uFB01", - "/fieldHockeyStickAndBall": "\u1F3D1", - "/fifteencircle": "\u246E", - "/fifteencircleblack": "\u24EF", - "/fifteenparen": "\u2482", - "/fifteenparenthesized": "\u2482", - "/fifteenperiod": "\u2496", - "/fifty.roman": "\u216C", - "/fifty.romansmall": "\u217C", - "/fiftycircle": "\u32BF", - "/fiftycirclesquare": "\u324C", - "/fiftyearlyform.roman": "\u2186", - "/fiftythousand.roman": "\u2187", - 
"/figuredash": "\u2012", - "/figurespace": "\u2007", - "/fileCabinet": "\u1F5C4", - "/fileFolder": "\u1F4C1", - "/filledbox": "\u25A0", - "/filledrect": "\u25AC", - "/filledstopabove": "\u06EC", - "/filmFrames": "\u1F39E", - "/filmProjector": "\u1F4FD", - "/finalkaf": "\u05DA", - "/finalkaf:hb": "\u05DA", - "/finalkafdagesh": "\uFB3A", - "/finalkafdageshhebrew": "\uFB3A", - "/finalkafhebrew": "\u05DA", - "/finalkafqamats": "\u05DA", - "/finalkafqamatshebrew": "\u05DA", - "/finalkafsheva": "\u05DA", - "/finalkafshevahebrew": "\u05DA", - "/finalkafwithdagesh:hb": "\uFB3A", - "/finalmem": "\u05DD", - "/finalmem:hb": "\u05DD", - "/finalmemhebrew": "\u05DD", - "/finalmemwide:hb": "\uFB26", - "/finalnun": "\u05DF", - "/finalnun:hb": "\u05DF", - "/finalnunhebrew": "\u05DF", - "/finalpe": "\u05E3", - "/finalpe:hb": "\u05E3", - "/finalpehebrew": "\u05E3", - "/finalpewithdagesh:hb": "\uFB43", - "/finalsigma": "\u03C2", - "/finaltsadi": "\u05E5", - "/finaltsadi:hb": "\u05E5", - "/finaltsadihebrew": "\u05E5", - "/financialideographiccircled": "\u3296", - "/financialideographicparen": "\u3236", - "/finsular": "\uA77C", - "/fire": "\u1F525", - "/fireEngine": "\u1F692", - "/fireideographiccircled": "\u328B", - "/fireideographicparen": "\u322B", - "/fireworkSparkler": "\u1F387", - "/fireworks": "\u1F386", - "/firstQuarterMoon": "\u1F313", - "/firstQuarterMoonFace": "\u1F31B", - "/firstquartermoon": "\u263D", - "/firststrongisolate": "\u2068", - "/firsttonechinese": "\u02C9", - "/fish": "\u1F41F", - "/fishCakeSwirlDesign": "\u1F365", - "/fisheye": "\u25C9", - "/fishingPoleAndFish": "\u1F3A3", - "/fistedHandSign": "\u1F44A", - "/fitacyr": "\u0473", - "/fitacyrillic": "\u0473", - "/five": "\u0035", - "/five.inferior": "\u2085", - "/five.roman": "\u2164", - "/five.romansmall": "\u2174", - "/five.superior": "\u2075", - "/fivearabic": "\u0665", - "/fivebengali": "\u09EB", - "/fivecircle": "\u2464", - "/fivecircledbl": "\u24F9", - "/fivecircleinversesansserif": "\u278E", - "/fivecomma": 
"\u1F106", - "/fivedeva": "\u096B", - "/fivedot": "\u2E2D", - "/fivedotpunctuation": "\u2059", - "/fiveeighths": "\u215D", - "/fivefar": "\u06F5", - "/fivegujarati": "\u0AEB", - "/fivegurmukhi": "\u0A6B", - "/fivehackarabic": "\u0665", - "/fivehangzhou": "\u3025", - "/fivehundred.roman": "\u216E", - "/fivehundred.romansmall": "\u217E", - "/fiveideographiccircled": "\u3284", - "/fiveideographicparen": "\u3224", - "/fiveinferior": "\u2085", - "/fivemonospace": "\uFF15", - "/fiveoldstyle": "\uF735", - "/fiveparen": "\u2478", - "/fiveparenthesized": "\u2478", - "/fiveperiod": "\u248C", - "/fivepersian": "\u06F5", - "/fivepointedstar": "\u066D", - "/fivepointonesquare": "\u1F1A0", - "/fiveroman": "\u2174", - "/fivesixths": "\u215A", - "/fivesuperior": "\u2075", - "/fivethai": "\u0E55", - "/fivethousand.roman": "\u2181", - "/fl": "\uFB02", - "/flagblack": "\u2691", - "/flaghorizontalmiddlestripeblackwhite": "\u26FF", - "/flaginhole": "\u26F3", - "/flagwhite": "\u2690", - "/flatness": "\u23E5", - "/fleurdelis": "\u269C", - "/flexedBiceps": "\u1F4AA", - "/floorleft": "\u230A", - "/floorright": "\u230B", - "/floppyDisk": "\u1F4BE", - "/floralheartbulletreversedrotated": "\u2619", - "/florin": "\u0192", - "/flower": "\u2698", - "/flowerPlayingCards": "\u1F3B4", - "/flowerpunctuationmark": "\u2055", - "/flushedFace": "\u1F633", - "/flyingEnvelope": "\u1F585", - "/flyingSaucer": "\u1F6F8", - "/fmfullwidth": "\u3399", - "/fmonospace": "\uFF46", - "/fmsquare": "\u3399", - "/fofanthai": "\u0E1F", - "/fofathai": "\u0E1D", - "/fog": "\u1F32B", - "/foggy": "\u1F301", - "/folder": "\u1F5C0", - "/fongmanthai": "\u0E4F", - "/footnote": "\u0602", - "/footprints": "\u1F463", - "/footsquare": "\u23CD", - "/forall": "\u2200", - "/forces": "\u22A9", - "/fork": "\u2442", - "/forkKnife": "\u1F374", - "/forkKnifePlate": "\u1F37D", - "/forsamaritan": "\u214F", - "/fortycircle": "\u32B5", - "/fortycirclesquare": "\u324B", - "/fortyeightcircle": "\u32BD", - "/fortyfivecircle": "\u32BA", - 
"/fortyfourcircle": "\u32B9", - "/fortyninecircle": "\u32BE", - "/fortyonecircle": "\u32B6", - "/fortysevencircle": "\u32BC", - "/fortysixcircle": "\u32BB", - "/fortythreecircle": "\u32B8", - "/fortytwocircle": "\u32B7", - "/fountain": "\u26F2", - "/four": "\u0034", - "/four.inferior": "\u2084", - "/four.roman": "\u2163", - "/four.romansmall": "\u2173", - "/four.superior": "\u2074", - "/fourLeafClover": "\u1F340", - "/fourarabic": "\u0664", - "/fourbengali": "\u09EA", - "/fourcircle": "\u2463", - "/fourcircledbl": "\u24F8", - "/fourcircleinversesansserif": "\u278D", - "/fourcomma": "\u1F105", - "/fourdeva": "\u096A", - "/fourdotmark": "\u205B", - "/fourdotpunctuation": "\u2058", - "/fourfar": "\u06F4", - "/fourfifths": "\u2158", - "/fourgujarati": "\u0AEA", - "/fourgurmukhi": "\u0A6A", - "/fourhackarabic": "\u0664", - "/fourhangzhou": "\u3024", - "/fourideographiccircled": "\u3283", - "/fourideographicparen": "\u3223", - "/fourinferior": "\u2084", - "/fourksquare": "\u1F19E", - "/fourmonospace": "\uFF14", - "/fournumeratorbengali": "\u09F7", - "/fouroldstyle": "\uF734", - "/fourparen": "\u2477", - "/fourparenthesized": "\u2477", - "/fourperemspace": "\u2005", - "/fourperiod": "\u248B", - "/fourpersian": "\u06F4", - "/fourroman": "\u2173", - "/foursuperior": "\u2074", - "/fourteencircle": "\u246D", - "/fourteencircleblack": "\u24EE", - "/fourteenparen": "\u2481", - "/fourteenparenthesized": "\u2481", - "/fourteenperiod": "\u2495", - "/fourthai": "\u0E54", - "/fourthtonechinese": "\u02CB", - "/fparen": "\u24A1", - "/fparenthesized": "\u24A1", - "/fraction": "\u2044", - "/frameAnX": "\u1F5BE", - "/framePicture": "\u1F5BC", - "/frameTiles": "\u1F5BD", - "/franc": "\u20A3", - "/freesquare": "\u1F193", - "/frenchFries": "\u1F35F", - "/freversedepigraphic": "\uA7FB", - "/friedShrimp": "\u1F364", - "/frogFace": "\u1F438", - "/front-facingBabyChick": "\u1F425", - "/frown": "\u2322", - "/frowningFaceWithOpenMouth": "\u1F626", - "/frowningfacewhite": "\u2639", - "/fstroke": 
"\uA799", - "/fturned": "\u214E", - "/fuelpump": "\u26FD", - "/fullBlock": "\u2588", - "/fullMoon": "\u1F315", - "/fullMoonFace": "\u1F31D", - "/functionapplication": "\u2061", - "/funeralurn": "\u26B1", - "/fuse": "\u23DB", - "/fwd:A": "\uFF21", - "/fwd:B": "\uFF22", - "/fwd:C": "\uFF23", - "/fwd:D": "\uFF24", - "/fwd:E": "\uFF25", - "/fwd:F": "\uFF26", - "/fwd:G": "\uFF27", - "/fwd:H": "\uFF28", - "/fwd:I": "\uFF29", - "/fwd:J": "\uFF2A", - "/fwd:K": "\uFF2B", - "/fwd:L": "\uFF2C", - "/fwd:M": "\uFF2D", - "/fwd:N": "\uFF2E", - "/fwd:O": "\uFF2F", - "/fwd:P": "\uFF30", - "/fwd:Q": "\uFF31", - "/fwd:R": "\uFF32", - "/fwd:S": "\uFF33", - "/fwd:T": "\uFF34", - "/fwd:U": "\uFF35", - "/fwd:V": "\uFF36", - "/fwd:W": "\uFF37", - "/fwd:X": "\uFF38", - "/fwd:Y": "\uFF39", - "/fwd:Z": "\uFF3A", - "/fwd:a": "\uFF41", - "/fwd:ampersand": "\uFF06", - "/fwd:asciicircum": "\uFF3E", - "/fwd:asciitilde": "\uFF5E", - "/fwd:asterisk": "\uFF0A", - "/fwd:at": "\uFF20", - "/fwd:b": "\uFF42", - "/fwd:backslash": "\uFF3C", - "/fwd:bar": "\uFF5C", - "/fwd:braceleft": "\uFF5B", - "/fwd:braceright": "\uFF5D", - "/fwd:bracketleft": "\uFF3B", - "/fwd:bracketright": "\uFF3D", - "/fwd:brokenbar": "\uFFE4", - "/fwd:c": "\uFF43", - "/fwd:centsign": "\uFFE0", - "/fwd:colon": "\uFF1A", - "/fwd:comma": "\uFF0C", - "/fwd:d": "\uFF44", - "/fwd:dollar": "\uFF04", - "/fwd:e": "\uFF45", - "/fwd:eight": "\uFF18", - "/fwd:equal": "\uFF1D", - "/fwd:exclam": "\uFF01", - "/fwd:f": "\uFF46", - "/fwd:five": "\uFF15", - "/fwd:four": "\uFF14", - "/fwd:g": "\uFF47", - "/fwd:grave": "\uFF40", - "/fwd:greater": "\uFF1E", - "/fwd:h": "\uFF48", - "/fwd:hyphen": "\uFF0D", - "/fwd:i": "\uFF49", - "/fwd:j": "\uFF4A", - "/fwd:k": "\uFF4B", - "/fwd:l": "\uFF4C", - "/fwd:leftwhiteparenthesis": "\uFF5F", - "/fwd:less": "\uFF1C", - "/fwd:m": "\uFF4D", - "/fwd:macron": "\uFFE3", - "/fwd:n": "\uFF4E", - "/fwd:nine": "\uFF19", - "/fwd:notsign": "\uFFE2", - "/fwd:numbersign": "\uFF03", - "/fwd:o": "\uFF4F", - "/fwd:one": 
"\uFF11", - "/fwd:p": "\uFF50", - "/fwd:parenthesisleft": "\uFF08", - "/fwd:parenthesisright": "\uFF09", - "/fwd:percent": "\uFF05", - "/fwd:period": "\uFF0E", - "/fwd:plus": "\uFF0B", - "/fwd:poundsign": "\uFFE1", - "/fwd:q": "\uFF51", - "/fwd:question": "\uFF1F", - "/fwd:quotedbl": "\uFF02", - "/fwd:quotesingle": "\uFF07", - "/fwd:r": "\uFF52", - "/fwd:rightwhiteparenthesis": "\uFF60", - "/fwd:s": "\uFF53", - "/fwd:semicolon": "\uFF1B", - "/fwd:seven": "\uFF17", - "/fwd:six": "\uFF16", - "/fwd:slash": "\uFF0F", - "/fwd:t": "\uFF54", - "/fwd:three": "\uFF13", - "/fwd:two": "\uFF12", - "/fwd:u": "\uFF55", - "/fwd:underscore": "\uFF3F", - "/fwd:v": "\uFF56", - "/fwd:w": "\uFF57", - "/fwd:wonsign": "\uFFE6", - "/fwd:x": "\uFF58", - "/fwd:y": "\uFF59", - "/fwd:yensign": "\uFFE5", - "/fwd:z": "\uFF5A", - "/fwd:zero": "\uFF10", - "/g": "\u0067", - "/gabengali": "\u0997", - "/gacute": "\u01F5", - "/gadeva": "\u0917", - "/gaf": "\u06AF", - "/gaf.fina": "\uFB93", - "/gaf.init": "\uFB94", - "/gaf.isol": "\uFB92", - "/gaf.medi": "\uFB95", - "/gafarabic": "\u06AF", - "/gaffinalarabic": "\uFB93", - "/gafinitialarabic": "\uFB94", - "/gafmedialarabic": "\uFB95", - "/gafring": "\u06B0", - "/gafthreedotsabove": "\u06B4", - "/gaftwodotsbelow": "\u06B2", - "/gagujarati": "\u0A97", - "/gagurmukhi": "\u0A17", - "/gahiragana": "\u304C", - "/gakatakana": "\u30AC", - "/galsquare": "\u33FF", - "/gameDie": "\u1F3B2", - "/gamma": "\u03B3", - "/gammadblstruck": "\u213D", - "/gammalatinsmall": "\u0263", - "/gammasuperior": "\u02E0", - "/gammasupmod": "\u02E0", - "/gamurda": "\uA993", - "/gangiacoptic": "\u03EB", - "/ganmasquare": "\u330F", - "/garonsquare": "\u330E", - "/gbfullwidth": "\u3387", - "/gbopomofo": "\u310D", - "/gbreve": "\u011F", - "/gcaron": "\u01E7", - "/gcedilla": "\u0123", - "/gcircle": "\u24D6", - "/gcircumflex": "\u011D", - "/gcommaaccent": "\u0123", - "/gdot": "\u0121", - "/gdotaccent": "\u0121", - "/gear": "\u2699", - "/gearhles": "\u26EE", - "/gearouthub": "\u26ED", - 
"/gecyr": "\u0433", - "/gecyrillic": "\u0433", - "/gehiragana": "\u3052", - "/gehookcyr": "\u0495", - "/gehookstrokecyr": "\u04FB", - "/gekatakana": "\u30B2", - "/gemStone": "\u1F48E", - "/gemini": "\u264A", - "/geometricallyequal": "\u2251", - "/geometricallyequivalent": "\u224E", - "/geometricproportion": "\u223A", - "/geresh:hb": "\u05F3", - "/gereshMuqdam:hb": "\u059D", - "/gereshaccenthebrew": "\u059C", - "/gereshhebrew": "\u05F3", - "/gereshmuqdamhebrew": "\u059D", - "/germandbls": "\u00DF", - "/germanpenny": "\u20B0", - "/gershayim:hb": "\u05F4", - "/gershayimaccenthebrew": "\u059E", - "/gershayimhebrew": "\u05F4", - "/gestrokecyr": "\u0493", - "/getailcyr": "\u04F7", - "/getamark": "\u3013", - "/geupcyr": "\u0491", - "/ghabengali": "\u0998", - "/ghadarmenian": "\u0572", - "/ghadeva": "\u0918", - "/ghagujarati": "\u0A98", - "/ghagurmukhi": "\u0A18", - "/ghain": "\u063A", - "/ghain.fina": "\uFECE", - "/ghain.init": "\uFECF", - "/ghain.init_alefmaksura.fina": "\uFCF9", - "/ghain.init_jeem.fina": "\uFC2B", - "/ghain.init_jeem.medi": "\uFCBC", - "/ghain.init_meem.fina": "\uFC2C", - "/ghain.init_meem.medi": "\uFCBD", - "/ghain.init_yeh.fina": "\uFCFA", - "/ghain.isol": "\uFECD", - "/ghain.medi": "\uFED0", - "/ghain.medi_alefmaksura.fina": "\uFD15", - "/ghain.medi_meem.medi_alefmaksura.fina": "\uFD7B", - "/ghain.medi_meem.medi_meem.fina": "\uFD79", - "/ghain.medi_meem.medi_yeh.fina": "\uFD7A", - "/ghain.medi_yeh.fina": "\uFD16", - "/ghainarabic": "\u063A", - "/ghaindotbelow": "\u06FC", - "/ghainfinalarabic": "\uFECE", - "/ghaininitialarabic": "\uFECF", - "/ghainmedialarabic": "\uFED0", - "/ghemiddlehookcyrillic": "\u0495", - "/ghestrokecyrillic": "\u0493", - "/gheupturncyrillic": "\u0491", - "/ghhadeva": "\u095A", - "/ghhagurmukhi": "\u0A5A", - "/ghook": "\u0260", - "/ghost": "\u1F47B", - "/ghzfullwidth": "\u3393", - "/ghzsquare": "\u3393", - "/gigasquare": "\u3310", - "/gihiragana": "\u304E", - "/gikatakana": "\u30AE", - "/gimarmenian": "\u0563", - "/gimel": 
"\u05D2", - "/gimel:hb": "\u05D2", - "/gimeldagesh": "\uFB32", - "/gimeldageshhebrew": "\uFB32", - "/gimelhebrew": "\u05D2", - "/gimelwithdagesh:hb": "\uFB32", - "/giniisquare": "\u3311", - "/ginsularturned": "\uA77F", - "/girl": "\u1F467", - "/girls": "\u1F6CA", - "/girudaasquare": "\u3313", - "/gjecyr": "\u0453", - "/gjecyrillic": "\u0453", - "/globeMeridians": "\u1F310", - "/glottalinvertedstroke": "\u01BE", - "/glottalstop": "\u0294", - "/glottalstopinverted": "\u0296", - "/glottalstopmod": "\u02C0", - "/glottalstopreversed": "\u0295", - "/glottalstopreversedmod": "\u02C1", - "/glottalstopreversedsuperior": "\u02E4", - "/glottalstopstroke": "\u02A1", - "/glottalstopstrokereversed": "\u02A2", - "/glottalstopsupreversedmod": "\u02E4", - "/glowingStar": "\u1F31F", - "/gmacron": "\u1E21", - "/gmonospace": "\uFF47", - "/gmtr:diamondblack": "\u25C6", - "/gmtr:diamondwhite": "\u25C7", - "/gnrl:hyphen": "\u2010", - "/goat": "\u1F410", - "/gobliquestroke": "\uA7A1", - "/gohiragana": "\u3054", - "/gokatakana": "\u30B4", - "/golfer": "\u1F3CC", - "/gpafullwidth": "\u33AC", - "/gparen": "\u24A2", - "/gparenthesized": "\u24A2", - "/gpasquare": "\u33AC", - "/gr:acute": "\u1FFD", - "/gr:grave": "\u1FEF", - "/gr:question": "\u037E", - "/gr:tilde": "\u1FC0", - "/gradient": "\u2207", - "/graduationCap": "\u1F393", - "/grapes": "\u1F347", - "/grave": "\u0060", - "/gravebelowcmb": "\u0316", - "/gravecmb": "\u0300", - "/gravecomb": "\u0300", - "/gravedblmiddlemod": "\u02F5", - "/gravedeva": "\u0953", - "/gravelowmod": "\u02CE", - "/gravemiddlemod": "\u02F4", - "/gravemod": "\u02CB", - "/gravemonospace": "\uFF40", - "/gravetonecmb": "\u0340", - "/greater": "\u003E", - "/greaterbutnotequal": "\u2269", - "/greaterbutnotequivalent": "\u22E7", - "/greaterdot": "\u22D7", - "/greaterequal": "\u2265", - "/greaterequalorless": "\u22DB", - "/greatermonospace": "\uFF1E", - "/greaterorequivalent": "\u2273", - "/greaterorless": "\u2277", - "/greateroverequal": "\u2267", - "/greatersmall": 
"\uFE65", - "/greenApple": "\u1F34F", - "/greenBook": "\u1F4D7", - "/greenHeart": "\u1F49A", - "/grimacingFace": "\u1F62C", - "/grinningCatFaceWithSmilingEyes": "\u1F638", - "/grinningFace": "\u1F600", - "/grinningFaceWithSmilingEyes": "\u1F601", - "/growingHeart": "\u1F497", - "/gscript": "\u0261", - "/gstroke": "\u01E5", - "/guarani": "\u20B2", - "/guardsman": "\u1F482", - "/gueh": "\u06B3", - "/gueh.fina": "\uFB97", - "/gueh.init": "\uFB98", - "/gueh.isol": "\uFB96", - "/gueh.medi": "\uFB99", - "/guhiragana": "\u3050", - "/guillemetleft": "\u00AB", - "/guillemetright": "\u00BB", - "/guillemotleft": "\u00AB", - "/guillemotright": "\u00BB", - "/guilsinglleft": "\u2039", - "/guilsinglright": "\u203A", - "/guitar": "\u1F3B8", - "/gujr:a": "\u0A85", - "/gujr:aa": "\u0A86", - "/gujr:aasign": "\u0ABE", - "/gujr:abbreviation": "\u0AF0", - "/gujr:ai": "\u0A90", - "/gujr:aisign": "\u0AC8", - "/gujr:anusvara": "\u0A82", - "/gujr:au": "\u0A94", - "/gujr:ausign": "\u0ACC", - "/gujr:avagraha": "\u0ABD", - "/gujr:ba": "\u0AAC", - "/gujr:bha": "\u0AAD", - "/gujr:binducandra": "\u0A81", - "/gujr:ca": "\u0A9A", - "/gujr:cha": "\u0A9B", - "/gujr:circlenuktaabove": "\u0AFE", - "/gujr:da": "\u0AA6", - "/gujr:dda": "\u0AA1", - "/gujr:ddha": "\u0AA2", - "/gujr:dha": "\u0AA7", - "/gujr:e": "\u0A8F", - "/gujr:ecandra": "\u0A8D", - "/gujr:eight": "\u0AEE", - "/gujr:esign": "\u0AC7", - "/gujr:esigncandra": "\u0AC5", - "/gujr:five": "\u0AEB", - "/gujr:four": "\u0AEA", - "/gujr:ga": "\u0A97", - "/gujr:gha": "\u0A98", - "/gujr:ha": "\u0AB9", - "/gujr:i": "\u0A87", - "/gujr:ii": "\u0A88", - "/gujr:iisign": "\u0AC0", - "/gujr:isign": "\u0ABF", - "/gujr:ja": "\u0A9C", - "/gujr:jha": "\u0A9D", - "/gujr:ka": "\u0A95", - "/gujr:kha": "\u0A96", - "/gujr:la": "\u0AB2", - "/gujr:lla": "\u0AB3", - "/gujr:llvocal": "\u0AE1", - "/gujr:llvocalsign": "\u0AE3", - "/gujr:lvocal": "\u0A8C", - "/gujr:lvocalsign": "\u0AE2", - "/gujr:ma": "\u0AAE", - "/gujr:maddah": "\u0AFC", - "/gujr:na": "\u0AA8", - 
"/gujr:nga": "\u0A99", - "/gujr:nine": "\u0AEF", - "/gujr:nna": "\u0AA3", - "/gujr:nukta": "\u0ABC", - "/gujr:nya": "\u0A9E", - "/gujr:o": "\u0A93", - "/gujr:ocandra": "\u0A91", - "/gujr:om": "\u0AD0", - "/gujr:one": "\u0AE7", - "/gujr:osign": "\u0ACB", - "/gujr:osigncandra": "\u0AC9", - "/gujr:pa": "\u0AAA", - "/gujr:pha": "\u0AAB", - "/gujr:ra": "\u0AB0", - "/gujr:rrvocal": "\u0AE0", - "/gujr:rrvocalsign": "\u0AC4", - "/gujr:rupee": "\u0AF1", - "/gujr:rvocal": "\u0A8B", - "/gujr:rvocalsign": "\u0AC3", - "/gujr:sa": "\u0AB8", - "/gujr:seven": "\u0AED", - "/gujr:sha": "\u0AB6", - "/gujr:shadda": "\u0AFB", - "/gujr:six": "\u0AEC", - "/gujr:ssa": "\u0AB7", - "/gujr:sukun": "\u0AFA", - "/gujr:ta": "\u0AA4", - "/gujr:tha": "\u0AA5", - "/gujr:three": "\u0AE9", - "/gujr:three-dotnuktaabove": "\u0AFD", - "/gujr:tta": "\u0A9F", - "/gujr:ttha": "\u0AA0", - "/gujr:two": "\u0AE8", - "/gujr:two-circlenuktaabove": "\u0AFF", - "/gujr:u": "\u0A89", - "/gujr:usign": "\u0AC1", - "/gujr:uu": "\u0A8A", - "/gujr:uusign": "\u0AC2", - "/gujr:va": "\u0AB5", - "/gujr:virama": "\u0ACD", - "/gujr:visarga": "\u0A83", - "/gujr:ya": "\u0AAF", - "/gujr:zero": "\u0AE6", - "/gujr:zha": "\u0AF9", - "/gukatakana": "\u30B0", - "/guramusquare": "\u3318", - "/guramutonsquare": "\u3319", - "/guru:a": "\u0A05", - "/guru:aa": "\u0A06", - "/guru:aasign": "\u0A3E", - "/guru:adakbindisign": "\u0A01", - "/guru:addak": "\u0A71", - "/guru:ai": "\u0A10", - "/guru:aisign": "\u0A48", - "/guru:au": "\u0A14", - "/guru:ausign": "\u0A4C", - "/guru:ba": "\u0A2C", - "/guru:bha": "\u0A2D", - "/guru:bindisign": "\u0A02", - "/guru:ca": "\u0A1A", - "/guru:cha": "\u0A1B", - "/guru:da": "\u0A26", - "/guru:dda": "\u0A21", - "/guru:ddha": "\u0A22", - "/guru:dha": "\u0A27", - "/guru:ee": "\u0A0F", - "/guru:eesign": "\u0A47", - "/guru:eight": "\u0A6E", - "/guru:ekonkar": "\u0A74", - "/guru:fa": "\u0A5E", - "/guru:five": "\u0A6B", - "/guru:four": "\u0A6A", - "/guru:ga": "\u0A17", - "/guru:gha": "\u0A18", - "/guru:ghha": "\u0A5A", 
- "/guru:ha": "\u0A39", - "/guru:i": "\u0A07", - "/guru:ii": "\u0A08", - "/guru:iisign": "\u0A40", - "/guru:iri": "\u0A72", - "/guru:isign": "\u0A3F", - "/guru:ja": "\u0A1C", - "/guru:jha": "\u0A1D", - "/guru:ka": "\u0A15", - "/guru:kha": "\u0A16", - "/guru:khha": "\u0A59", - "/guru:la": "\u0A32", - "/guru:lla": "\u0A33", - "/guru:ma": "\u0A2E", - "/guru:na": "\u0A28", - "/guru:nga": "\u0A19", - "/guru:nine": "\u0A6F", - "/guru:nna": "\u0A23", - "/guru:nukta": "\u0A3C", - "/guru:nya": "\u0A1E", - "/guru:one": "\u0A67", - "/guru:oo": "\u0A13", - "/guru:oosign": "\u0A4B", - "/guru:pa": "\u0A2A", - "/guru:pha": "\u0A2B", - "/guru:ra": "\u0A30", - "/guru:rra": "\u0A5C", - "/guru:sa": "\u0A38", - "/guru:seven": "\u0A6D", - "/guru:sha": "\u0A36", - "/guru:six": "\u0A6C", - "/guru:ta": "\u0A24", - "/guru:tha": "\u0A25", - "/guru:three": "\u0A69", - "/guru:tippi": "\u0A70", - "/guru:tta": "\u0A1F", - "/guru:ttha": "\u0A20", - "/guru:two": "\u0A68", - "/guru:u": "\u0A09", - "/guru:udaatsign": "\u0A51", - "/guru:ura": "\u0A73", - "/guru:usign": "\u0A41", - "/guru:uu": "\u0A0A", - "/guru:uusign": "\u0A42", - "/guru:va": "\u0A35", - "/guru:virama": "\u0A4D", - "/guru:visarga": "\u0A03", - "/guru:ya": "\u0A2F", - "/guru:yakashsign": "\u0A75", - "/guru:za": "\u0A5B", - "/guru:zero": "\u0A66", - "/gyfullwidth": "\u33C9", - "/gysquare": "\u33C9", - "/h": "\u0068", - "/h.inferior": "\u2095", - "/haabkhasiancyrillic": "\u04A9", - "/haabkhcyr": "\u04A9", - "/haaltonearabic": "\u06C1", - "/habengali": "\u09B9", - "/hacirclekatakana": "\u32E9", - "/hacyr": "\u0445", - "/hadescendercyrillic": "\u04B3", - "/hadeva": "\u0939", - "/hafullwidth": "\u33CA", - "/hagujarati": "\u0AB9", - "/hagurmukhi": "\u0A39", - "/hah": "\u062D", - "/hah.fina": "\uFEA2", - "/hah.init": "\uFEA3", - "/hah.init_alefmaksura.fina": "\uFCFF", - "/hah.init_jeem.fina": "\uFC17", - "/hah.init_jeem.medi": "\uFCA9", - "/hah.init_meem.fina": "\uFC18", - "/hah.init_meem.medi": "\uFCAA", - "/hah.init_yeh.fina": "\uFD00", 
- "/hah.isol": "\uFEA1", - "/hah.medi": "\uFEA4", - "/hah.medi_alefmaksura.fina": "\uFD1B", - "/hah.medi_jeem.medi_yeh.fina": "\uFDBF", - "/hah.medi_meem.medi_alefmaksura.fina": "\uFD5B", - "/hah.medi_meem.medi_yeh.fina": "\uFD5A", - "/hah.medi_yeh.fina": "\uFD1C", - "/hahDigitFourBelow": "\u077C", - "/hahSmallTahAbove": "\u0772", - "/hahSmallTahBelow": "\u076E", - "/hahSmallTahTwoDots": "\u076F", - "/hahThreeDotsUpBelow": "\u0758", - "/hahTwoDotsAbove": "\u0757", - "/haharabic": "\u062D", - "/hahfinalarabic": "\uFEA2", - "/hahhamza": "\u0681", - "/hahinitialarabic": "\uFEA3", - "/hahiragana": "\u306F", - "/hahmedialarabic": "\uFEA4", - "/hahookcyr": "\u04FD", - "/hahthreedotsabove": "\u0685", - "/hahtwodotsvertical": "\u0682", - "/haircut": "\u1F487", - "/hairspace": "\u200A", - "/haitusquare": "\u332A", - "/hakatakana": "\u30CF", - "/hakatakanahalfwidth": "\uFF8A", - "/halantgurmukhi": "\u0A4D", - "/halfcircleleftblack": "\u25D6", - "/halfcirclerightblack": "\u25D7", - "/hamburger": "\u1F354", - "/hammer": "\u1F528", - "/hammerAndWrench": "\u1F6E0", - "/hammerpick": "\u2692", - "/hammersickle": "\u262D", - "/hamsterFace": "\u1F439", - "/hamza": "\u0621", - "/hamzaIsol": "\uFE80", - "/hamzaabove": "\u0654", - "/hamzaarabic": "\u0621", - "/hamzabelow": "\u0655", - "/hamzadammaarabic": "\u0621", - "/hamzadammatanarabic": "\u0621", - "/hamzafathaarabic": "\u0621", - "/hamzafathatanarabic": "\u0621", - "/hamzalowarabic": "\u0621", - "/hamzalowkasraarabic": "\u0621", - "/hamzalowkasratanarabic": "\u0621", - "/hamzasukunarabic": "\u0621", - "/handbag": "\u1F45C", - "/handtailfishhookturned": "\u02AF", - "/hangulchieuchaparen": "\u3217", - "/hangulchieuchparen": "\u3209", - "/hangulcieucaparen": "\u3216", - "/hangulcieucparen": "\u3208", - "/hangulcieucuparen": "\u321C", - "/hanguldottonemarkdbl": "\u302F", - "/hangulfiller": "\u3164", - "/hangulhieuhaparen": "\u321B", - "/hangulhieuhparen": "\u320D", - "/hangulieungaparen": "\u3215", - "/hangulieungparen": "\u3207", - 
"/hangulkhieukhaparen": "\u3218", - "/hangulkhieukhparen": "\u320A", - "/hangulkiyeokaparen": "\u320E", - "/hangulkiyeokparen": "\u3200", - "/hangulmieumaparen": "\u3212", - "/hangulmieumparen": "\u3204", - "/hangulnieunaparen": "\u320F", - "/hangulnieunparen": "\u3201", - "/hangulphieuphaparen": "\u321A", - "/hangulphieuphparen": "\u320C", - "/hangulpieupaparen": "\u3213", - "/hangulpieupparen": "\u3205", - "/hangulrieulaparen": "\u3211", - "/hangulrieulparen": "\u3203", - "/hangulsingledottonemark": "\u302E", - "/hangulsiosaparen": "\u3214", - "/hangulsiosparen": "\u3206", - "/hangulthieuthaparen": "\u3219", - "/hangulthieuthparen": "\u320B", - "/hangultikeutaparen": "\u3210", - "/hangultikeutparen": "\u3202", - "/happyPersonRaisingOneHand": "\u1F64B", - "/hardDisk": "\u1F5B4", - "/hardcyr": "\u044A", - "/hardsigncyrillic": "\u044A", - "/harpoondownbarbleft": "\u21C3", - "/harpoondownbarbright": "\u21C2", - "/harpoonleftbarbdown": "\u21BD", - "/harpoonleftbarbup": "\u21BC", - "/harpoonrightbarbdown": "\u21C1", - "/harpoonrightbarbup": "\u21C0", - "/harpoonupbarbleft": "\u21BF", - "/harpoonupbarbright": "\u21BE", - "/hasquare": "\u33CA", - "/hastrokecyr": "\u04FF", - "/hatafPatah:hb": "\u05B2", - "/hatafQamats:hb": "\u05B3", - "/hatafSegol:hb": "\u05B1", - "/hatafpatah": "\u05B2", - "/hatafpatah16": "\u05B2", - "/hatafpatah23": "\u05B2", - "/hatafpatah2f": "\u05B2", - "/hatafpatahhebrew": "\u05B2", - "/hatafpatahnarrowhebrew": "\u05B2", - "/hatafpatahquarterhebrew": "\u05B2", - "/hatafpatahwidehebrew": "\u05B2", - "/hatafqamats": "\u05B3", - "/hatafqamats1b": "\u05B3", - "/hatafqamats28": "\u05B3", - "/hatafqamats34": "\u05B3", - "/hatafqamatshebrew": "\u05B3", - "/hatafqamatsnarrowhebrew": "\u05B3", - "/hatafqamatsquarterhebrew": "\u05B3", - "/hatafqamatswidehebrew": "\u05B3", - "/hatafsegol": "\u05B1", - "/hatafsegol17": "\u05B1", - "/hatafsegol24": "\u05B1", - "/hatafsegol30": "\u05B1", - "/hatafsegolhebrew": "\u05B1", - "/hatafsegolnarrowhebrew": "\u05B1", - 
"/hatafsegolquarterhebrew": "\u05B1", - "/hatafsegolwidehebrew": "\u05B1", - "/hatchingChick": "\u1F423", - "/haveideographiccircled": "\u3292", - "/haveideographicparen": "\u3232", - "/hbar": "\u0127", - "/hbopomofo": "\u310F", - "/hbrevebelow": "\u1E2B", - "/hcaron": "\u021F", - "/hcedilla": "\u1E29", - "/hcircle": "\u24D7", - "/hcircumflex": "\u0125", - "/hcsquare": "\u1F1A6", - "/hdescender": "\u2C68", - "/hdieresis": "\u1E27", - "/hdot": "\u1E23", - "/hdotaccent": "\u1E23", - "/hdotbelow": "\u1E25", - "/hdrsquare": "\u1F1A7", - "/he": "\u05D4", - "/he:hb": "\u05D4", - "/headphone": "\u1F3A7", - "/headstonegraveyard": "\u26FC", - "/hearNoEvilMonkey": "\u1F649", - "/heart": "\u2665", - "/heartArrow": "\u1F498", - "/heartDecoration": "\u1F49F", - "/heartRibbon": "\u1F49D", - "/heartTipOnTheLeft": "\u1F394", - "/heartblack": "\u2665", - "/heartsuitblack": "\u2665", - "/heartsuitwhite": "\u2661", - "/heartwhite": "\u2661", - "/heavyDollarSign": "\u1F4B2", - "/heavyLatinCross": "\u1F547", - "/heavydbldashhorz": "\u254D", - "/heavydbldashvert": "\u254F", - "/heavydn": "\u257B", - "/heavydnhorz": "\u2533", - "/heavydnleft": "\u2513", - "/heavydnright": "\u250F", - "/heavyhorz": "\u2501", - "/heavyleft": "\u2578", - "/heavyleftlightright": "\u257E", - "/heavyquaddashhorz": "\u2509", - "/heavyquaddashvert": "\u250B", - "/heavyright": "\u257A", - "/heavytrpldashhorz": "\u2505", - "/heavytrpldashvert": "\u2507", - "/heavyup": "\u2579", - "/heavyuphorz": "\u253B", - "/heavyupleft": "\u251B", - "/heavyuplightdn": "\u257F", - "/heavyupright": "\u2517", - "/heavyvert": "\u2503", - "/heavyverthorz": "\u254B", - "/heavyvertleft": "\u252B", - "/heavyvertright": "\u2523", - "/hecirclekatakana": "\u32EC", - "/hedagesh": "\uFB34", - "/hedageshhebrew": "\uFB34", - "/hedinterlacedpentagramleft": "\u26E6", - "/hedinterlacedpentagramright": "\u26E5", - "/heh": "\u0647", - "/heh.fina": "\uFEEA", - "/heh.init": "\uFEEB", - "/heh.init_alefmaksura.fina": "\uFC53", - "/heh.init_jeem.fina": 
"\uFC51", - "/heh.init_jeem.medi": "\uFCD7", - "/heh.init_meem.fina": "\uFC52", - "/heh.init_meem.medi": "\uFCD8", - "/heh.init_meem.medi_jeem.medi": "\uFD93", - "/heh.init_meem.medi_meem.medi": "\uFD94", - "/heh.init_superscriptalef.medi": "\uFCD9", - "/heh.init_yeh.fina": "\uFC54", - "/heh.isol": "\uFEE9", - "/heh.medi": "\uFEEC", - "/hehaltonearabic": "\u06C1", - "/heharabic": "\u0647", - "/hehdoachashmee": "\u06BE", - "/hehdoachashmee.fina": "\uFBAB", - "/hehdoachashmee.init": "\uFBAC", - "/hehdoachashmee.isol": "\uFBAA", - "/hehdoachashmee.medi": "\uFBAD", - "/hehebrew": "\u05D4", - "/hehfinalaltonearabic": "\uFBA7", - "/hehfinalalttwoarabic": "\uFEEA", - "/hehfinalarabic": "\uFEEA", - "/hehgoal": "\u06C1", - "/hehgoal.fina": "\uFBA7", - "/hehgoal.init": "\uFBA8", - "/hehgoal.isol": "\uFBA6", - "/hehgoal.medi": "\uFBA9", - "/hehgoalhamza": "\u06C2", - "/hehhamzaabovefinalarabic": "\uFBA5", - "/hehhamzaaboveisolatedarabic": "\uFBA4", - "/hehinitialaltonearabic": "\uFBA8", - "/hehinitialarabic": "\uFEEB", - "/hehinvertedV": "\u06FF", - "/hehiragana": "\u3078", - "/hehmedialaltonearabic": "\uFBA9", - "/hehmedialarabic": "\uFEEC", - "/hehyeh": "\u06C0", - "/hehyeh.fina": "\uFBA5", - "/hehyeh.isol": "\uFBA4", - "/heiseierasquare": "\u337B", - "/hekatakana": "\u30D8", - "/hekatakanahalfwidth": "\uFF8D", - "/hekutaarusquare": "\u3336", - "/helicopter": "\u1F681", - "/helm": "\u2388", - "/helmetcrosswhite": "\u26D1", - "/heng": "\uA727", - "/henghook": "\u0267", - "/herb": "\u1F33F", - "/hermitianconjugatematrix": "\u22B9", - "/herutusquare": "\u3339", - "/het": "\u05D7", - "/het:hb": "\u05D7", - "/heta": "\u0371", - "/hethebrew": "\u05D7", - "/hewide:hb": "\uFB23", - "/hewithmapiq:hb": "\uFB34", - "/hfishhookturned": "\u02AE", - "/hhalf": "\u2C76", - "/hhook": "\u0266", - "/hhooksuperior": "\u02B1", - "/hhooksupmod": "\u02B1", - "/hi-ressquare": "\u1F1A8", - "/hibiscus": "\u1F33A", - "/hicirclekatakana": "\u32EA", - "/hieuhacirclekorean": "\u327B", - 
"/hieuhaparenkorean": "\u321B", - "/hieuhcirclekorean": "\u326D", - "/hieuhkorean": "\u314E", - "/hieuhparenkorean": "\u320D", - "/high-heeledShoe": "\u1F460", - "/highBrightness": "\u1F506", - "/highSpeedTrain": "\u1F684", - "/highSpeedTrainWithBulletNose": "\u1F685", - "/highhamza": "\u0674", - "/highideographiccircled": "\u32A4", - "/highvoltage": "\u26A1", - "/hihiragana": "\u3072", - "/hikatakana": "\u30D2", - "/hikatakanahalfwidth": "\uFF8B", - "/hira:a": "\u3042", - "/hira:asmall": "\u3041", - "/hira:ba": "\u3070", - "/hira:be": "\u3079", - "/hira:bi": "\u3073", - "/hira:bo": "\u307C", - "/hira:bu": "\u3076", - "/hira:da": "\u3060", - "/hira:de": "\u3067", - "/hira:di": "\u3062", - "/hira:digraphyori": "\u309F", - "/hira:do": "\u3069", - "/hira:du": "\u3065", - "/hira:e": "\u3048", - "/hira:esmall": "\u3047", - "/hira:ga": "\u304C", - "/hira:ge": "\u3052", - "/hira:gi": "\u304E", - "/hira:go": "\u3054", - "/hira:gu": "\u3050", - "/hira:ha": "\u306F", - "/hira:he": "\u3078", - "/hira:hi": "\u3072", - "/hira:ho": "\u307B", - "/hira:hu": "\u3075", - "/hira:i": "\u3044", - "/hira:ismall": "\u3043", - "/hira:iterationhiragana": "\u309D", - "/hira:ka": "\u304B", - "/hira:kasmall": "\u3095", - "/hira:ke": "\u3051", - "/hira:kesmall": "\u3096", - "/hira:ki": "\u304D", - "/hira:ko": "\u3053", - "/hira:ku": "\u304F", - "/hira:ma": "\u307E", - "/hira:me": "\u3081", - "/hira:mi": "\u307F", - "/hira:mo": "\u3082", - "/hira:mu": "\u3080", - "/hira:n": "\u3093", - "/hira:na": "\u306A", - "/hira:ne": "\u306D", - "/hira:ni": "\u306B", - "/hira:no": "\u306E", - "/hira:nu": "\u306C", - "/hira:o": "\u304A", - "/hira:osmall": "\u3049", - "/hira:pa": "\u3071", - "/hira:pe": "\u307A", - "/hira:pi": "\u3074", - "/hira:po": "\u307D", - "/hira:pu": "\u3077", - "/hira:ra": "\u3089", - "/hira:re": "\u308C", - "/hira:ri": "\u308A", - "/hira:ro": "\u308D", - "/hira:ru": "\u308B", - "/hira:sa": "\u3055", - "/hira:se": "\u305B", - "/hira:semivoicedmarkkana": "\u309C", - 
"/hira:semivoicedmarkkanacmb": "\u309A", - "/hira:si": "\u3057", - "/hira:so": "\u305D", - "/hira:su": "\u3059", - "/hira:ta": "\u305F", - "/hira:te": "\u3066", - "/hira:ti": "\u3061", - "/hira:to": "\u3068", - "/hira:tu": "\u3064", - "/hira:tusmall": "\u3063", - "/hira:u": "\u3046", - "/hira:usmall": "\u3045", - "/hira:voicediterationhiragana": "\u309E", - "/hira:voicedmarkkana": "\u309B", - "/hira:voicedmarkkanacmb": "\u3099", - "/hira:vu": "\u3094", - "/hira:wa": "\u308F", - "/hira:wasmall": "\u308E", - "/hira:we": "\u3091", - "/hira:wi": "\u3090", - "/hira:wo": "\u3092", - "/hira:ya": "\u3084", - "/hira:yasmall": "\u3083", - "/hira:yo": "\u3088", - "/hira:yosmall": "\u3087", - "/hira:yu": "\u3086", - "/hira:yusmall": "\u3085", - "/hira:za": "\u3056", - "/hira:ze": "\u305C", - "/hira:zi": "\u3058", - "/hira:zo": "\u305E", - "/hira:zu": "\u305A", - "/hiriq": "\u05B4", - "/hiriq14": "\u05B4", - "/hiriq21": "\u05B4", - "/hiriq2d": "\u05B4", - "/hiriq:hb": "\u05B4", - "/hiriqhebrew": "\u05B4", - "/hiriqnarrowhebrew": "\u05B4", - "/hiriqquarterhebrew": "\u05B4", - "/hiriqwidehebrew": "\u05B4", - "/historicsite": "\u26EC", - "/hlinebelow": "\u1E96", - "/hmonospace": "\uFF48", - "/hoarmenian": "\u0570", - "/hocho": "\u1F52A", - "/hocirclekatakana": "\u32ED", - "/hohipthai": "\u0E2B", - "/hohiragana": "\u307B", - "/hokatakana": "\u30DB", - "/hokatakanahalfwidth": "\uFF8E", - "/holam": "\u05B9", - "/holam19": "\u05B9", - "/holam26": "\u05B9", - "/holam32": "\u05B9", - "/holam:hb": "\u05B9", - "/holamHaser:hb": "\u05BA", - "/holamhebrew": "\u05B9", - "/holamnarrowhebrew": "\u05B9", - "/holamquarterhebrew": "\u05B9", - "/holamwidehebrew": "\u05B9", - "/hole": "\u1F573", - "/homotic": "\u223B", - "/honeyPot": "\u1F36F", - "/honeybee": "\u1F41D", - "/honokhukthai": "\u0E2E", - "/honsquare": "\u333F", - "/hook": "\u2440", - "/hookabovecomb": "\u0309", - "/hookcmb": "\u0309", - "/hookpalatalizedbelowcmb": "\u0321", - "/hookretroflexbelowcmb": "\u0322", - "/hoonsquare": 
"\u3342", - "/hoorusquare": "\u3341", - "/horicoptic": "\u03E9", - "/horizontalTrafficLight": "\u1F6A5", - "/horizontalbar": "\u2015", - "/horizontalbarwhitearrowonpedestalup": "\u21EC", - "/horizontalmalestroke": "\u26A9", - "/horncmb": "\u031B", - "/horse": "\u1F40E", - "/horseFace": "\u1F434", - "/horseRacing": "\u1F3C7", - "/hospital": "\u1F3E5", - "/hotDog": "\u1F32D", - "/hotPepper": "\u1F336", - "/hotbeverage": "\u2615", - "/hotel": "\u1F3E8", - "/hotsprings": "\u2668", - "/hourglass": "\u231B", - "/hourglassflowings": "\u23F3", - "/house": "\u2302", - "/houseBuilding": "\u1F3E0", - "/houseBuildings": "\u1F3D8", - "/houseGarden": "\u1F3E1", - "/hpafullwidth": "\u3371", - "/hpalatalhook": "\uA795", - "/hparen": "\u24A3", - "/hparenthesized": "\u24A3", - "/hpfullwidth": "\u33CB", - "/hryvnia": "\u20B4", - "/hsuperior": "\u02B0", - "/hsupmod": "\u02B0", - "/hturned": "\u0265", - "/htypeopencircuit": "\u238F", - "/huaraddosquare": "\u3332", - "/hucirclekatakana": "\u32EB", - "/huhiragana": "\u3075", - "/huiitosquare": "\u3333", - "/hukatakana": "\u30D5", - "/hukatakanahalfwidth": "\uFF8C", - "/hundredPoints": "\u1F4AF", - "/hundredthousandscmbcyr": "\u0488", - "/hungarumlaut": "\u02DD", - "/hungarumlautcmb": "\u030B", - "/huransquare": "\u3335", - "/hushedFace": "\u1F62F", - "/hv": "\u0195", - "/hwd:a": "\uFFC2", - "/hwd:ae": "\uFFC3", - "/hwd:blacksquare": "\uFFED", - "/hwd:chieuch": "\uFFBA", - "/hwd:cieuc": "\uFFB8", - "/hwd:downwardsarrow": "\uFFEC", - "/hwd:e": "\uFFC7", - "/hwd:eo": "\uFFC6", - "/hwd:eu": "\uFFDA", - "/hwd:formslightvertical": "\uFFE8", - "/hwd:hangulfiller": "\uFFA0", - "/hwd:hieuh": "\uFFBE", - "/hwd:i": "\uFFDC", - "/hwd:ideographiccomma": "\uFF64", - "/hwd:ideographicfullstop": "\uFF61", - "/hwd:ieung": "\uFFB7", - "/hwd:kata:a": "\uFF71", - "/hwd:kata:asmall": "\uFF67", - "/hwd:kata:e": "\uFF74", - "/hwd:kata:esmall": "\uFF6A", - "/hwd:kata:ha": "\uFF8A", - "/hwd:kata:he": "\uFF8D", - "/hwd:kata:hi": "\uFF8B", - "/hwd:kata:ho": 
"\uFF8E", - "/hwd:kata:hu": "\uFF8C", - "/hwd:kata:i": "\uFF72", - "/hwd:kata:ismall": "\uFF68", - "/hwd:kata:ka": "\uFF76", - "/hwd:kata:ke": "\uFF79", - "/hwd:kata:ki": "\uFF77", - "/hwd:kata:ko": "\uFF7A", - "/hwd:kata:ku": "\uFF78", - "/hwd:kata:ma": "\uFF8F", - "/hwd:kata:me": "\uFF92", - "/hwd:kata:mi": "\uFF90", - "/hwd:kata:middledot": "\uFF65", - "/hwd:kata:mo": "\uFF93", - "/hwd:kata:mu": "\uFF91", - "/hwd:kata:n": "\uFF9D", - "/hwd:kata:na": "\uFF85", - "/hwd:kata:ne": "\uFF88", - "/hwd:kata:ni": "\uFF86", - "/hwd:kata:no": "\uFF89", - "/hwd:kata:nu": "\uFF87", - "/hwd:kata:o": "\uFF75", - "/hwd:kata:osmall": "\uFF6B", - "/hwd:kata:prolongedkana": "\uFF70", - "/hwd:kata:ra": "\uFF97", - "/hwd:kata:re": "\uFF9A", - "/hwd:kata:ri": "\uFF98", - "/hwd:kata:ro": "\uFF9B", - "/hwd:kata:ru": "\uFF99", - "/hwd:kata:sa": "\uFF7B", - "/hwd:kata:se": "\uFF7E", - "/hwd:kata:semi-voiced": "\uFF9F", - "/hwd:kata:si": "\uFF7C", - "/hwd:kata:so": "\uFF7F", - "/hwd:kata:su": "\uFF7D", - "/hwd:kata:ta": "\uFF80", - "/hwd:kata:te": "\uFF83", - "/hwd:kata:ti": "\uFF81", - "/hwd:kata:to": "\uFF84", - "/hwd:kata:tu": "\uFF82", - "/hwd:kata:tusmall": "\uFF6F", - "/hwd:kata:u": "\uFF73", - "/hwd:kata:usmall": "\uFF69", - "/hwd:kata:voiced": "\uFF9E", - "/hwd:kata:wa": "\uFF9C", - "/hwd:kata:wo": "\uFF66", - "/hwd:kata:ya": "\uFF94", - "/hwd:kata:yasmall": "\uFF6C", - "/hwd:kata:yo": "\uFF96", - "/hwd:kata:yosmall": "\uFF6E", - "/hwd:kata:yu": "\uFF95", - "/hwd:kata:yusmall": "\uFF6D", - "/hwd:khieukh": "\uFFBB", - "/hwd:kiyeok": "\uFFA1", - "/hwd:kiyeoksios": "\uFFA3", - "/hwd:leftcornerbracket": "\uFF62", - "/hwd:leftwardsarrow": "\uFFE9", - "/hwd:mieum": "\uFFB1", - "/hwd:nieun": "\uFFA4", - "/hwd:nieuncieuc": "\uFFA5", - "/hwd:nieunhieuh": "\uFFA6", - "/hwd:o": "\uFFCC", - "/hwd:oe": "\uFFCF", - "/hwd:phieuph": "\uFFBD", - "/hwd:pieup": "\uFFB2", - "/hwd:pieupsios": "\uFFB4", - "/hwd:rieul": "\uFFA9", - "/hwd:rieulhieuh": "\uFFB0", - "/hwd:rieulkiyeok": "\uFFAA", - 
"/hwd:rieulmieum": "\uFFAB", - "/hwd:rieulphieuph": "\uFFAF", - "/hwd:rieulpieup": "\uFFAC", - "/hwd:rieulsios": "\uFFAD", - "/hwd:rieulthieuth": "\uFFAE", - "/hwd:rightcornerbracket": "\uFF63", - "/hwd:rightwardsarrow": "\uFFEB", - "/hwd:sios": "\uFFB5", - "/hwd:ssangcieuc": "\uFFB9", - "/hwd:ssangkiyeok": "\uFFA2", - "/hwd:ssangpieup": "\uFFB3", - "/hwd:ssangsios": "\uFFB6", - "/hwd:ssangtikeut": "\uFFA8", - "/hwd:thieuth": "\uFFBC", - "/hwd:tikeut": "\uFFA7", - "/hwd:u": "\uFFD3", - "/hwd:upwardsarrow": "\uFFEA", - "/hwd:wa": "\uFFCD", - "/hwd:wae": "\uFFCE", - "/hwd:we": "\uFFD5", - "/hwd:weo": "\uFFD4", - "/hwd:whitecircle": "\uFFEE", - "/hwd:wi": "\uFFD6", - "/hwd:ya": "\uFFC4", - "/hwd:yae": "\uFFC5", - "/hwd:ye": "\uFFCB", - "/hwd:yeo": "\uFFCA", - "/hwd:yi": "\uFFDB", - "/hwd:yo": "\uFFD2", - "/hwd:yu": "\uFFD7", - "/hyphen": "\u002D", - "/hyphenationpoint": "\u2027", - "/hyphenbullet": "\u2043", - "/hyphendbl": "\u2E40", - "/hyphendbloblique": "\u2E17", - "/hyphendieresis": "\u2E1A", - "/hypheninferior": "\uF6E5", - "/hyphenminus": "\u002D", - "/hyphenmonospace": "\uFF0D", - "/hyphensmall": "\uFE63", - "/hyphensoft": "\u00AD", - "/hyphensuperior": "\uF6E6", - "/hyphentwo": "\u2010", - "/hypodiastole": "\u2E12", - "/hysteresis": "\u238E", - "/hzfullwidth": "\u3390", - "/i": "\u0069", - "/i.superior": "\u2071", - "/iacute": "\u00ED", - "/iacyrillic": "\u044F", - "/iaepigraphic": "\uA7FE", - "/ibengali": "\u0987", - "/ibopomofo": "\u3127", - "/ibreve": "\u012D", - "/icaron": "\u01D0", - "/iceCream": "\u1F368", - "/iceHockeyStickAndPuck": "\u1F3D2", - "/iceskate": "\u26F8", - "/icircle": "\u24D8", - "/icirclekatakana": "\u32D1", - "/icircumflex": "\u00EE", - "/icyr": "\u0438", - "/icyrillic": "\u0456", - "/idblgrave": "\u0209", - "/idblstruckitalic": "\u2148", - "/ideographearthcircle": "\u328F", - "/ideographfirecircle": "\u328B", - "/ideographicallianceparen": "\u323F", - "/ideographiccallparen": "\u323A", - "/ideographiccentrecircle": "\u32A5", - 
"/ideographicclose": "\u3006", - "/ideographiccomma": "\u3001", - "/ideographiccommaleft": "\uFF64", - "/ideographiccongratulationparen": "\u3237", - "/ideographiccorrectcircle": "\u32A3", - "/ideographicdepartingtonemark": "\u302C", - "/ideographicearthparen": "\u322F", - "/ideographicenteringtonemark": "\u302D", - "/ideographicenterpriseparen": "\u323D", - "/ideographicexcellentcircle": "\u329D", - "/ideographicfestivalparen": "\u3240", - "/ideographicfinancialcircle": "\u3296", - "/ideographicfinancialparen": "\u3236", - "/ideographicfireparen": "\u322B", - "/ideographichalffillspace": "\u303F", - "/ideographichaveparen": "\u3232", - "/ideographichighcircle": "\u32A4", - "/ideographiciterationmark": "\u3005", - "/ideographiclaborcircle": "\u3298", - "/ideographiclaborparen": "\u3238", - "/ideographicleftcircle": "\u32A7", - "/ideographicleveltonemark": "\u302A", - "/ideographiclowcircle": "\u32A6", - "/ideographicmedicinecircle": "\u32A9", - "/ideographicmetalparen": "\u322E", - "/ideographicmoonparen": "\u322A", - "/ideographicnameparen": "\u3234", - "/ideographicperiod": "\u3002", - "/ideographicprintcircle": "\u329E", - "/ideographicreachparen": "\u3243", - "/ideographicrepresentparen": "\u3239", - "/ideographicresourceparen": "\u323E", - "/ideographicrightcircle": "\u32A8", - "/ideographicrisingtonemark": "\u302B", - "/ideographicsecretcircle": "\u3299", - "/ideographicselfparen": "\u3242", - "/ideographicsocietyparen": "\u3233", - "/ideographicspace": "\u3000", - "/ideographicspecialparen": "\u3235", - "/ideographicstockparen": "\u3231", - "/ideographicstudyparen": "\u323B", - "/ideographicsunparen": "\u3230", - "/ideographicsuperviseparen": "\u323C", - "/ideographictelegraphlinefeedseparatorsymbol": "\u3037", - "/ideographictelegraphsymbolforhoureight": "\u3360", - "/ideographictelegraphsymbolforhoureighteen": "\u336A", - "/ideographictelegraphsymbolforhoureleven": "\u3363", - "/ideographictelegraphsymbolforhourfifteen": "\u3367", - 
"/ideographictelegraphsymbolforhourfive": "\u335D", - "/ideographictelegraphsymbolforhourfour": "\u335C", - "/ideographictelegraphsymbolforhourfourteen": "\u3366", - "/ideographictelegraphsymbolforhournine": "\u3361", - "/ideographictelegraphsymbolforhournineteen": "\u336B", - "/ideographictelegraphsymbolforhourone": "\u3359", - "/ideographictelegraphsymbolforhourseven": "\u335F", - "/ideographictelegraphsymbolforhourseventeen": "\u3369", - "/ideographictelegraphsymbolforhoursix": "\u335E", - "/ideographictelegraphsymbolforhoursixteen": "\u3368", - "/ideographictelegraphsymbolforhourten": "\u3362", - "/ideographictelegraphsymbolforhourthirteen": "\u3365", - "/ideographictelegraphsymbolforhourthree": "\u335B", - "/ideographictelegraphsymbolforhourtwelve": "\u3364", - "/ideographictelegraphsymbolforhourtwenty": "\u336C", - "/ideographictelegraphsymbolforhourtwentyfour": "\u3370", - "/ideographictelegraphsymbolforhourtwentyone": "\u336D", - "/ideographictelegraphsymbolforhourtwentythree": "\u336F", - "/ideographictelegraphsymbolforhourtwentytwo": "\u336E", - "/ideographictelegraphsymbolforhourtwo": "\u335A", - "/ideographictelegraphsymbolforhourzero": "\u3358", - "/ideographicvariationindicator": "\u303E", - "/ideographicwaterparen": "\u322C", - "/ideographicwoodparen": "\u322D", - "/ideographiczero": "\u3007", - "/ideographmetalcircle": "\u328E", - "/ideographmooncircle": "\u328A", - "/ideographnamecircle": "\u3294", - "/ideographsuncircle": "\u3290", - "/ideographwatercircle": "\u328C", - "/ideographwoodcircle": "\u328D", - "/ideva": "\u0907", - "/idieresis": "\u00EF", - "/idieresisacute": "\u1E2F", - "/idieresiscyr": "\u04E5", - "/idieresiscyrillic": "\u04E5", - "/idotbelow": "\u1ECB", - "/idsquare": "\u1F194", - "/iebrevecyr": "\u04D7", - "/iebrevecyrillic": "\u04D7", - "/iecyr": "\u0435", - "/iecyrillic": "\u0435", - "/iegravecyr": "\u0450", - "/iepigraphicsideways": "\uA7F7", - "/ieungacirclekorean": "\u3275", - "/ieungaparenkorean": "\u3215", - 
"/ieungcirclekorean": "\u3267", - "/ieungkorean": "\u3147", - "/ieungparenkorean": "\u3207", - "/ieungucirclekorean": "\u327E", - "/igrave": "\u00EC", - "/igravecyr": "\u045D", - "/igravedbl": "\u0209", - "/igujarati": "\u0A87", - "/igurmukhi": "\u0A07", - "/ihiragana": "\u3044", - "/ihoi": "\u1EC9", - "/ihookabove": "\u1EC9", - "/iibengali": "\u0988", - "/iicyrillic": "\u0438", - "/iideva": "\u0908", - "/iigujarati": "\u0A88", - "/iigurmukhi": "\u0A08", - "/iimatragurmukhi": "\u0A40", - "/iinvertedbreve": "\u020B", - "/iishortcyrillic": "\u0439", - "/iivowelsignbengali": "\u09C0", - "/iivowelsigndeva": "\u0940", - "/iivowelsigngujarati": "\u0AC0", - "/ij": "\u0133", - "/ikatakana": "\u30A4", - "/ikatakanahalfwidth": "\uFF72", - "/ikawi": "\uA985", - "/ikorean": "\u3163", - "/ilde": "\u02DC", - "/iluy:hb": "\u05AC", - "/iluyhebrew": "\u05AC", - "/imacron": "\u012B", - "/imacroncyr": "\u04E3", - "/imacroncyrillic": "\u04E3", - "/image": "\u22B7", - "/imageorapproximatelyequal": "\u2253", - "/imatragurmukhi": "\u0A3F", - "/imonospace": "\uFF49", - "/imp": "\u1F47F", - "/inboxTray": "\u1F4E5", - "/incomingEnvelope": "\u1F4E8", - "/increaseFontSize": "\u1F5DA", - "/increment": "\u2206", - "/indianrupee": "\u20B9", - "/infinity": "\u221E", - "/information": "\u2139", - "/infullwidth": "\u33CC", - "/inhibitarabicformshaping": "\u206C", - "/inhibitsymmetricswapping": "\u206A", - "/iniarmenian": "\u056B", - "/iningusquare": "\u3304", - "/inmationDeskPerson": "\u1F481", - "/inputLatinCapitalLetters": "\u1F520", - "/inputLatinLetters": "\u1F524", - "/inputLatinSmallLetters": "\u1F521", - "/inputNumbers": "\u1F522", - "/inputS": "\u1F523", - "/insertion": "\u2380", - "/integral": "\u222B", - "/integralbottom": "\u2321", - "/integralbt": "\u2321", - "/integralclockwise": "\u2231", - "/integralcontour": "\u222E", - "/integralcontouranticlockwise": "\u2233", - "/integralcontourclockwise": "\u2232", - "/integraldbl": "\u222C", - "/integralex": "\uF8F5", - "/integralextension": 
"\u23AE", - "/integralsurface": "\u222F", - "/integraltop": "\u2320", - "/integraltp": "\u2320", - "/integraltpl": "\u222D", - "/integralvolume": "\u2230", - "/intercalate": "\u22BA", - "/interlinearanchor": "\uFFF9", - "/interlinearseparator": "\uFFFA", - "/interlinearterminator": "\uFFFB", - "/interlockedfemalemale": "\u26A4", - "/interrobang": "\u203D", - "/interrobanginverted": "\u2E18", - "/intersection": "\u2229", - "/intersectionarray": "\u22C2", - "/intersectiondbl": "\u22D2", - "/intisquare": "\u3305", - "/invbullet": "\u25D8", - "/invcircle": "\u25D9", - "/inverteddamma": "\u0657", - "/invertedfork": "\u2443", - "/invertedpentagram": "\u26E7", - "/invertedundertie": "\u2054", - "/invisibleplus": "\u2064", - "/invisibleseparator": "\u2063", - "/invisibletimes": "\u2062", - "/invsmileface": "\u263B", - "/iocyr": "\u0451", - "/iocyrillic": "\u0451", - "/iogonek": "\u012F", - "/iota": "\u03B9", - "/iotaacute": "\u1F77", - "/iotaadscript": "\u1FBE", - "/iotaasper": "\u1F31", - "/iotaasperacute": "\u1F35", - "/iotaaspergrave": "\u1F33", - "/iotaaspertilde": "\u1F37", - "/iotabreve": "\u1FD0", - "/iotadieresis": "\u03CA", - "/iotadieresisacute": "\u1FD3", - "/iotadieresisgrave": "\u1FD2", - "/iotadieresistilde": "\u1FD7", - "/iotadieresistonos": "\u0390", - "/iotafunc": "\u2373", - "/iotagrave": "\u1F76", - "/iotalatin": "\u0269", - "/iotalenis": "\u1F30", - "/iotalenisacute": "\u1F34", - "/iotalenisgrave": "\u1F32", - "/iotalenistilde": "\u1F36", - "/iotasub": "\u037A", - "/iotatilde": "\u1FD6", - "/iotatonos": "\u03AF", - "/iotaturned": "\u2129", - "/iotaunderlinefunc": "\u2378", - "/iotawithmacron": "\u1FD1", - "/ipa:Ismall": "\u026A", - "/ipa:alpha": "\u0251", - "/ipa:ereversed": "\u0258", - "/ipa:esh": "\u0283", - "/ipa:gamma": "\u0263", - "/ipa:glottalstop": "\u0294", - "/ipa:gscript": "\u0261", - "/ipa:iota": "\u0269", - "/ipa:phi": "\u0278", - "/ipa:rtail": "\u027D", - "/ipa:schwa": "\u0259", - "/ipa:upsilon": "\u028A", - "/iparen": "\u24A4", - 
"/iparenthesized": "\u24A4", - "/irigurmukhi": "\u0A72", - "/is": "\uA76D", - "/isen-isenpada": "\uA9DF", - "/ishortcyr": "\u0439", - "/ishortsharptailcyr": "\u048B", - "/ismallhiragana": "\u3043", - "/ismallkatakana": "\u30A3", - "/ismallkatakanahalfwidth": "\uFF68", - "/issharbengali": "\u09FA", - "/istroke": "\u0268", - "/isuperior": "\uF6ED", - "/itemideographiccircled": "\u32A0", - "/iterationhiragana": "\u309D", - "/iterationkatakana": "\u30FD", - "/itilde": "\u0129", - "/itildebelow": "\u1E2D", - "/iubopomofo": "\u3129", - "/iucyrillic": "\u044E", - "/iufullwidth": "\u337A", - "/iukrcyr": "\u0456", - "/ivowelsignbengali": "\u09BF", - "/ivowelsigndeva": "\u093F", - "/ivowelsigngujarati": "\u0ABF", - "/izakayaLantern": "\u1F3EE", - "/izhitsacyr": "\u0475", - "/izhitsacyrillic": "\u0475", - "/izhitsadblgravecyrillic": "\u0477", - "/izhitsagravedblcyr": "\u0477", - "/j": "\u006A", - "/j.inferior": "\u2C7C", - "/jaarmenian": "\u0571", - "/jabengali": "\u099C", - "/jackOLantern": "\u1F383", - "/jadeva": "\u091C", - "/jagujarati": "\u0A9C", - "/jagurmukhi": "\u0A1C", - "/jamahaprana": "\uA999", - "/januarytelegraph": "\u32C0", - "/japaneseBeginner": "\u1F530", - "/japaneseCastle": "\u1F3EF", - "/japaneseDolls": "\u1F38E", - "/japaneseGoblin": "\u1F47A", - "/japaneseOgre": "\u1F479", - "/japanesePostOffice": "\u1F3E3", - "/japanesebank": "\u26FB", - "/java:a": "\uA984", - "/java:ai": "\uA98D", - "/java:ba": "\uA9A7", - "/java:ca": "\uA995", - "/java:da": "\uA9A2", - "/java:dda": "\uA99D", - "/java:e": "\uA98C", - "/java:eight": "\uA9D8", - "/java:five": "\uA9D5", - "/java:four": "\uA9D4", - "/java:ga": "\uA992", - "/java:ha": "\uA9B2", - "/java:i": "\uA986", - "/java:ii": "\uA987", - "/java:ja": "\uA997", - "/java:ka": "\uA98F", - "/java:la": "\uA9AD", - "/java:ma": "\uA9A9", - "/java:na": "\uA9A4", - "/java:nga": "\uA994", - "/java:nine": "\uA9D9", - "/java:nya": "\uA99A", - "/java:o": "\uA98E", - "/java:one": "\uA9D1", - "/java:pa": "\uA9A5", - "/java:ra": 
"\uA9AB", - "/java:sa": "\uA9B1", - "/java:seven": "\uA9D7", - "/java:six": "\uA9D6", - "/java:ta": "\uA9A0", - "/java:three": "\uA9D3", - "/java:tta": "\uA99B", - "/java:two": "\uA9D2", - "/java:u": "\uA988", - "/java:wa": "\uA9AE", - "/java:ya": "\uA9AA", - "/java:zero": "\uA9D0", - "/jbopomofo": "\u3110", - "/jcaron": "\u01F0", - "/jcircle": "\u24D9", - "/jcircumflex": "\u0135", - "/jcrossedtail": "\u029D", - "/jdblstruckitalic": "\u2149", - "/jdotlessstroke": "\u025F", - "/jeans": "\u1F456", - "/jecyr": "\u0458", - "/jecyrillic": "\u0458", - "/jeem": "\u062C", - "/jeem.fina": "\uFE9E", - "/jeem.init": "\uFE9F", - "/jeem.init_alefmaksura.fina": "\uFD01", - "/jeem.init_hah.fina": "\uFC15", - "/jeem.init_hah.medi": "\uFCA7", - "/jeem.init_meem.fina": "\uFC16", - "/jeem.init_meem.medi": "\uFCA8", - "/jeem.init_meem.medi_hah.medi": "\uFD59", - "/jeem.init_yeh.fina": "\uFD02", - "/jeem.isol": "\uFE9D", - "/jeem.medi": "\uFEA0", - "/jeem.medi_alefmaksura.fina": "\uFD1D", - "/jeem.medi_hah.medi_alefmaksura.fina": "\uFDA6", - "/jeem.medi_hah.medi_yeh.fina": "\uFDBE", - "/jeem.medi_meem.medi_alefmaksura.fina": "\uFDA7", - "/jeem.medi_meem.medi_hah.fina": "\uFD58", - "/jeem.medi_meem.medi_yeh.fina": "\uFDA5", - "/jeem.medi_yeh.fina": "\uFD1E", - "/jeemabove": "\u06DA", - "/jeemarabic": "\u062C", - "/jeemfinalarabic": "\uFE9E", - "/jeeminitialarabic": "\uFE9F", - "/jeemmedialarabic": "\uFEA0", - "/jeh": "\u0698", - "/jeh.fina": "\uFB8B", - "/jeh.isol": "\uFB8A", - "/jeharabic": "\u0698", - "/jehfinalarabic": "\uFB8B", - "/jhabengali": "\u099D", - "/jhadeva": "\u091D", - "/jhagujarati": "\u0A9D", - "/jhagurmukhi": "\u0A1D", - "/jheharmenian": "\u057B", - "/jis": "\u3004", - "/jiterup": "\u2643", - "/jmonospace": "\uFF4A", - "/jotdiaeresisfunc": "\u2364", - "/jotunderlinefunc": "\u235B", - "/joystick": "\u1F579", - "/jparen": "\u24A5", - "/jparenthesized": "\u24A5", - "/jstroke": "\u0249", - "/jsuperior": "\u02B2", - "/jsupmod": "\u02B2", - "/jueuicircle": "\u327D", - 
"/julytelegraph": "\u32C6", - "/junetelegraph": "\u32C5", - "/juno": "\u26B5", - "/k": "\u006B", - "/k.inferior": "\u2096", - "/kaaba": "\u1F54B", - "/kaaleutcyr": "\u051F", - "/kabashkcyr": "\u04A1", - "/kabashkircyrillic": "\u04A1", - "/kabengali": "\u0995", - "/kacirclekatakana": "\u32D5", - "/kacute": "\u1E31", - "/kacyr": "\u043A", - "/kacyrillic": "\u043A", - "/kadescendercyrillic": "\u049B", - "/kadeva": "\u0915", - "/kaf": "\u05DB", - "/kaf.fina": "\uFEDA", - "/kaf.init": "\uFEDB", - "/kaf.init_alef.fina": "\uFC37", - "/kaf.init_alefmaksura.fina": "\uFC3D", - "/kaf.init_hah.fina": "\uFC39", - "/kaf.init_hah.medi": "\uFCC5", - "/kaf.init_jeem.fina": "\uFC38", - "/kaf.init_jeem.medi": "\uFCC4", - "/kaf.init_khah.fina": "\uFC3A", - "/kaf.init_khah.medi": "\uFCC6", - "/kaf.init_lam.fina": "\uFC3B", - "/kaf.init_lam.medi": "\uFCC7", - "/kaf.init_meem.fina": "\uFC3C", - "/kaf.init_meem.medi": "\uFCC8", - "/kaf.init_meem.medi_meem.medi": "\uFDC3", - "/kaf.init_yeh.fina": "\uFC3E", - "/kaf.isol": "\uFED9", - "/kaf.medi": "\uFEDC", - "/kaf.medi_alef.fina": "\uFC80", - "/kaf.medi_alefmaksura.fina": "\uFC83", - "/kaf.medi_lam.fina": "\uFC81", - "/kaf.medi_lam.medi": "\uFCEB", - "/kaf.medi_meem.fina": "\uFC82", - "/kaf.medi_meem.medi": "\uFCEC", - "/kaf.medi_meem.medi_meem.fina": "\uFDBB", - "/kaf.medi_meem.medi_yeh.fina": "\uFDB7", - "/kaf.medi_yeh.fina": "\uFC84", - "/kaf:hb": "\u05DB", - "/kafTwoDotsAbove": "\u077F", - "/kafarabic": "\u0643", - "/kafdagesh": "\uFB3B", - "/kafdageshhebrew": "\uFB3B", - "/kafdotabove": "\u06AC", - "/kaffinalarabic": "\uFEDA", - "/kafhebrew": "\u05DB", - "/kafinitialarabic": "\uFEDB", - "/kafmedialarabic": "\uFEDC", - "/kafrafehebrew": "\uFB4D", - "/kafring": "\u06AB", - "/kafswash": "\u06AA", - "/kafthreedotsbelow": "\u06AE", - "/kafullwidth": "\u3384", - "/kafwide:hb": "\uFB24", - "/kafwithdagesh:hb": "\uFB3B", - "/kafwithrafe:hb": "\uFB4D", - "/kagujarati": "\u0A95", - "/kagurmukhi": "\u0A15", - "/kahiragana": "\u304B", - 
"/kahookcyr": "\u04C4", - "/kahookcyrillic": "\u04C4", - "/kairisquare": "\u330B", - "/kaisymbol": "\u03D7", - "/kakatakana": "\u30AB", - "/kakatakanahalfwidth": "\uFF76", - "/kamurda": "\uA991", - "/kappa": "\u03BA", - "/kappa.math": "\u03F0", - "/kappasymbolgreek": "\u03F0", - "/kapyeounmieumkorean": "\u3171", - "/kapyeounphieuphkorean": "\u3184", - "/kapyeounpieupkorean": "\u3178", - "/kapyeounssangpieupkorean": "\u3179", - "/karattosquare": "\u330C", - "/karoriisquare": "\u330D", - "/kasasak": "\uA990", - "/kashida": "\u0640", - "/kashidaFina": "\uFE73", - "/kashidaautoarabic": "\u0640", - "/kashidaautonosidebearingarabic": "\u0640", - "/kashmiriyeh": "\u0620", - "/kasmallkatakana": "\u30F5", - "/kasquare": "\u3384", - "/kasra": "\u0650", - "/kasraIsol": "\uFE7A", - "/kasraMedi": "\uFE7B", - "/kasraarabic": "\u0650", - "/kasrasmall": "\u061A", - "/kasratan": "\u064D", - "/kasratanIsol": "\uFE74", - "/kasratanarabic": "\u064D", - "/kastrokecyr": "\u049F", - "/kastrokecyrillic": "\u049F", - "/kata:a": "\u30A2", - "/kata:asmall": "\u30A1", - "/kata:ba": "\u30D0", - "/kata:be": "\u30D9", - "/kata:bi": "\u30D3", - "/kata:bo": "\u30DC", - "/kata:bu": "\u30D6", - "/kata:da": "\u30C0", - "/kata:de": "\u30C7", - "/kata:di": "\u30C2", - "/kata:digraphkoto": "\u30FF", - "/kata:do": "\u30C9", - "/kata:doublehyphenkana": "\u30A0", - "/kata:du": "\u30C5", - "/kata:e": "\u30A8", - "/kata:esmall": "\u30A7", - "/kata:ga": "\u30AC", - "/kata:ge": "\u30B2", - "/kata:gi": "\u30AE", - "/kata:go": "\u30B4", - "/kata:gu": "\u30B0", - "/kata:ha": "\u30CF", - "/kata:he": "\u30D8", - "/kata:hi": "\u30D2", - "/kata:ho": "\u30DB", - "/kata:hu": "\u30D5", - "/kata:i": "\u30A4", - "/kata:ismall": "\u30A3", - "/kata:iteration": "\u30FD", - "/kata:ka": "\u30AB", - "/kata:kasmall": "\u30F5", - "/kata:ke": "\u30B1", - "/kata:kesmall": "\u30F6", - "/kata:ki": "\u30AD", - "/kata:ko": "\u30B3", - "/kata:ku": "\u30AF", - "/kata:ma": "\u30DE", - "/kata:me": "\u30E1", - "/kata:mi": "\u30DF", - 
"/kata:middledot": "\u30FB", - "/kata:mo": "\u30E2", - "/kata:mu": "\u30E0", - "/kata:n": "\u30F3", - "/kata:na": "\u30CA", - "/kata:ne": "\u30CD", - "/kata:ni": "\u30CB", - "/kata:no": "\u30CE", - "/kata:nu": "\u30CC", - "/kata:o": "\u30AA", - "/kata:osmall": "\u30A9", - "/kata:pa": "\u30D1", - "/kata:pe": "\u30DA", - "/kata:pi": "\u30D4", - "/kata:po": "\u30DD", - "/kata:prolongedkana": "\u30FC", - "/kata:pu": "\u30D7", - "/kata:ra": "\u30E9", - "/kata:re": "\u30EC", - "/kata:ri": "\u30EA", - "/kata:ro": "\u30ED", - "/kata:ru": "\u30EB", - "/kata:sa": "\u30B5", - "/kata:se": "\u30BB", - "/kata:si": "\u30B7", - "/kata:so": "\u30BD", - "/kata:su": "\u30B9", - "/kata:ta": "\u30BF", - "/kata:te": "\u30C6", - "/kata:ti": "\u30C1", - "/kata:to": "\u30C8", - "/kata:tu": "\u30C4", - "/kata:tusmall": "\u30C3", - "/kata:u": "\u30A6", - "/kata:usmall": "\u30A5", - "/kata:va": "\u30F7", - "/kata:ve": "\u30F9", - "/kata:vi": "\u30F8", - "/kata:vo": "\u30FA", - "/kata:voicediteration": "\u30FE", - "/kata:vu": "\u30F4", - "/kata:wa": "\u30EF", - "/kata:wasmall": "\u30EE", - "/kata:we": "\u30F1", - "/kata:wi": "\u30F0", - "/kata:wo": "\u30F2", - "/kata:ya": "\u30E4", - "/kata:yasmall": "\u30E3", - "/kata:yo": "\u30E8", - "/kata:yosmall": "\u30E7", - "/kata:yu": "\u30E6", - "/kata:yusmall": "\u30E5", - "/kata:za": "\u30B6", - "/kata:ze": "\u30BC", - "/kata:zi": "\u30B8", - "/kata:zo": "\u30BE", - "/kata:zu": "\u30BA", - "/katahiraprolongmarkhalfwidth": "\uFF70", - "/katailcyr": "\u049B", - "/kaverticalstrokecyr": "\u049D", - "/kaverticalstrokecyrillic": "\u049D", - "/kavykainvertedlow": "\u2E45", - "/kavykalow": "\u2E47", - "/kavykawithdotlow": "\u2E48", - "/kavykawithkavykaaboveinvertedlow": "\u2E46", - "/kbfullwidth": "\u3385", - "/kbopomofo": "\u310E", - "/kcalfullwidth": "\u3389", - "/kcalsquare": "\u3389", - "/kcaron": "\u01E9", - "/kcedilla": "\u0137", - "/kcircle": "\u24DA", - "/kcommaaccent": "\u0137", - "/kdescender": "\u2C6A", - "/kdiagonalstroke": "\uA743", - 
"/kdotbelow": "\u1E33", - "/kecirclekatakana": "\u32D8", - "/keesusquare": "\u331C", - "/keharmenian": "\u0584", - "/keheh": "\u06A9", - "/keheh.fina": "\uFB8F", - "/keheh.init": "\uFB90", - "/keheh.isol": "\uFB8E", - "/keheh.medi": "\uFB91", - "/kehehDotAbove": "\u0762", - "/kehehThreeDotsAbove": "\u0763", - "/kehehThreeDotsUpBelow": "\u0764", - "/kehehthreedotsbelow": "\u063C", - "/kehehtwodotsabove": "\u063B", - "/kehiragana": "\u3051", - "/kekatakana": "\u30B1", - "/kekatakanahalfwidth": "\uFF79", - "/kelvin": "\u212A", - "/kenarmenian": "\u056F", - "/keretconsonant": "\uA9BD", - "/kesmallkatakana": "\u30F6", - "/key": "\u1F511", - "/keyboardAndMouse": "\u1F5A6", - "/keycapTen": "\u1F51F", - "/kgfullwidth": "\u338F", - "/kgreenlandic": "\u0138", - "/khabengali": "\u0996", - "/khacyrillic": "\u0445", - "/khadeva": "\u0916", - "/khagujarati": "\u0A96", - "/khagurmukhi": "\u0A16", - "/khah": "\u062E", - "/khah.fina": "\uFEA6", - "/khah.init": "\uFEA7", - "/khah.init_alefmaksura.fina": "\uFD03", - "/khah.init_hah.fina": "\uFC1A", - "/khah.init_jeem.fina": "\uFC19", - "/khah.init_jeem.medi": "\uFCAB", - "/khah.init_meem.fina": "\uFC1B", - "/khah.init_meem.medi": "\uFCAC", - "/khah.init_yeh.fina": "\uFD04", - "/khah.isol": "\uFEA5", - "/khah.medi": "\uFEA8", - "/khah.medi_alefmaksura.fina": "\uFD1F", - "/khah.medi_yeh.fina": "\uFD20", - "/khaharabic": "\u062E", - "/khahfinalarabic": "\uFEA6", - "/khahinitialarabic": "\uFEA7", - "/khahmedialarabic": "\uFEA8", - "/kheicoptic": "\u03E7", - "/khhadeva": "\u0959", - "/khhagurmukhi": "\u0A59", - "/khieukhacirclekorean": "\u3278", - "/khieukhaparenkorean": "\u3218", - "/khieukhcirclekorean": "\u326A", - "/khieukhkorean": "\u314B", - "/khieukhparenkorean": "\u320A", - "/khokhaithai": "\u0E02", - "/khokhonthai": "\u0E05", - "/khokhuatthai": "\u0E03", - "/khokhwaithai": "\u0E04", - "/khomutthai": "\u0E5B", - "/khook": "\u0199", - "/khorakhangthai": "\u0E06", - "/khzfullwidth": "\u3391", - "/khzsquare": "\u3391", - 
"/kicirclekatakana": "\u32D6", - "/kihiragana": "\u304D", - "/kikatakana": "\u30AD", - "/kikatakanahalfwidth": "\uFF77", - "/kimono": "\u1F458", - "/kindergartenideographiccircled": "\u3245", - "/kingblack": "\u265A", - "/kingwhite": "\u2654", - "/kip": "\u20AD", - "/kiroguramusquare": "\u3315", - "/kiromeetorusquare": "\u3316", - "/kirosquare": "\u3314", - "/kirowattosquare": "\u3317", - "/kiss": "\u1F48F", - "/kissMark": "\u1F48B", - "/kissingCatFaceWithClosedEyes": "\u1F63D", - "/kissingFace": "\u1F617", - "/kissingFaceWithClosedEyes": "\u1F61A", - "/kissingFaceWithSmilingEyes": "\u1F619", - "/kiyeokacirclekorean": "\u326E", - "/kiyeokaparenkorean": "\u320E", - "/kiyeokcirclekorean": "\u3260", - "/kiyeokkorean": "\u3131", - "/kiyeokparenkorean": "\u3200", - "/kiyeoksioskorean": "\u3133", - "/kjecyr": "\u045C", - "/kjecyrillic": "\u045C", - "/kkfullwidth": "\u33CD", - "/klfullwidth": "\u3398", - "/klinebelow": "\u1E35", - "/klsquare": "\u3398", - "/km2fullwidth": "\u33A2", - "/km3fullwidth": "\u33A6", - "/kmcapitalfullwidth": "\u33CE", - "/kmcubedsquare": "\u33A6", - "/kmfullwidth": "\u339E", - "/kmonospace": "\uFF4B", - "/kmsquaredsquare": "\u33A2", - "/knda:a": "\u0C85", - "/knda:aa": "\u0C86", - "/knda:aasign": "\u0CBE", - "/knda:ai": "\u0C90", - "/knda:ailength": "\u0CD6", - "/knda:aisign": "\u0CC8", - "/knda:anusvara": "\u0C82", - "/knda:au": "\u0C94", - "/knda:ausign": "\u0CCC", - "/knda:avagraha": "\u0CBD", - "/knda:ba": "\u0CAC", - "/knda:bha": "\u0CAD", - "/knda:ca": "\u0C9A", - "/knda:cha": "\u0C9B", - "/knda:da": "\u0CA6", - "/knda:dda": "\u0CA1", - "/knda:ddha": "\u0CA2", - "/knda:dha": "\u0CA7", - "/knda:e": "\u0C8E", - "/knda:ee": "\u0C8F", - "/knda:eesign": "\u0CC7", - "/knda:eight": "\u0CEE", - "/knda:esign": "\u0CC6", - "/knda:fa": "\u0CDE", - "/knda:five": "\u0CEB", - "/knda:four": "\u0CEA", - "/knda:ga": "\u0C97", - "/knda:gha": "\u0C98", - "/knda:ha": "\u0CB9", - "/knda:i": "\u0C87", - "/knda:ii": "\u0C88", - "/knda:iisign": "\u0CC0", - 
"/knda:isign": "\u0CBF", - "/knda:ja": "\u0C9C", - "/knda:jha": "\u0C9D", - "/knda:jihvamuliya": "\u0CF1", - "/knda:ka": "\u0C95", - "/knda:kha": "\u0C96", - "/knda:la": "\u0CB2", - "/knda:length": "\u0CD5", - "/knda:lla": "\u0CB3", - "/knda:llvocal": "\u0CE1", - "/knda:llvocalsign": "\u0CE3", - "/knda:lvocal": "\u0C8C", - "/knda:lvocalsign": "\u0CE2", - "/knda:ma": "\u0CAE", - "/knda:na": "\u0CA8", - "/knda:nga": "\u0C99", - "/knda:nine": "\u0CEF", - "/knda:nna": "\u0CA3", - "/knda:nukta": "\u0CBC", - "/knda:nya": "\u0C9E", - "/knda:o": "\u0C92", - "/knda:one": "\u0CE7", - "/knda:oo": "\u0C93", - "/knda:oosign": "\u0CCB", - "/knda:osign": "\u0CCA", - "/knda:pa": "\u0CAA", - "/knda:pha": "\u0CAB", - "/knda:ra": "\u0CB0", - "/knda:rra": "\u0CB1", - "/knda:rrvocal": "\u0CE0", - "/knda:rrvocalsign": "\u0CC4", - "/knda:rvocal": "\u0C8B", - "/knda:rvocalsign": "\u0CC3", - "/knda:sa": "\u0CB8", - "/knda:seven": "\u0CED", - "/knda:sha": "\u0CB6", - "/knda:signcandrabindu": "\u0C81", - "/knda:signspacingcandrabindu": "\u0C80", - "/knda:six": "\u0CEC", - "/knda:ssa": "\u0CB7", - "/knda:ta": "\u0CA4", - "/knda:tha": "\u0CA5", - "/knda:three": "\u0CE9", - "/knda:tta": "\u0C9F", - "/knda:ttha": "\u0CA0", - "/knda:two": "\u0CE8", - "/knda:u": "\u0C89", - "/knda:upadhmaniya": "\u0CF2", - "/knda:usign": "\u0CC1", - "/knda:uu": "\u0C8A", - "/knda:uusign": "\u0CC2", - "/knda:va": "\u0CB5", - "/knda:virama": "\u0CCD", - "/knda:visarga": "\u0C83", - "/knda:ya": "\u0CAF", - "/knda:zero": "\u0CE6", - "/knightblack": "\u265E", - "/knightwhite": "\u2658", - "/ko:a": "\u314F", - "/ko:ae": "\u3150", - "/ko:aejungseong": "\u1162", - "/ko:aeujungseong": "\u11A3", - "/ko:ajungseong": "\u1161", - "/ko:aojungseong": "\u1176", - "/ko:araea": "\u318D", - "/ko:araeae": "\u318E", - "/ko:araeaeojungseong": "\u119F", - "/ko:araeaijungseong": "\u11A1", - "/ko:araeajungseong": "\u119E", - "/ko:araeaujungseong": "\u11A0", - "/ko:aujungseong": "\u1177", - "/ko:ceongchieumchieuchchoseong": "\u1155", - 
"/ko:ceongchieumcieucchoseong": "\u1150", - "/ko:ceongchieumsioschoseong": "\u113E", - "/ko:ceongchieumssangcieucchoseong": "\u1151", - "/ko:ceongchieumssangsioschoseong": "\u113F", - "/ko:chieuch": "\u314A", - "/ko:chieuchchoseong": "\u110E", - "/ko:chieuchhieuhchoseong": "\u1153", - "/ko:chieuchjongseong": "\u11BE", - "/ko:chieuchkhieukhchoseong": "\u1152", - "/ko:chitueumchieuchchoseong": "\u1154", - "/ko:chitueumcieucchoseong": "\u114E", - "/ko:chitueumsioschoseong": "\u113C", - "/ko:chitueumssangcieucchoseong": "\u114F", - "/ko:chitueumssangsioschoseong": "\u113D", - "/ko:cieuc": "\u3148", - "/ko:cieucchoseong": "\u110C", - "/ko:cieucieungchoseong": "\u114D", - "/ko:cieucjongseong": "\u11BD", - "/ko:e": "\u3154", - "/ko:ejungseong": "\u1166", - "/ko:eo": "\u3153", - "/ko:eo_eujungseong": "\u117C", - "/ko:eojungseong": "\u1165", - "/ko:eoojungseong": "\u117A", - "/ko:eoujungseong": "\u117B", - "/ko:eu": "\u3161", - "/ko:eueujungseong": "\u1196", - "/ko:eujungseong": "\u1173", - "/ko:euujungseong": "\u1195", - "/ko:filler": "\u3164", - "/ko:fillerchoseong": "\u115F", - "/ko:fillerjungseong": "\u1160", - "/ko:hieuh": "\u314E", - "/ko:hieuhchoseong": "\u1112", - "/ko:hieuhjongseong": "\u11C2", - "/ko:hieuhmieumjongseong": "\u11F7", - "/ko:hieuhnieunjongseong": "\u11F5", - "/ko:hieuhpieupjongseong": "\u11F8", - "/ko:hieuhrieuljongseong": "\u11F6", - "/ko:i": "\u3163", - "/ko:iajungseong": "\u1198", - "/ko:iaraeajungseong": "\u119D", - "/ko:ieujungseong": "\u119C", - "/ko:ieung": "\u3147", - "/ko:ieungchieuchchoseong": "\u1149", - "/ko:ieungchoseong": "\u110B", - "/ko:ieungcieucchoseong": "\u1148", - "/ko:ieungjongseong": "\u11BC", - "/ko:ieungkhieukhjongseong": "\u11EF", - "/ko:ieungkiyeokchoseong": "\u1141", - "/ko:ieungkiyeokjongseong": "\u11EC", - "/ko:ieungmieumchoseong": "\u1143", - "/ko:ieungpansioschoseong": "\u1146", - "/ko:ieungphieuphchoseong": "\u114B", - "/ko:ieungpieupchoseong": "\u1144", - "/ko:ieungsioschoseong": "\u1145", - 
"/ko:ieungssangkiyeokjongseong": "\u11ED", - "/ko:ieungthieuthchoseong": "\u114A", - "/ko:ieungtikeutchoseong": "\u1142", - "/ko:ijungseong": "\u1175", - "/ko:iojungseong": "\u119A", - "/ko:iujungseong": "\u119B", - "/ko:iyajungseong": "\u1199", - "/ko:kapyeounmieum": "\u3171", - "/ko:kapyeounmieumchoseong": "\u111D", - "/ko:kapyeounmieumjongseong": "\u11E2", - "/ko:kapyeounphieuph": "\u3184", - "/ko:kapyeounphieuphchoseong": "\u1157", - "/ko:kapyeounphieuphjongseong": "\u11F4", - "/ko:kapyeounpieup": "\u3178", - "/ko:kapyeounpieupchoseong": "\u112B", - "/ko:kapyeounpieupjongseong": "\u11E6", - "/ko:kapyeounrieulchoseong": "\u111B", - "/ko:kapyeounssangpieup": "\u3179", - "/ko:kapyeounssangpieupchoseong": "\u112C", - "/ko:khieukh": "\u314B", - "/ko:khieukhchoseong": "\u110F", - "/ko:khieukhjongseong": "\u11BF", - "/ko:kiyeok": "\u3131", - "/ko:kiyeokchieuchjongseong": "\u11FC", - "/ko:kiyeokchoseong": "\u1100", - "/ko:kiyeokhieuhjongseong": "\u11FE", - "/ko:kiyeokjongseong": "\u11A8", - "/ko:kiyeokkhieukhjongseong": "\u11FD", - "/ko:kiyeoknieunjongseong": "\u11FA", - "/ko:kiyeokpieupjongseong": "\u11FB", - "/ko:kiyeokrieuljongseong": "\u11C3", - "/ko:kiyeoksios": "\u3133", - "/ko:kiyeoksiosjongseong": "\u11AA", - "/ko:kiyeoksioskiyeokjongseong": "\u11C4", - "/ko:kiyeoktikeutchoseong": "\u115A", - "/ko:mieum": "\u3141", - "/ko:mieumchieuchjongseong": "\u11E0", - "/ko:mieumchoseong": "\u1106", - "/ko:mieumhieuhjongseong": "\u11E1", - "/ko:mieumjongseong": "\u11B7", - "/ko:mieumkiyeokjongseong": "\u11DA", - "/ko:mieumpansios": "\u3170", - "/ko:mieumpansiosjongseong": "\u11DF", - "/ko:mieumpieup": "\u316E", - "/ko:mieumpieupchoseong": "\u111C", - "/ko:mieumpieupjongseong": "\u11DC", - "/ko:mieumrieuljongseong": "\u11DB", - "/ko:mieumsios": "\u316F", - "/ko:mieumsiosjongseong": "\u11DD", - "/ko:mieumssangsiosjongseong": "\u11DE", - "/ko:nieun": "\u3134", - "/ko:nieunchoseong": "\u1102", - "/ko:nieuncieuc": "\u3135", - "/ko:nieuncieucchoseong": "\u115C", - 
"/ko:nieuncieucjongseong": "\u11AC", - "/ko:nieunhieuh": "\u3136", - "/ko:nieunhieuhchoseong": "\u115D", - "/ko:nieunhieuhjongseong": "\u11AD", - "/ko:nieunjongseong": "\u11AB", - "/ko:nieunkiyeokchoseong": "\u1113", - "/ko:nieunkiyeokjongseong": "\u11C5", - "/ko:nieunpansios": "\u3168", - "/ko:nieunpansiosjongseong": "\u11C8", - "/ko:nieunpieupchoseong": "\u1116", - "/ko:nieunsios": "\u3167", - "/ko:nieunsioschoseong": "\u115B", - "/ko:nieunsiosjongseong": "\u11C7", - "/ko:nieunthieuthjongseong": "\u11C9", - "/ko:nieuntikeut": "\u3166", - "/ko:nieuntikeutchoseong": "\u1115", - "/ko:nieuntikeutjongseong": "\u11C6", - "/ko:o": "\u3157", - "/ko:o_ejungseong": "\u1180", - "/ko:o_eojungseong": "\u117F", - "/ko:oe": "\u315A", - "/ko:oejungseong": "\u116C", - "/ko:ojungseong": "\u1169", - "/ko:oojungseong": "\u1182", - "/ko:oujungseong": "\u1183", - "/ko:oyaejungseong": "\u11A7", - "/ko:oyajungseong": "\u11A6", - "/ko:oyejungseong": "\u1181", - "/ko:pansios": "\u317F", - "/ko:pansioschoseong": "\u1140", - "/ko:pansiosjongseong": "\u11EB", - "/ko:phieuph": "\u314D", - "/ko:phieuphchoseong": "\u1111", - "/ko:phieuphjongseong": "\u11C1", - "/ko:phieuphpieupchoseong": "\u1156", - "/ko:phieuphpieupjongseong": "\u11F3", - "/ko:pieup": "\u3142", - "/ko:pieupchieuchchoseong": "\u1128", - "/ko:pieupchoseong": "\u1107", - "/ko:pieupcieuc": "\u3176", - "/ko:pieupcieucchoseong": "\u1127", - "/ko:pieuphieuhjongseong": "\u11E5", - "/ko:pieupjongseong": "\u11B8", - "/ko:pieupkiyeok": "\u3172", - "/ko:pieupkiyeokchoseong": "\u111E", - "/ko:pieupnieunchoseong": "\u111F", - "/ko:pieupphieuphchoseong": "\u112A", - "/ko:pieupphieuphjongseong": "\u11E4", - "/ko:pieuprieuljongseong": "\u11E3", - "/ko:pieupsios": "\u3144", - "/ko:pieupsioschoseong": "\u1121", - "/ko:pieupsioscieucchoseong": "\u1126", - "/ko:pieupsiosjongseong": "\u11B9", - "/ko:pieupsioskiyeok": "\u3174", - "/ko:pieupsioskiyeokchoseong": "\u1122", - "/ko:pieupsiospieupchoseong": "\u1124", - "/ko:pieupsiostikeut": "\u3175", - 
"/ko:pieupsiostikeutchoseong": "\u1123", - "/ko:pieupssangsioschoseong": "\u1125", - "/ko:pieupthieuth": "\u3177", - "/ko:pieupthieuthchoseong": "\u1129", - "/ko:pieuptikeut": "\u3173", - "/ko:pieuptikeutchoseong": "\u1120", - "/ko:rieul": "\u3139", - "/ko:rieulchoseong": "\u1105", - "/ko:rieulhieuh": "\u3140", - "/ko:rieulhieuhchoseong": "\u111A", - "/ko:rieulhieuhjongseong": "\u11B6", - "/ko:rieuljongseong": "\u11AF", - "/ko:rieulkapyeounpieupjongseong": "\u11D5", - "/ko:rieulkhieukhjongseong": "\u11D8", - "/ko:rieulkiyeok": "\u313A", - "/ko:rieulkiyeokjongseong": "\u11B0", - "/ko:rieulkiyeoksios": "\u3169", - "/ko:rieulkiyeoksiosjongseong": "\u11CC", - "/ko:rieulmieum": "\u313B", - "/ko:rieulmieumjongseong": "\u11B1", - "/ko:rieulmieumkiyeokjongseong": "\u11D1", - "/ko:rieulmieumsiosjongseong": "\u11D2", - "/ko:rieulnieunchoseong": "\u1118", - "/ko:rieulnieunjongseong": "\u11CD", - "/ko:rieulpansios": "\u316C", - "/ko:rieulpansiosjongseong": "\u11D7", - "/ko:rieulphieuph": "\u313F", - "/ko:rieulphieuphjongseong": "\u11B5", - "/ko:rieulpieup": "\u313C", - "/ko:rieulpieuphieuhjongseong": "\u11D4", - "/ko:rieulpieupjongseong": "\u11B2", - "/ko:rieulpieupsios": "\u316B", - "/ko:rieulpieupsiosjongseong": "\u11D3", - "/ko:rieulsios": "\u313D", - "/ko:rieulsiosjongseong": "\u11B3", - "/ko:rieulssangsiosjongseong": "\u11D6", - "/ko:rieulthieuth": "\u313E", - "/ko:rieulthieuthjongseong": "\u11B4", - "/ko:rieultikeut": "\u316A", - "/ko:rieultikeuthieuhjongseong": "\u11CF", - "/ko:rieultikeutjongseong": "\u11CE", - "/ko:rieulyeorinhieuh": "\u316D", - "/ko:rieulyeorinhieuhjongseong": "\u11D9", - "/ko:sios": "\u3145", - "/ko:sioschieuchchoseong": "\u1137", - "/ko:sioschoseong": "\u1109", - "/ko:sioscieuc": "\u317E", - "/ko:sioscieucchoseong": "\u1136", - "/ko:sioshieuhchoseong": "\u113B", - "/ko:siosieungchoseong": "\u1135", - "/ko:siosjongseong": "\u11BA", - "/ko:sioskhieukhchoseong": "\u1138", - "/ko:sioskiyeok": "\u317A", - "/ko:sioskiyeokchoseong": "\u112D", - 
"/ko:sioskiyeokjongseong": "\u11E7", - "/ko:siosmieumchoseong": "\u1131", - "/ko:siosnieun": "\u317B", - "/ko:siosnieunchoseong": "\u112E", - "/ko:siosphieuphchoseong": "\u113A", - "/ko:siospieup": "\u317D", - "/ko:siospieupchoseong": "\u1132", - "/ko:siospieupjongseong": "\u11EA", - "/ko:siospieupkiyeokchoseong": "\u1133", - "/ko:siosrieulchoseong": "\u1130", - "/ko:siosrieuljongseong": "\u11E9", - "/ko:siosssangsioschoseong": "\u1134", - "/ko:siosthieuthchoseong": "\u1139", - "/ko:siostikeut": "\u317C", - "/ko:siostikeutchoseong": "\u112F", - "/ko:siostikeutjongseong": "\u11E8", - "/ko:ssangaraeajungseong": "\u11A2", - "/ko:ssangcieuc": "\u3149", - "/ko:ssangcieucchoseong": "\u110D", - "/ko:ssanghieuh": "\u3185", - "/ko:ssanghieuhchoseong": "\u1158", - "/ko:ssangieung": "\u3180", - "/ko:ssangieungchoseong": "\u1147", - "/ko:ssangieungjongseong": "\u11EE", - "/ko:ssangkiyeok": "\u3132", - "/ko:ssangkiyeokchoseong": "\u1101", - "/ko:ssangkiyeokjongseong": "\u11A9", - "/ko:ssangnieun": "\u3165", - "/ko:ssangnieunchoseong": "\u1114", - "/ko:ssangnieunjongseong": "\u11FF", - "/ko:ssangpieup": "\u3143", - "/ko:ssangpieupchoseong": "\u1108", - "/ko:ssangrieulchoseong": "\u1119", - "/ko:ssangrieuljongseong": "\u11D0", - "/ko:ssangsios": "\u3146", - "/ko:ssangsioschoseong": "\u110A", - "/ko:ssangsiosjongseong": "\u11BB", - "/ko:ssangtikeut": "\u3138", - "/ko:ssangtikeutchoseong": "\u1104", - "/ko:thieuth": "\u314C", - "/ko:thieuthchoseong": "\u1110", - "/ko:thieuthjongseong": "\u11C0", - "/ko:tikeut": "\u3137", - "/ko:tikeutchoseong": "\u1103", - "/ko:tikeutjongseong": "\u11AE", - "/ko:tikeutkiyeokchoseong": "\u1117", - "/ko:tikeutkiyeokjongseong": "\u11CA", - "/ko:tikeutrieulchoseong": "\u115E", - "/ko:tikeutrieuljongseong": "\u11CB", - "/ko:u": "\u315C", - "/ko:uaejungseong": "\u118A", - "/ko:uajungseong": "\u1189", - "/ko:ueo_eujungseong": "\u118B", - "/ko:ujungseong": "\u116E", - "/ko:uujungseong": "\u118D", - "/ko:uyejungseong": "\u118C", - "/ko:wa": "\u3158", - 
"/ko:wae": "\u3159", - "/ko:waejungseong": "\u116B", - "/ko:wajungseong": "\u116A", - "/ko:we": "\u315E", - "/ko:wejungseong": "\u1170", - "/ko:weo": "\u315D", - "/ko:weojungseong": "\u116F", - "/ko:wi": "\u315F", - "/ko:wijungseong": "\u1171", - "/ko:ya": "\u3151", - "/ko:yae": "\u3152", - "/ko:yaejungseong": "\u1164", - "/ko:yajungseong": "\u1163", - "/ko:yaojungseong": "\u1178", - "/ko:yaujungseong": "\u11A4", - "/ko:yayojungseong": "\u1179", - "/ko:ye": "\u3156", - "/ko:yejungseong": "\u1168", - "/ko:yeo": "\u3155", - "/ko:yeojungseong": "\u1167", - "/ko:yeoojungseong": "\u117D", - "/ko:yeorinhieuh": "\u3186", - "/ko:yeorinhieuhchoseong": "\u1159", - "/ko:yeorinhieuhjongseong": "\u11F9", - "/ko:yeoujungseong": "\u117E", - "/ko:yeoyajungseong": "\u11A5", - "/ko:yesieung": "\u3181", - "/ko:yesieungchoseong": "\u114C", - "/ko:yesieungjongseong": "\u11F0", - "/ko:yesieungpansios": "\u3183", - "/ko:yesieungpansiosjongseong": "\u11F2", - "/ko:yesieungsios": "\u3182", - "/ko:yesieungsiosjongseong": "\u11F1", - "/ko:yi": "\u3162", - "/ko:yijungseong": "\u1174", - "/ko:yiujungseong": "\u1197", - "/ko:yo": "\u315B", - "/ko:yoi": "\u3189", - "/ko:yoijungseong": "\u1188", - "/ko:yojungseong": "\u116D", - "/ko:yoojungseong": "\u1187", - "/ko:yoya": "\u3187", - "/ko:yoyae": "\u3188", - "/ko:yoyaejungseong": "\u1185", - "/ko:yoyajungseong": "\u1184", - "/ko:yoyeojungseong": "\u1186", - "/ko:yu": "\u3160", - "/ko:yuajungseong": "\u118E", - "/ko:yuejungseong": "\u1190", - "/ko:yueojungseong": "\u118F", - "/ko:yui": "\u318C", - "/ko:yuijungseong": "\u1194", - "/ko:yujungseong": "\u1172", - "/ko:yuujungseong": "\u1193", - "/ko:yuye": "\u318B", - "/ko:yuyejungseong": "\u1192", - "/ko:yuyeo": "\u318A", - "/ko:yuyeojungseong": "\u1191", - "/koala": "\u1F428", - "/kobliquestroke": "\uA7A3", - "/kocirclekatakana": "\u32D9", - "/kohiragana": "\u3053", - "/kohmfullwidth": "\u33C0", - "/kohmsquare": "\u33C0", - "/kokaithai": "\u0E01", - "/kokatakana": "\u30B3", - "/kokatakanahalfwidth": 
"\uFF7A", - "/kooposquare": "\u331E", - "/koppa": "\u03DF", - "/koppaarchaic": "\u03D9", - "/koppacyr": "\u0481", - "/koppacyrillic": "\u0481", - "/koreanstandardsymbol": "\u327F", - "/koroniscmb": "\u0343", - "/korunasquare": "\u331D", - "/kotoideographiccircled": "\u3247", - "/kpafullwidth": "\u33AA", - "/kparen": "\u24A6", - "/kparenthesized": "\u24A6", - "/kpasquare": "\u33AA", - "/kra": "\u0138", - "/ksicyr": "\u046F", - "/ksicyrillic": "\u046F", - "/kstroke": "\uA741", - "/kstrokediagonalstroke": "\uA745", - "/ktfullwidth": "\u33CF", - "/ktsquare": "\u33CF", - "/kturned": "\u029E", - "/kucirclekatakana": "\u32D7", - "/kuhiragana": "\u304F", - "/kukatakana": "\u30AF", - "/kukatakanahalfwidth": "\uFF78", - "/kuroonesquare": "\u331B", - "/kuruzeirosquare": "\u331A", - "/kvfullwidth": "\u33B8", - "/kvsquare": "\u33B8", - "/kwfullwidth": "\u33BE", - "/kwsquare": "\u33BE", - "/kyuriisquare": "\u3312", - "/l": "\u006C", - "/l.inferior": "\u2097", - "/label": "\u1F3F7", - "/labengali": "\u09B2", - "/laborideographiccircled": "\u3298", - "/laborideographicparen": "\u3238", - "/lacute": "\u013A", - "/ladeva": "\u0932", - "/ladyBeetle": "\u1F41E", - "/lagujarati": "\u0AB2", - "/lagurmukhi": "\u0A32", - "/lakkhangyaothai": "\u0E45", - "/lam": "\u0644", - "/lam.fina": "\uFEDE", - "/lam.init": "\uFEDF", - "/lam.init_alef.fina": "\uFEFB", - "/lam.init_alef.medi_hamzaabove.fina": "\uFEF7", - "/lam.init_alef.medi_hamzabelow.fina": "\uFEF9", - "/lam.init_alef.medi_maddaabove.fina": "\uFEF5", - "/lam.init_alefmaksura.fina": "\uFC43", - "/lam.init_hah.fina": "\uFC40", - "/lam.init_hah.medi": "\uFCCA", - "/lam.init_hah.medi_meem.medi": "\uFDB5", - "/lam.init_heh.medi": "\uFCCD", - "/lam.init_jeem.fina": "\uFC3F", - "/lam.init_jeem.medi": "\uFCC9", - "/lam.init_jeem.medi_jeem.medi": "\uFD83", - "/lam.init_jeem.medi_meem.medi": "\uFDBA", - "/lam.init_khah.fina": "\uFC41", - "/lam.init_khah.medi": "\uFCCB", - "/lam.init_khah.medi_meem.medi": "\uFD86", - "/lam.init_meem.fina": 
"\uFC42", - "/lam.init_meem.medi": "\uFCCC", - "/lam.init_meem.medi_hah.medi": "\uFD88", - "/lam.init_yeh.fina": "\uFC44", - "/lam.isol": "\uFEDD", - "/lam.medi": "\uFEE0", - "/lam.medi_alef.fina": "\uFEFC", - "/lam.medi_alef.medi_hamzaabove.fina": "\uFEF8", - "/lam.medi_alef.medi_hamzabelow.fina": "\uFEFA", - "/lam.medi_alef.medi_maddaabove.fina": "\uFEF6", - "/lam.medi_alefmaksura.fina": "\uFC86", - "/lam.medi_hah.medi_alefmaksura.fina": "\uFD82", - "/lam.medi_hah.medi_meem.fina": "\uFD80", - "/lam.medi_hah.medi_yeh.fina": "\uFD81", - "/lam.medi_jeem.medi_jeem.fina": "\uFD84", - "/lam.medi_jeem.medi_meem.fina": "\uFDBC", - "/lam.medi_jeem.medi_yeh.fina": "\uFDAC", - "/lam.medi_khah.medi_meem.fina": "\uFD85", - "/lam.medi_meem.fina": "\uFC85", - "/lam.medi_meem.medi": "\uFCED", - "/lam.medi_meem.medi_hah.fina": "\uFD87", - "/lam.medi_meem.medi_yeh.fina": "\uFDAD", - "/lam.medi_yeh.fina": "\uFC87", - "/lamBar": "\u076A", - "/lamVabove": "\u06B5", - "/lamalefabove": "\u06D9", - "/lamaleffinalarabic": "\uFEFC", - "/lamalefhamzaabovefinalarabic": "\uFEF8", - "/lamalefhamzaaboveisolatedarabic": "\uFEF7", - "/lamalefhamzabelowfinalarabic": "\uFEFA", - "/lamalefhamzabelowisolatedarabic": "\uFEF9", - "/lamalefisolatedarabic": "\uFEFB", - "/lamalefmaddaabovefinalarabic": "\uFEF6", - "/lamalefmaddaaboveisolatedarabic": "\uFEF5", - "/lamarabic": "\u0644", - "/lambda": "\u03BB", - "/lambdastroke": "\u019B", - "/lamdotabove": "\u06B6", - "/lamed": "\u05DC", - "/lamed:hb": "\u05DC", - "/lameddagesh": "\uFB3C", - "/lameddageshhebrew": "\uFB3C", - "/lamedhebrew": "\u05DC", - "/lamedholam": "\u05DC", - "/lamedholamdagesh": "\u05DC", - "/lamedholamdageshhebrew": "\u05DC", - "/lamedholamhebrew": "\u05DC", - "/lamedwide:hb": "\uFB25", - "/lamedwithdagesh:hb": "\uFB3C", - "/lamfinalarabic": "\uFEDE", - "/lamhahinitialarabic": "\uFCCA", - "/laminitialarabic": "\uFEDF", - "/lamjeeminitialarabic": "\uFCC9", - "/lamkhahinitialarabic": "\uFCCB", - "/lamlamhehisolatedarabic": "\uFDF2", - 
"/lammedialarabic": "\uFEE0", - "/lammeemhahinitialarabic": "\uFD88", - "/lammeeminitialarabic": "\uFCCC", - "/lammeemjeeminitialarabic": "\uFEDF", - "/lammeemkhahinitialarabic": "\uFEDF", - "/lamthreedotsabove": "\u06B7", - "/lamthreedotsbelow": "\u06B8", - "/lanemergeleftblack": "\u26D8", - "/lanemergeleftwhite": "\u26D9", - "/largeBlueCircle": "\u1F535", - "/largeBlueDiamond": "\u1F537", - "/largeOrangeDiamond": "\u1F536", - "/largeRedCircle": "\u1F534", - "/largecircle": "\u25EF", - "/largetackdown": "\u27D9", - "/largetackup": "\u27D8", - "/lari": "\u20BE", - "/lastQuarterMoon": "\u1F317", - "/lastQuarterMoonFace": "\u1F31C", - "/lastquartermoon": "\u263E", - "/layar": "\uA982", - "/lazysinverted": "\u223E", - "/lbar": "\u019A", - "/lbbar": "\u2114", - "/lbelt": "\u026C", - "/lbeltretroflex": "\uA78E", - "/lbopomofo": "\u310C", - "/lbroken": "\uA747", - "/lcaron": "\u013E", - "/lcedilla": "\u013C", - "/lcircle": "\u24DB", - "/lcircumflexbelow": "\u1E3D", - "/lcommaaccent": "\u013C", - "/lcurl": "\u0234", - "/ldblbar": "\u2C61", - "/ldot": "\u0140", - "/ldotaccent": "\u0140", - "/ldotbelow": "\u1E37", - "/ldotbelowmacron": "\u1E39", - "/leafFlutteringInWind": "\u1F343", - "/ledger": "\u1F4D2", - "/left-pointingMagnifyingGlass": "\u1F50D", - "/leftAngerBubble": "\u1F5EE", - "/leftFiveEighthsBlock": "\u258B", - "/leftHalfBlock": "\u258C", - "/leftHandTelephoneReceiver": "\u1F57B", - "/leftLuggage": "\u1F6C5", - "/leftOneEighthBlock": "\u258F", - "/leftOneQuarterBlock": "\u258E", - "/leftSevenEighthsBlock": "\u2589", - "/leftSpeechBubble": "\u1F5E8", - "/leftThoughtBubble": "\u1F5EC", - "/leftThreeEighthsBlock": "\u258D", - "/leftThreeQuartersBlock": "\u258A", - "/leftWritingHand": "\u1F58E", - "/leftangleabovecmb": "\u031A", - "/leftarrowoverrightarrow": "\u21C6", - "/leftdnheavyrightuplight": "\u2545", - "/leftharpoonoverrightharpoon": "\u21CB", - "/leftheavyrightdnlight": "\u252D", - "/leftheavyrightuplight": "\u2535", - "/leftheavyrightvertlight": "\u253D", - 
"/leftideographiccircled": "\u32A7",
"/leftlightrightdnheavy": "\u2532",
"/leftlightrightupheavy": "\u253A",
"/leftlightrightvertheavy": "\u254A",
"/lefttackbelowcmb": "\u0318",
"/lefttorightembed": "\u202A",
"/lefttorightisolate": "\u2066",
"/lefttorightmark": "\u200E",
"/lefttorightoverride": "\u202D",
"/leftupheavyrightdnlight": "\u2543",
"/lemon": "\u1F34B",
"/lenis": "\u1FBF",
"/lenisacute": "\u1FCE",
"/lenisgrave": "\u1FCD",
"/lenistilde": "\u1FCF",
"/leo": "\u264C",
"/leopard": "\u1F406",
"/less": "\u003C",
"/lessbutnotequal": "\u2268",
"/lessbutnotequivalent": "\u22E6",
"/lessdot": "\u22D6",
"/lessequal": "\u2264",
"/lessequalorgreater": "\u22DA",
"/lessmonospace": "\uFF1C",
"/lessorequivalent": "\u2272",
"/lessorgreater": "\u2276",
"/lessoverequal": "\u2266",
"/lesssmall": "\uFE64",
"/levelSlider": "\u1F39A",
"/lezh": "\u026E",
"/lfblock": "\u258C",
"/lhacyr": "\u0515",
"/lhookretroflex": "\u026D",
"/libra": "\u264E",
"/ligaturealeflamed:hb": "\uFB4F",
"/ligatureoemod": "\uA7F9",
"/lightCheckMark": "\u1F5F8",
"/lightRail": "\u1F688",
"/lightShade": "\u2591",
"/lightarcdnleft": "\u256E",
"/lightarcdnright": "\u256D",
"/lightarcupleft": "\u256F",
"/lightarcupright": "\u2570",
"/lightdbldashhorz": "\u254C",
"/lightdbldashvert": "\u254E",
"/lightdiagcross": "\u2573",
"/lightdiagupleftdnright": "\u2572",
"/lightdiaguprightdnleft": "\u2571",
"/lightdn": "\u2577",
"/lightdnhorz": "\u252C",
"/lightdnleft": "\u2510",
"/lightdnright": "\u250C",
"/lighthorz": "\u2500",
"/lightleft": "\u2574",
"/lightleftheavyright": "\u257C",
"/lightning": "\u2607",
"/lightningMood": "\u1F5F2",
"/lightningMoodBubble": "\u1F5F1",
"/lightquaddashhorz": "\u2508",
"/lightquaddashvert": "\u250A",
"/lightright": "\u2576",
"/lighttrpldashhorz": "\u2504",
"/lighttrpldashvert": "\u2506",
"/lightup": "\u2575",
"/lightupheavydn": "\u257D",
"/lightuphorz": "\u2534",
"/lightupleft": "\u2518",
"/lightupright": "\u2514",
"/lightvert": "\u2502",
"/lightverthorz": "\u253C",
"/lightvertleft": "\u2524",
"/lightvertright": "\u251C",
"/lineextensionhorizontal": "\u23AF",
"/lineextensionvertical": "\u23D0",
"/linemiddledotvertical": "\u237F",
"/lineseparator": "\u2028",
"/lingsapada": "\uA9C8",
"/link": "\u1F517",
"/linkedPaperclips": "\u1F587",
"/lips": "\u1F5E2",
"/lipstick": "\u1F484",
"/lira": "\u20A4",
"/litre": "\u2113",
"/livretournois": "\u20B6",
"/liwnarmenian": "\u056C",
"/lj": "\u01C9",
"/ljecyr": "\u0459",
"/ljecyrillic": "\u0459",
"/ljekomicyr": "\u0509",
"/ll": "\uF6C0",
"/lladeva": "\u0933",
"/llagujarati": "\u0AB3",
"/llinebelow": "\u1E3B",
"/llladeva": "\u0934",
"/llvocalicbengali": "\u09E1",
"/llvocalicdeva": "\u0961",
"/llvocalicvowelsignbengali": "\u09E3",
"/llvocalicvowelsigndeva": "\u0963",
"/llwelsh": "\u1EFB",
"/lmacrondot": "\u1E39",
"/lmfullwidth": "\u33D0",
"/lmiddletilde": "\u026B",
"/lmonospace": "\uFF4C",
"/lmsquare": "\u33D0",
"/lnfullwidth": "\u33D1",
"/lochulathai": "\u0E2C",
"/lock": "\u1F512",
"/lockInkPen": "\u1F50F",
"/logfullwidth": "\u33D2",
"/logicaland": "\u2227",
"/logicalandarray": "\u22C0",
"/logicalnot": "\u00AC",
"/logicalnotreversed": "\u2310",
"/logicalor": "\u2228",
"/logicalorarray": "\u22C1",
"/lolingthai": "\u0E25",
"/lollipop": "\u1F36D",
"/longdivision": "\u27CC",
"/longovershortmetrical": "\u23D2",
"/longovertwoshortsmetrical": "\u23D4",
"/longs": "\u017F",
"/longs_t": "\uFB05",
"/longsdot": "\u1E9B",
"/longswithdiagonalstroke": "\u1E9C",
"/longswithhighstroke": "\u1E9D",
"/longtackleft": "\u27DE",
"/longtackright": "\u27DD",
"/losslesssquare": "\u1F1A9",
"/loudlyCryingFace": "\u1F62D",
"/loveHotel": "\u1F3E9",
"/loveLetter": "\u1F48C",
"/lowBrightness": "\u1F505",
"/lowasterisk": "\u204E",
"/lowerFiveEighthsBlock": "\u2585",
"/lowerHalfBlock": "\u2584",
"/lowerLeftBallpointPen": "\u1F58A",
"/lowerLeftCrayon": "\u1F58D",
"/lowerLeftFountainPen": "\u1F58B",
"/lowerLeftPaintbrush": "\u1F58C",
"/lowerLeftPencil": "\u1F589",
"/lowerOneEighthBlock": "\u2581",
"/lowerOneQuarterBlock": "\u2582",
"/lowerRightShadowedWhiteCircle": "\u1F53E",
"/lowerSevenEighthsBlock": "\u2587",
"/lowerThreeEighthsBlock": "\u2583",
"/lowerThreeQuartersBlock": "\u2586",
"/lowercornerdotright": "\u27D3",
"/lowerhalfcircle": "\u25E1",
"/lowerhalfcircleinversewhite": "\u25DB",
"/lowerquadrantcirculararcleft": "\u25DF",
"/lowerquadrantcirculararcright": "\u25DE",
"/lowertriangleleft": "\u25FA",
"/lowertriangleleftblack": "\u25E3",
"/lowertriangleright": "\u25FF",
"/lowertrianglerightblack": "\u25E2",
"/lowideographiccircled": "\u32A6",
"/lowlinecenterline": "\uFE4E",
"/lowlinecmb": "\u0332",
"/lowlinedashed": "\uFE4D",
"/lownumeralsign": "\u0375",
"/lowquotedblprime": "\u301F",
"/lozenge": "\u25CA",
"/lozengedividedbyrulehorizontal": "\u27E0",
"/lozengesquare": "\u2311",
"/lparen": "\u24A7",
"/lparenthesized": "\u24A7",
"/lretroflex": "\u026D",
"/ls": "\u02AA",
"/lslash": "\u0142",
"/lsquare": "\u2113",
"/lstroke": "\uA749",
"/lsuperior": "\uF6EE",
"/lsupmod": "\u02E1",
"/lt:Alpha": "\u2C6D",
"/lt:Alphaturned": "\u2C70",
"/lt:Beta": "\uA7B4",
"/lt:Chi": "\uA7B3",
"/lt:Gamma": "\u0194",
"/lt:Iota": "\u0196",
"/lt:Omega": "\uA7B6",
"/lt:Upsilon": "\u01B1",
"/lt:beta": "\uA7B5",
"/lt:delta": "\u1E9F",
"/lt:omega": "\uA7B7",
"/ltshade": "\u2591",
"/lttr:bet": "\u2136",
"/lttr:dalet": "\u2138",
"/lttr:gimel": "\u2137",
"/lttr:gscript": "\u210A",
"/lturned": "\uA781",
"/ltypeopencircuit": "\u2390",
"/luhurpada": "\uA9C5",
"/lum": "\uA772",
"/lungsipada": "\uA9C9",
"/luthai": "\u0E26",
"/lvocalicbengali": "\u098C",
"/lvocalicdeva": "\u090C",
"/lvocalicvowelsignbengali": "\u09E2",
"/lvocalicvowelsigndeva": "\u0962",
"/lxfullwidth": "\u33D3",
"/lxsquare": "\u33D3",
"/lzed": "\u02AB",
"/m": "\u006D",
"/m.inferior": "\u2098",
"/m2fullwidth": "\u33A1",
"/m3fullwidth": "\u33A5",
"/mabengali": "\u09AE",
"/macirclekatakana": "\u32EE",
"/macron": "\u00AF",
"/macronbelowcmb": "\u0331",
"/macroncmb": "\u0304",
"/macronlowmod": "\u02CD",
"/macronmod": "\u02C9",
"/macronmonospace": "\uFFE3",
"/macute": "\u1E3F",
"/madda": "\u0653",
"/maddaabove": "\u06E4",
"/madeva": "\u092E",
"/madyapada": "\uA9C4",
"/mafullwidth": "\u3383",
"/magujarati": "\u0AAE",
"/magurmukhi": "\u0A2E",
"/mahapakhhebrew": "\u05A4",
"/mahapakhlefthebrew": "\u05A4",
"/mahhasquare": "\u3345",
"/mahiragana": "\u307E",
"/mahpach:hb": "\u05A4",
"/maichattawalowleftthai": "\uF895",
"/maichattawalowrightthai": "\uF894",
"/maichattawathai": "\u0E4B",
"/maichattawaupperleftthai": "\uF893",
"/maieklowleftthai": "\uF88C",
"/maieklowrightthai": "\uF88B",
"/maiekthai": "\u0E48",
"/maiekupperleftthai": "\uF88A",
"/maihanakatleftthai": "\uF884",
"/maihanakatthai": "\u0E31",
"/maikurosquare": "\u3343",
"/mairusquare": "\u3344",
"/maitaikhuleftthai": "\uF889",
"/maitaikhuthai": "\u0E47",
"/maitholowleftthai": "\uF88F",
"/maitholowrightthai": "\uF88E",
"/maithothai": "\u0E49",
"/maithoupperleftthai": "\uF88D",
"/maitrilowleftthai": "\uF892",
"/maitrilowrightthai": "\uF891",
"/maitrithai": "\u0E4A",
"/maitriupperleftthai": "\uF890",
"/maiyamokthai": "\u0E46",
"/makatakana": "\u30DE",
"/makatakanahalfwidth": "\uFF8F",
"/male": "\u2642",
"/malefemale": "\u26A5",
"/maleideographiccircled": "\u329A",
"/malestroke": "\u26A6",
"/malestrokemalefemale": "\u26A7",
"/man": "\u1F468",
"/manAndWomanHoldingHands": "\u1F46B",
"/manDancing": "\u1F57A",
"/manGuaPiMao": "\u1F472",
"/manInBusinessSuitLevitating": "\u1F574",
"/manTurban": "\u1F473",
"/manat": "\u20BC",
"/mansShoe": "\u1F45E",
"/mansyonsquare": "\u3347",
"/mantelpieceClock": "\u1F570",
"/mapleLeaf": "\u1F341",
"/maplighthouse": "\u26EF",
"/maqaf:hb": "\u05BE",
"/maqafhebrew": "\u05BE",
"/marchtelegraph": "\u32C2",
"/mark": "\u061C",
"/markerdottedraisedinterpolation": "\u2E07",
"/markerdottedtransposition": "\u2E08",
"/markerraisedinterpolation": "\u2E06",
"/marknoonghunna": "\u0658",
"/marksChapter": "\u1F545",
"/marriage": "\u26AD",
"/mars": "\u2642",
"/marukusquare": "\u3346",
"/masoraCircle:hb": "\u05AF",
"/masoracirclehebrew": "\u05AF",
"/masquare": "\u3383",
"/masumark": "\u303C",
"/math:bowtie": "\u22C8",
"/math:cuberoot": "\u221B",
"/math:fourthroot": "\u221C",
"/maximize": "\u1F5D6",
"/maytelegraph": "\u32C4",
"/mbfullwidth": "\u3386",
"/mbopomofo": "\u3107",
"/mbsmallfullwidth": "\u33D4",
"/mbsquare": "\u33D4",
"/mcircle": "\u24DC",
"/mcubedsquare": "\u33A5",
"/mdot": "\u1E41",
"/mdotaccent": "\u1E41",
"/mdotbelow": "\u1E43",
"/measuredangle": "\u2221",
"/measuredby": "\u225E",
"/meatOnBone": "\u1F356",
"/mecirclekatakana": "\u32F1",
"/medicineideographiccircled": "\u32A9",
"/mediumShade": "\u2592",
"/mediumcircleblack": "\u26AB",
"/mediumcirclewhite": "\u26AA",
"/mediummathematicalspace": "\u205F",
"/mediumsmallcirclewhite": "\u26AC",
"/meem": "\u0645",
"/meem.fina": "\uFEE2",
"/meem.init": "\uFEE3",
"/meem.init_alefmaksura.fina": "\uFC49",
"/meem.init_hah.fina": "\uFC46",
"/meem.init_hah.medi": "\uFCCF",
"/meem.init_hah.medi_jeem.medi": "\uFD89",
"/meem.init_hah.medi_meem.medi": "\uFD8A",
"/meem.init_jeem.fina": "\uFC45",
"/meem.init_jeem.medi": "\uFCCE",
"/meem.init_jeem.medi_hah.medi": "\uFD8C",
"/meem.init_jeem.medi_khah.medi": "\uFD92",
"/meem.init_jeem.medi_meem.medi": "\uFD8D",
"/meem.init_khah.fina": "\uFC47",
"/meem.init_khah.medi": "\uFCD0",
"/meem.init_khah.medi_jeem.medi": "\uFD8E",
"/meem.init_khah.medi_meem.medi": "\uFD8F",
"/meem.init_meem.fina": "\uFC48",
"/meem.init_meem.medi": "\uFCD1",
"/meem.init_yeh.fina": "\uFC4A",
"/meem.isol": "\uFEE1",
"/meem.medi": "\uFEE4",
"/meem.medi_alef.fina": "\uFC88",
"/meem.medi_hah.medi_yeh.fina": "\uFD8B",
"/meem.medi_jeem.medi_yeh.fina": "\uFDC0",
"/meem.medi_khah.medi_yeh.fina": "\uFDB9",
"/meem.medi_meem.fina": "\uFC89",
"/meem.medi_meem.medi_yeh.fina": "\uFDB1",
"/meemDotAbove": "\u0765",
"/meemDotBelow": "\u0766",
"/meemabove": "\u06E2",
"/meemabove.init": "\u06D8",
"/meemarabic": "\u0645",
"/meembelow": "\u06ED",
"/meemfinalarabic": "\uFEE2",
"/meeminitialarabic": "\uFEE3",
"/meemmedialarabic": "\uFEE4",
"/meemmeeminitialarabic": "\uFCD1",
"/meemmeemisolatedarabic": "\uFC48",
"/meetorusquare": "\u334D",
"/megasquare": "\u334B",
"/megatonsquare": "\u334C",
"/mehiragana": "\u3081",
"/meizierasquare": "\u337E",
"/mekatakana": "\u30E1",
"/mekatakanahalfwidth": "\uFF92",
"/melon": "\u1F348",
"/mem": "\u05DE",
"/mem:hb": "\u05DE",
"/memdagesh": "\uFB3E",
"/memdageshhebrew": "\uFB3E",
"/memhebrew": "\u05DE",
"/memo": "\u1F4DD",
"/memwithdagesh:hb": "\uFB3E",
"/menarmenian": "\u0574",
"/menorahNineBranches": "\u1F54E",
"/menpostSindhi": "\u06FE",
"/mens": "\u1F6B9",
"/mepigraphicinverted": "\uA7FD",
"/mercha:hb": "\u05A5",
"/merchaKefulah:hb": "\u05A6",
"/mercury": "\u263F",
"/merkhahebrew": "\u05A5",
"/merkhakefulahebrew": "\u05A6",
"/merkhakefulalefthebrew": "\u05A6",
"/merkhalefthebrew": "\u05A5",
"/metalideographiccircled": "\u328E",
"/metalideographicparen": "\u322E",
"/meteg:hb": "\u05BD",
"/metro": "\u1F687",
"/mgfullwidth": "\u338E",
"/mhook": "\u0271",
"/mhzfullwidth": "\u3392",
"/mhzsquare": "\u3392",
"/micirclekatakana": "\u32EF",
"/microphone": "\u1F3A4",
"/microscope": "\u1F52C",
"/middledotkatakanahalfwidth": "\uFF65",
"/middot": "\u00B7",
"/mieumacirclekorean": "\u3272",
"/mieumaparenkorean": "\u3212",
"/mieumcirclekorean": "\u3264",
"/mieumkorean": "\u3141",
"/mieumpansioskorean": "\u3170",
"/mieumparenkorean": "\u3204",
"/mieumpieupkorean": "\u316E",
"/mieumsioskorean": "\u316F",
"/mihiragana": "\u307F",
"/mikatakana": "\u30DF",
"/mikatakanahalfwidth": "\uFF90",
"/mikuronsquare": "\u3348",
"/milfullwidth": "\u33D5",
"/militaryMedal": "\u1F396",
"/milkyWay": "\u1F30C",
"/mill": "\u20A5",
"/millionscmbcyr": "\u0489",
"/millisecond": "\u2034",
"/millisecondreversed": "\u2037",
"/minibus": "\u1F690",
"/minidisc": "\u1F4BD",
"/minimize": "\u1F5D5",
"/minus": "\u2212",
"/minus.inferior": "\u208B",
"/minus.superior": "\u207B",
"/minusbelowcmb": "\u0320",
"/minuscircle": "\u2296",
"/minusmod": "\u02D7",
"/minusplus": "\u2213",
"/minussignmod": "\u02D7",
"/minustilde": "\u2242",
"/minute": "\u2032",
"/minutereversed": "\u2035",
"/miribaarusquare": "\u334A",
"/mirisquare": "\u3349",
"/misc:baby": "\u1F476",
"/misc:bell": "\u1F514",
"/misc:dash": "\u1F4A8",
"/misc:decimalseparator": "\u2396",
"/misc:diamondblack": "\u2666",
"/misc:diamondwhite": "\u2662",
"/misc:ear": "\u1F442",
"/misc:om": "\u1F549",
"/misc:ring": "\u1F48D",
"/misra": "\u060F",
"/mlfullwidth": "\u3396",
"/mlonglegturned": "\u0270",
"/mlsquare": "\u3396",
"/mlym:a": "\u0D05",
"/mlym:aa": "\u0D06",
"/mlym:aasign": "\u0D3E",
"/mlym:ai": "\u0D10",
"/mlym:aisign": "\u0D48",
"/mlym:anusvarasign": "\u0D02",
"/mlym:archaicii": "\u0D5F",
"/mlym:au": "\u0D14",
"/mlym:aulength": "\u0D57",
"/mlym:ausign": "\u0D4C",
"/mlym:avagrahasign": "\u0D3D",
"/mlym:ba": "\u0D2C",
"/mlym:bha": "\u0D2D",
"/mlym:ca": "\u0D1A",
"/mlym:candrabindusign": "\u0D01",
"/mlym:cha": "\u0D1B",
"/mlym:circularviramasign": "\u0D3C",
"/mlym:combininganusvaraabovesign": "\u0D00",
"/mlym:da": "\u0D26",
"/mlym:date": "\u0D79",
"/mlym:dda": "\u0D21",
"/mlym:ddha": "\u0D22",
"/mlym:dha": "\u0D27",
"/mlym:dotreph": "\u0D4E",
"/mlym:e": "\u0D0E",
"/mlym:ee": "\u0D0F",
"/mlym:eesign": "\u0D47",
"/mlym:eight": "\u0D6E",
"/mlym:esign": "\u0D46",
"/mlym:five": "\u0D6B",
"/mlym:four": "\u0D6A",
"/mlym:ga": "\u0D17",
"/mlym:gha": "\u0D18",
"/mlym:ha": "\u0D39",
"/mlym:i": "\u0D07",
"/mlym:ii": "\u0D08",
"/mlym:iisign": "\u0D40",
"/mlym:isign": "\u0D3F",
"/mlym:ja": "\u0D1C",
"/mlym:jha": "\u0D1D",
"/mlym:ka": "\u0D15",
"/mlym:kchillu": "\u0D7F",
"/mlym:kha": "\u0D16",
"/mlym:la": "\u0D32",
"/mlym:lchillu": "\u0D7D",
"/mlym:lla": "\u0D33",
"/mlym:llchillu": "\u0D7E",
"/mlym:llla": "\u0D34",
"/mlym:lllchillu": "\u0D56",
"/mlym:llvocal": "\u0D61",
"/mlym:llvocalsign": "\u0D63",
"/mlym:lvocal": "\u0D0C",
"/mlym:lvocalsign": "\u0D62",
"/mlym:ma": "\u0D2E",
"/mlym:mchillu": "\u0D54",
"/mlym:na": "\u0D28",
"/mlym:nchillu": "\u0D7B",
"/mlym:nga": "\u0D19",
"/mlym:nine": "\u0D6F",
"/mlym:nna": "\u0D23",
"/mlym:nnchillu": "\u0D7A",
"/mlym:nnna": "\u0D29",
"/mlym:nya": "\u0D1E",
"/mlym:o": "\u0D12",
"/mlym:one": "\u0D67",
"/mlym:oneeighth": "\u0D77",
"/mlym:onefifth": "\u0D5E",
"/mlym:onefortieth": "\u0D59",
"/mlym:onehalf": "\u0D74",
"/mlym:onehundred": "\u0D71",
"/mlym:oneone-hundred-and-sixtieth": "\u0D58",
"/mlym:onequarter": "\u0D73",
"/mlym:onesixteenth": "\u0D76",
"/mlym:onetenth": "\u0D5C",
"/mlym:onethousand": "\u0D72",
"/mlym:onetwentieth": "\u0D5B",
"/mlym:oo": "\u0D13",
"/mlym:oosign": "\u0D4B",
"/mlym:osign": "\u0D4A",
"/mlym:pa": "\u0D2A",
"/mlym:parasign": "\u0D4F",
"/mlym:pha": "\u0D2B",
"/mlym:ra": "\u0D30",
"/mlym:rra": "\u0D31",
"/mlym:rrchillu": "\u0D7C",
"/mlym:rrvocal": "\u0D60",
"/mlym:rrvocalsign": "\u0D44",
"/mlym:rvocal": "\u0D0B",
"/mlym:rvocalsign": "\u0D43",
"/mlym:sa": "\u0D38",
"/mlym:seven": "\u0D6D",
"/mlym:sha": "\u0D36",
"/mlym:six": "\u0D6C",
"/mlym:ssa": "\u0D37",
"/mlym:ta": "\u0D24",
"/mlym:ten": "\u0D70",
"/mlym:tha": "\u0D25",
"/mlym:three": "\u0D69",
"/mlym:threeeightieths": "\u0D5A",
"/mlym:threequarters": "\u0D75",
"/mlym:threesixteenths": "\u0D78",
"/mlym:threetwentieths": "\u0D5D",
"/mlym:tta": "\u0D1F",
"/mlym:ttha": "\u0D20",
"/mlym:ttta": "\u0D3A",
"/mlym:two": "\u0D68",
"/mlym:u": "\u0D09",
"/mlym:usign": "\u0D41",
"/mlym:uu": "\u0D0A",
"/mlym:uusign": "\u0D42",
"/mlym:va": "\u0D35",
"/mlym:verticalbarviramasign": "\u0D3B",
"/mlym:viramasign": "\u0D4D",
"/mlym:visargasign": "\u0D03",
"/mlym:ya": "\u0D2F",
"/mlym:ychillu": "\u0D55",
"/mlym:zero": "\u0D66",
"/mm2fullwidth": "\u339F",
"/mm3fullwidth": "\u33A3",
"/mmcubedsquare": "\u33A3",
"/mmfullwidth": "\u339C",
"/mmonospace": "\uFF4D",
"/mmsquaredsquare": "\u339F",
"/mobilePhone": "\u1F4F1",
"/mobilePhoneOff": "\u1F4F4",
"/mobilePhoneRightwardsArrowAtLeft": "\u1F4F2",
"/mocirclekatakana": "\u32F2",
"/models": "\u22A7",
"/mohiragana": "\u3082",
"/mohmfullwidth": "\u33C1",
"/mohmsquare": "\u33C1",
"/mokatakana": "\u30E2",
"/mokatakanahalfwidth": "\uFF93",
"/molfullwidth": "\u33D6",
"/molsquare": "\u33D6",
"/momathai": "\u0E21",
"/moneyBag": "\u1F4B0",
"/moneyWings": "\u1F4B8",
"/mong:a": "\u1820",
"/mong:aaligali": "\u1887",
"/mong:ahaligali": "\u1897",
"/mong:ang": "\u1829",
"/mong:angsibe": "\u1862",
"/mong:angtodo": "\u184A",
"/mong:anusvaraonealigali": "\u1880",
"/mong:ba": "\u182A",
"/mong:baludaaligali": "\u1885",
"/mong:baludaaligalithree": "\u1886",
"/mong:batodo": "\u184B",
"/mong:bhamanchualigali": "\u18A8",
"/mong:birga": "\u1800",
"/mong:caaligali": "\u188B",
"/mong:camanchualigali": "\u189C",
"/mong:cha": "\u1834",
"/mong:chasibe": "\u1871",
"/mong:chatodo": "\u1852",
"/mong:chi": "\u1842",
"/mong:colon": "\u1804",
"/mong:comma": "\u1802",
"/mong:commamanchu": "\u1808",
"/mong:cyamanchualigali": "\u18A3",
"/mong:da": "\u1833",
"/mong:daaligali": "\u1891",
"/mong:dagalgaaligali": "\u18A9",
"/mong:damarualigali": "\u1882",
"/mong:dasibe": "\u1869",
"/mong:datodo": "\u1851",
"/mong:ddaaligali": "\u188E",
"/mong:ddhamanchualigali": "\u189F",
"/mong:dhamanchualigali": "\u18A1",
"/mong:dzatodo": "\u185C",
"/mong:e": "\u1821",
"/mong:ee": "\u1827",
"/mong:eight": "\u1818",
"/mong:ellipsis": "\u1801",
"/mong:esibe": "\u185D",
"/mong:etodo": "\u1844",
"/mong:fa": "\u1839",
"/mong:famanchu": "\u1876",
"/mong:fasibe": "\u186B",
"/mong:five": "\u1815",
"/mong:four": "\u1814",
"/mong:fourdots": "\u1805",
"/mong:freevariationselectorone": "\u180B",
"/mong:freevariationselectorthree": "\u180D",
"/mong:freevariationselectortwo": "\u180C",
"/mong:ga": "\u182D",
"/mong:gaasibe": "\u186C",
"/mong:gaatodo": "\u1858",
"/mong:gasibe": "\u1864",
"/mong:gatodo": "\u184E",
"/mong:ghamanchualigali": "\u189A",
"/mong:haa": "\u183E",
"/mong:haasibe": "\u186D",
"/mong:haatodo": "\u1859",
"/mong:hasibe": "\u1865",
"/mong:i": "\u1822",
"/mong:ialigali": "\u1888",
"/mong:imanchu": "\u1873",
"/mong:isibe": "\u185E",
"/mong:itodo": "\u1845",
"/mong:iysibe": "\u185F",
"/mong:ja": "\u1835",
"/mong:jasibe": "\u186A",
"/mong:jatodo": "\u1853",
"/mong:jhamanchualigali": "\u189D",
"/mong:jiatodo": "\u185A",
"/mong:ka": "\u183A",
"/mong:kaaligali": "\u1889",
"/mong:kamanchu": "\u1874",
"/mong:kasibe": "\u1863",
"/mong:katodo": "\u1857",
"/mong:kha": "\u183B",
"/mong:la": "\u182F",
"/mong:lha": "\u1840",
"/mong:lhamanchualigali": "\u18AA",
"/mong:longvowelsigntodo": "\u1843",
"/mong:ma": "\u182E",
"/mong:matodo": "\u184F",
"/mong:na": "\u1828",
"/mong:ngaaligali": "\u188A",
"/mong:ngamanchualigali": "\u189B",
"/mong:niatodo": "\u185B",
"/mong:nine": "\u1819",
"/mong:nirugu": "\u180A",
"/mong:nnaaligali": "\u188F",
"/mong:o": "\u1823",
"/mong:oe": "\u1825",
"/mong:oetodo": "\u1848",
"/mong:one": "\u1811",
"/mong:otodo": "\u1846",
"/mong:pa": "\u182B",
"/mong:paaligali": "\u1892",
"/mong:pasibe": "\u1866",
"/mong:patodo": "\u184C",
"/mong:period": "\u1803",
"/mong:periodmanchu": "\u1809",
"/mong:phaaligali": "\u1893",
"/mong:qa": "\u182C",
"/mong:qatodo": "\u184D",
"/mong:ra": "\u1837",
"/mong:raasibe": "\u1870",
"/mong:ramanchu": "\u1875",
"/mong:sa": "\u1830",
"/mong:seven": "\u1817",
"/mong:sha": "\u1831",
"/mong:shasibe": "\u1867",
"/mong:six": "\u1816",
"/mong:softhyphentodo": "\u1806",
"/mong:ssaaligali": "\u1894",
"/mong:ssamanchualigali": "\u18A2",
"/mong:syllableboundarymarkersibe": "\u1807",
"/mong:ta": "\u1832",
"/mong:taaligali": "\u1890",
"/mong:tamanchualigali": "\u18A0",
"/mong:tasibe": "\u1868",
"/mong:tatodo": "\u1850",
"/mong:tatodoaligali": "\u1898",
"/mong:three": "\u1813",
"/mong:tsa": "\u183C",
"/mong:tsasibe": "\u186E",
"/mong:tsatodo": "\u1854",
"/mong:ttaaligali": "\u188C",
"/mong:ttamanchualigali": "\u189E",
"/mong:tthaaligali": "\u188D",
"/mong:two": "\u1812",
"/mong:u": "\u1824",
"/mong:ualigalihalf": "\u18A6",
"/mong:ubadamaaligali": "\u1883",
"/mong:ubadamaaligaliinverted": "\u1884",
"/mong:ue": "\u1826",
"/mong:uesibe": "\u1860",
"/mong:uetodo": "\u1849",
"/mong:usibe": "\u1861",
"/mong:utodo": "\u1847",
"/mong:visargaonealigali": "\u1881",
"/mong:vowelseparator": "\u180E",
"/mong:wa": "\u1838",
"/mong:watodo": "\u1856",
"/mong:ya": "\u1836",
"/mong:yaaligalihalf": "\u18A7",
"/mong:yatodo": "\u1855",
"/mong:za": "\u183D",
"/mong:zaaligali": "\u1896",
"/mong:zamanchualigali": "\u18A5",
"/mong:zasibe": "\u186F",
"/mong:zero": "\u1810",
"/mong:zhaaligali": "\u1895",
"/mong:zhamanchu": "\u1877",
"/mong:zhamanchualigali": "\u18A4",
"/mong:zhasibe": "\u1872",
"/mong:zhatodoaligali": "\u1899",
"/mong:zhi": "\u1841",
"/mong:zra": "\u183F",
"/monkey": "\u1F412",
"/monkeyFace": "\u1F435",
"/monogramyang": "\u268A",
"/monogramyin": "\u268B",
"/monorail": "\u1F69D",
"/monostable": "\u238D",
"/moodBubble": "\u1F5F0",
"/moonViewingCeremony": "\u1F391",
"/moonideographiccircled": "\u328A",
"/moonideographicparen": "\u322A",
"/moonlilithblack": "\u26B8",
"/mosque": "\u1F54C",
"/motorBoat": "\u1F6E5",
"/motorScooter": "\u1F6F5",
"/motorway": "\u1F6E3",
"/mountFuji": "\u1F5FB",
"/mountain": "\u26F0",
"/mountainBicyclist": "\u1F6B5",
"/mountainCableway": "\u1F6A0",
"/mountainRailway": "\u1F69E",
"/mouse": "\u1F401",
"/mouseFace": "\u1F42D",
"/mouth": "\u1F444",
"/movers2fullwidth": "\u33A8",
"/moversfullwidth": "\u33A7",
"/moverssquare": "\u33A7",
"/moverssquaredsquare": "\u33A8",
"/movieCamera": "\u1F3A5",
"/moyai": "\u1F5FF",
"/mpafullwidth": "\u33AB",
"/mparen": "\u24A8",
"/mparenthesized": "\u24A8",
"/mpasquare": "\u33AB",
"/msfullwidth": "\u33B3",
"/mssquare": "\u33B3",
"/msuperior": "\uF6EF",
"/mturned": "\u026F",
"/mu": "\u00B5",
"/mu.math": "\u00B5",
"/mu1": "\u00B5",
"/muafullwidth": "\u3382",
"/muasquare": "\u3382",
"/muchgreater": "\u226B",
"/muchless": "\u226A",
"/mucirclekatakana": "\u32F0",
"/muffullwidth": "\u338C",
"/mufsquare": "\u338C",
"/mugfullwidth": "\u338D",
"/mugreek": "\u03BC",
"/mugsquare": "\u338D",
"/muhiragana": "\u3080",
"/mukatakana": "\u30E0",
"/mukatakanahalfwidth": "\uFF91",
"/mulfullwidth": "\u3395",
"/mulsquare": "\u3395",
"/multimap": "\u22B8",
"/multimapleft": "\u27DC",
"/multipleMusicalNotes": "\u1F3B6",
"/multiply": "\u00D7",
"/multiset": "\u228C",
"/multisetmultiplication": "\u228D",
"/multisetunion": "\u228E",
"/mum": "\uA773",
"/mumfullwidth": "\u339B",
"/mumsquare": "\u339B",
"/munach:hb": "\u05A3",
"/munahhebrew": "\u05A3",
"/munahlefthebrew": "\u05A3",
"/musfullwidth": "\u33B2",
"/mushroom": "\u1F344",
"/musicalKeyboard": "\u1F3B9",
"/musicalKeyboardJacks": "\u1F398",
"/musicalNote": "\u1F3B5",
"/musicalScore": "\u1F3BC",
"/musicalnote": "\u266A",
"/musicalnotedbl": "\u266B",
"/musicflat": "\u266D",
"/musicflatsign": "\u266D",
"/musicnatural": "\u266E",
"/musicsharp": "\u266F",
"/musicsharpsign": "\u266F",
"/mussquare": "\u33B2",
"/muvfullwidth": "\u33B6",
"/muvsquare": "\u33B6",
"/muwfullwidth": "\u33BC",
"/muwsquare": "\u33BC",
"/mvfullwidth": "\u33B7",
"/mvmegafullwidth": "\u33B9",
"/mvmegasquare": "\u33B9",
"/mvsquare": "\u33B7",
"/mwfullwidth": "\u33BD",
"/mwmegafullwidth": "\u33BF",
"/mwmegasquare": "\u33BF",
"/mwsquare": "\u33BD",
"/n": "\u006E",
"/n.inferior": "\u2099",
"/n.superior": "\u207F",
"/nabengali": "\u09A8",
"/nabla": "\u2207",
"/nacirclekatakana": "\u32E4",
"/nacute": "\u0144",
"/nadeva": "\u0928",
"/nafullwidth": "\u3381",
"/nagujarati": "\u0AA8",
"/nagurmukhi": "\u0A28",
"/nahiragana": "\u306A",
"/nailPolish": "\u1F485",
"/naira": "\u20A6",
"/nakatakana": "\u30CA",
"/nakatakanahalfwidth": "\uFF85",
"/nameBadge": "\u1F4DB",
"/nameideographiccircled": "\u3294",
"/nameideographicparen": "\u3234",
"/namurda": "\uA99F",
"/nand": "\u22BC",
"/nanosquare": "\u3328",
"/napostrophe": "\u0149",
"/narrownobreakspace": "\u202F",
"/nasquare": "\u3381",
"/nationalPark": "\u1F3DE",
"/nationaldigitshapes": "\u206E",
"/nbopomofo": "\u310B",
"/nbspace": "\u00A0",
"/ncaron": "\u0148",
"/ncedilla": "\u0146",
"/ncircle": "\u24DD",
"/ncircumflexbelow": "\u1E4B",
"/ncommaaccent": "\u0146",
"/ncurl": "\u0235",
"/ndescender": "\uA791",
"/ndot": "\u1E45",
"/ndotaccent": "\u1E45",
"/ndotbelow": "\u1E47",
"/necirclekatakana": "\u32E7",
"/necktie": "\u1F454",
"/negatedturnstiledblverticalbarright": "\u22AF",
"/nehiragana": "\u306D",
"/neirapproximatelynoractuallyequal": "\u2247",
"/neirasersetnorequalup": "\u2289",
"/neirasubsetnorequal": "\u2288",
"/neirgreaternorequal": "\u2271",
"/neirgreaternorequivalent": "\u2275",
"/neirgreaternorless": "\u2279",
"/neirlessnorequal": "\u2270",
"/neirlessnorequivalent": "\u2274",
"/neirlessnorgreater": "\u2278",
"/nekatakana": "\u30CD",
"/nekatakanahalfwidth": "\uFF88",
"/neptune": "\u2646",
"/neuter": "\u26B2",
"/neutralFace": "\u1F610",
"/newMoon": "\u1F311",
"/newMoonFace": "\u1F31A",
"/newsheqel": "\u20AA",
"/newsheqelsign": "\u20AA",
"/newspaper": "\u1F4F0",
"/newsquare": "\u1F195",
"/nextpage": "\u2398",
"/nffullwidth": "\u338B",
"/nfsquare": "\u338B",
"/ng.fina": "\uFBD4",
"/ng.init": "\uFBD5",
"/ng.isol": "\uFBD3",
"/ng.medi": "\uFBD6",
"/ngabengali": "\u0999",
"/ngadeva": "\u0919",
"/ngagujarati": "\u0A99",
"/ngagurmukhi": "\u0A19",
"/ngalelet": "\uA98A",
"/ngaleletraswadi": "\uA98B",
"/ngoeh": "\u06B1",
"/ngoeh.fina": "\uFB9B",
"/ngoeh.init": "\uFB9C",
"/ngoeh.isol": "\uFB9A",
"/ngoeh.medi": "\uFB9D",
"/ngonguthai": "\u0E07",
"/ngrave": "\u01F9",
"/ngsquare": "\u1F196",
"/nhiragana": "\u3093",
"/nhookleft": "\u0272",
"/nhookretroflex": "\u0273",
"/nicirclekatakana": "\u32E5",
"/nieunacirclekorean": "\u326F",
"/nieunaparenkorean": "\u320F",
"/nieuncieuckorean": "\u3135",
"/nieuncirclekorean": "\u3261",
"/nieunhieuhkorean": "\u3136",
"/nieunkorean": "\u3134",
"/nieunpansioskorean": "\u3168",
"/nieunparenkorean": "\u3201",
"/nieunsioskorean": "\u3167",
"/nieuntikeutkorean": "\u3166",
"/nightStars": "\u1F303",
"/nightideographiccircled": "\u32B0",
"/nihiragana": "\u306B",
"/nikatakana": "\u30CB",
"/nikatakanahalfwidth": "\uFF86",
"/nikhahitleftthai": "\uF899",
"/nikhahitthai": "\u0E4D",
"/nine": "\u0039",
"/nine.inferior": "\u2089",
"/nine.roman": "\u2168",
"/nine.romansmall": "\u2178",
"/nine.superior": "\u2079",
"/ninearabic": "\u0669",
"/ninebengali": "\u09EF",
"/ninecircle": "\u2468",
"/ninecircledbl": "\u24FD",
"/ninecircleinversesansserif": "\u2792",
"/ninecomma": "\u1F10A",
"/ninedeva": "\u096F",
"/ninefar": "\u06F9",
"/ninegujarati": "\u0AEF",
"/ninegurmukhi": "\u0A6F",
"/ninehackarabic": "\u0669",
"/ninehangzhou": "\u3029",
"/nineideographiccircled": "\u3288",
"/nineideographicparen": "\u3228",
"/nineinferior": "\u2089",
"/ninemonospace": "\uFF19",
"/nineoldstyle": "\uF739",
"/nineparen": "\u247C",
"/nineparenthesized": "\u247C",
"/nineperiod": "\u2490",
"/ninepersian": "\u06F9",
"/nineroman": "\u2178",
"/ninesuperior": "\u2079",
"/nineteencircle": "\u2472",
"/nineteencircleblack": "\u24F3",
"/nineteenparen": "\u2486",
"/nineteenparenthesized": "\u2486",
"/nineteenperiod": "\u249A",
"/ninethai": "\u0E59",
"/nj": "\u01CC",
"/njecyr": "\u045A",
"/njecyrillic": "\u045A",
"/njekomicyr": "\u050B",
"/nkatakana": "\u30F3",
"/nkatakanahalfwidth": "\uFF9D",
"/nlegrightlong": "\u019E",
"/nlinebelow": "\u1E49",
"/nlongrightleg": "\u019E",
"/nmbr:oneeighth": "\u215B",
"/nmbr:onefifth": "\u2155",
"/nmbr:onetenth": "\u2152",
"/nmfullwidth": "\u339A",
"/nmonospace": "\uFF4E",
"/nmsquare": "\u339A",
"/nnabengali": "\u09A3",
"/nnadeva": "\u0923",
"/nnagujarati": "\u0AA3",
"/nnagurmukhi": "\u0A23",
"/nnnadeva": "\u0929",
"/noBicycles": "\u1F6B3",
"/noEntrySign": "\u1F6AB",
"/noMobilePhones": "\u1F4F5",
"/noOneUnderEighteen": "\u1F51E",
"/noPedestrians": "\u1F6B7",
"/noPiracy": "\u1F572",
"/noSmoking": "\u1F6AD",
"/nobliquestroke": "\uA7A5",
"/nocirclekatakana": "\u32E8",
"/nodeascending": "\u260A",
"/nodedescending": "\u260B",
"/noentry": "\u26D4",
"/nohiragana": "\u306E",
"/nokatakana": "\u30CE",
"/nokatakanahalfwidth": "\uFF89",
"/nominaldigitshapes": "\u206F",
"/nonPotableWater": "\u1F6B1",
"/nonbreakinghyphen": "\u2011",
"/nonbreakingspace": "\u00A0",
"/nonenthai": "\u0E13",
"/nonuthai": "\u0E19",
"/noon": "\u0646",
"/noon.fina": "\uFEE6",
"/noon.init": "\uFEE7",
"/noon.init_alefmaksura.fina": "\uFC4F",
"/noon.init_hah.fina": "\uFC4C",
"/noon.init_hah.medi": "\uFCD3",
"/noon.init_hah.medi_meem.medi": "\uFD95",
"/noon.init_heh.medi": "\uFCD6",
"/noon.init_jeem.fina": "\uFC4B",
"/noon.init_jeem.medi": "\uFCD2",
"/noon.init_jeem.medi_hah.medi": "\uFDB8",
"/noon.init_jeem.medi_meem.medi": "\uFD98",
"/noon.init_khah.fina": "\uFC4D",
"/noon.init_khah.medi": "\uFCD4",
"/noon.init_meem.fina": "\uFC4E",
"/noon.init_meem.medi": "\uFCD5",
"/noon.init_yeh.fina": "\uFC50",
"/noon.isol": "\uFEE5",
"/noon.medi": "\uFEE8",
"/noon.medi_alefmaksura.fina": "\uFC8E",
"/noon.medi_hah.medi_alefmaksura.fina": "\uFD96",
"/noon.medi_hah.medi_yeh.fina": "\uFDB3",
"/noon.medi_heh.medi": "\uFCEF",
"/noon.medi_jeem.medi_alefmaksura.fina": "\uFD99",
"/noon.medi_jeem.medi_hah.fina": "\uFDBD",
"/noon.medi_jeem.medi_meem.fina": "\uFD97",
"/noon.medi_jeem.medi_yeh.fina": "\uFDC7",
"/noon.medi_meem.fina": "\uFC8C",
"/noon.medi_meem.medi": "\uFCEE",
"/noon.medi_meem.medi_alefmaksura.fina": "\uFD9B",
"/noon.medi_meem.medi_yeh.fina": "\uFD9A",
"/noon.medi_noon.fina": "\uFC8D",
"/noon.medi_reh.fina": "\uFC8A",
"/noon.medi_yeh.fina": "\uFC8F",
"/noon.medi_zain.fina": "\uFC8B",
"/noonSmallTah": "\u0768",
"/noonSmallV": "\u0769",
"/noonTwoDotsBelow": "\u0767",
"/noonabove": "\u06E8",
"/noonarabic": "\u0646",
"/noondotbelow": "\u06B9",
"/noonfinalarabic": "\uFEE6",
"/noonghunna": "\u06BA",
"/noonghunna.fina": "\uFB9F",
"/noonghunna.isol": "\uFB9E",
"/noonghunnaarabic": "\u06BA",
"/noonghunnafinalarabic": "\uFB9F",
"/noonhehinitialarabic": "\uFEE7",
"/nooninitialarabic": "\uFEE7",
"/noonjeeminitialarabic": "\uFCD2",
"/noonjeemisolatedarabic": "\uFC4B",
"/noonmedialarabic": "\uFEE8",
"/noonmeeminitialarabic": "\uFCD5",
"/noonmeemisolatedarabic": "\uFC4E",
"/noonnoonfinalarabic": "\uFC8D",
"/noonring": "\u06BC",
"/noonthreedotsabove": "\u06BD",
"/nor": "\u22BD",
"/nordicmark": "\u20BB",
"/normalfacrsemidirectproductleft": "\u22C9",
"/normalfacrsemidirectproductright": "\u22CA",
"/normalsubgroorequalup": "\u22B4",
"/normalsubgroup": "\u22B2",
"/northeastPointingAirplane": "\u1F6EA",
"/nose": "\u1F443",
"/notalmostequal": "\u2249",
"/notasersetup": "\u2285",
"/notasympticallyequal": "\u2244",
"/notcheckmark": "\u237B",
"/notchedLeftSemicircleThreeDots": "\u1F543",
"/notchedRightSemicircleThreeDots": "\u1F544",
"/notcontains": "\u220C",
"/note": "\u1F5C8",
"/notePad": "\u1F5CA",
"/notePage": "\u1F5C9",
"/notebook": "\u1F4D3",
"/notebookDecorativeCover": "\u1F4D4",
"/notelement": "\u2209",
"/notelementof": "\u2209",
"/notequal": "\u2260",
"/notequivalent": "\u226D",
"/notexistential": "\u2204",
"/notgreater": "\u226F",
"/notgreaternorequal": "\u2271",
"/notgreaternorless": "\u2279",
"/notidentical": "\u2262",
"/notless": "\u226E",
"/notlessnorequal": "\u2270",
"/notnormalsubgroorequalup": "\u22EC",
"/notnormalsubgroup": "\u22EA",
"/notparallel": "\u2226",
"/notprecedes": "\u2280",
"/notsignturned": "\u2319",
"/notsquareimageorequal": "\u22E2",
"/notsquareoriginalorequal": "\u22E3",
"/notsubset": "\u2284",
"/notsucceeds": "\u2281",
"/notsuperset": "\u2285",
"/nottilde": "\u2241",
"/nottosquare": "\u3329",
"/nottrue": "\u22AD",
"/novembertelegraph": "\u32CA",
"/nowarmenian": "\u0576",
"/nparen": "\u24A9",
"/nparenthesized": "\u24A9",
"/nretroflex": "\u0273",
"/nsfullwidth": "\u33B1",
"/nssquare": "\u33B1",
"/nsuperior": "\u207F",
"/ntilde": "\u00F1",
"/nu": "\u03BD",
"/nucirclekatakana": "\u32E6",
"/nuhiragana": "\u306C",
"/nukatakana": "\u30CC",
"/nukatakanahalfwidth": "\uFF87",
"/nuktabengali": "\u09BC",
"/nuktadeva": "\u093C",
"/nuktagujarati": "\u0ABC",
"/nuktagurmukhi": "\u0A3C",
"/num": "\uA774",
"/numbermarkabove": "\u0605",
"/numbersign": "\u0023",
"/numbersignmonospace": "\uFF03",
"/numbersignsmall": "\uFE5F",
"/numeralsign": "\u0374",
"/numeralsigngreek": "\u0374",
"/numeralsignlowergreek": "\u0375",
"/numero": "\u2116",
"/nun": "\u05E0",
"/nun:hb": "\u05E0",
"/nunHafukha:hb": "\u05C6",
"/nundagesh": "\uFB40",
"/nundageshhebrew": "\uFB40",
"/nunhebrew": "\u05E0",
"/nunwithdagesh:hb": "\uFB40",
"/nutAndBolt": "\u1F529",
"/nvfullwidth": "\u33B5",
"/nvsquare": "\u33B5",
"/nwfullwidth": "\u33BB",
"/nwsquare": "\u33BB",
"/nyabengali": "\u099E",
"/nyadeva": "\u091E",
"/nyagujarati": "\u0A9E",
"/nyagurmukhi": "\u0A1E",
"/nyamurda": "\uA998",
"/nyeh": "\u0683",
"/nyeh.fina": "\uFB77",
"/nyeh.init": "\uFB78",
"/nyeh.isol": "\uFB76",
"/nyeh.medi": "\uFB79",
"/o": "\u006F",
"/o.inferior": "\u2092",
"/oacute": "\u00F3",
"/oangthai": "\u0E2D",
"/obarcyr": "\u04E9",
"/obardieresiscyr": "\u04EB",
"/obarred": "\u0275",
"/obarredcyrillic": "\u04E9",
"/obarreddieresiscyrillic": "\u04EB",
"/obelosdotted": "\u2E13",
"/obengali": "\u0993",
"/obopomofo": "\u311B",
"/obreve": "\u014F",
"/observereye": "\u23FF",
"/ocandradeva": "\u0911",
"/ocandragujarati": "\u0A91",
"/ocandravowelsigndeva": "\u0949",
"/ocandravowelsigngujarati": "\u0AC9",
"/ocaron": "\u01D2",
"/ocircle": "\u24DE",
"/ocirclekatakana": "\u32D4",
"/ocircumflex": "\u00F4",
"/ocircumflexacute": "\u1ED1",
"/ocircumflexdotbelow": "\u1ED9",
"/ocircumflexgrave": "\u1ED3",
"/ocircumflexhoi": "\u1ED5",
"/ocircumflexhookabove": "\u1ED5",
"/ocircumflextilde": "\u1ED7",
"/ocr:bowtie": "\u2445",
"/ocr:dash": "\u2448",
"/octagonalSign": "\u1F6D1",
"/octobertelegraph": "\u32C9",
"/octopus": "\u1F419",
"/ocyr": "\u043E",
"/ocyrillic": "\u043E",
"/odblacute": "\u0151",
"/odblgrave": "\u020D",
"/oden": "\u1F362",
"/odeva": "\u0913",
"/odieresis": "\u00F6",
"/odieresiscyr": "\u04E7",
"/odieresiscyrillic": "\u04E7",
"/odieresismacron": "\u022B",
"/odot": "\u022F",
"/odotbelow": "\u1ECD",
"/oe": "\u0153",
"/oe.fina": "\uFBDA",
"/oe.isol": "\uFBD9", - "/oekirghiz": "\u06C5", - "/oekirghiz.fina": "\uFBE1", - "/oekirghiz.isol": "\uFBE0", - "/oekorean": "\u315A", - "/officeBuilding": "\u1F3E2", - "/ogonek": "\u02DB", - "/ogonekcmb": "\u0328", - "/ograve": "\u00F2", - "/ogravedbl": "\u020D", - "/ogujarati": "\u0A93", - "/oharmenian": "\u0585", - "/ohiragana": "\u304A", - "/ohm": "\u2126", - "/ohminverted": "\u2127", - "/ohoi": "\u1ECF", - "/ohookabove": "\u1ECF", - "/ohorn": "\u01A1", - "/ohornacute": "\u1EDB", - "/ohorndotbelow": "\u1EE3", - "/ohorngrave": "\u1EDD", - "/ohornhoi": "\u1EDF", - "/ohornhookabove": "\u1EDF", - "/ohorntilde": "\u1EE1", - "/ohungarumlaut": "\u0151", - "/ohuparen": "\u321E", - "/oi": "\u01A3", - "/oilDrum": "\u1F6E2", - "/oinvertedbreve": "\u020F", - "/ojeonparen": "\u321D", - "/okHandSign": "\u1F44C", - "/okatakana": "\u30AA", - "/okatakanahalfwidth": "\uFF75", - "/okorean": "\u3157", - "/oksquare": "\u1F197", - "/oldKey": "\u1F5DD", - "/oldPersonalComputer": "\u1F5B3", - "/olderMan": "\u1F474", - "/olderWoman": "\u1F475", - "/ole:hb": "\u05AB", - "/olehebrew": "\u05AB", - "/oloop": "\uA74D", - "/olowringinside": "\u2C7A", - "/omacron": "\u014D", - "/omacronacute": "\u1E53", - "/omacrongrave": "\u1E51", - "/omdeva": "\u0950", - "/omega": "\u03C9", - "/omega1": "\u03D6", - "/omegaacute": "\u1F7D", - "/omegaacuteiotasub": "\u1FF4", - "/omegaasper": "\u1F61", - "/omegaasperacute": "\u1F65", - "/omegaasperacuteiotasub": "\u1FA5", - "/omegaaspergrave": "\u1F63", - "/omegaaspergraveiotasub": "\u1FA3", - "/omegaasperiotasub": "\u1FA1", - "/omegaaspertilde": "\u1F67", - "/omegaaspertildeiotasub": "\u1FA7", - "/omegaclosed": "\u0277", - "/omegacyr": "\u0461", - "/omegacyrillic": "\u0461", - "/omegafunc": "\u2375", - "/omegagrave": "\u1F7C", - "/omegagraveiotasub": "\u1FF2", - "/omegaiotasub": "\u1FF3", - "/omegalatinclosed": "\u0277", - "/omegalenis": "\u1F60", - "/omegalenisacute": "\u1F64", - "/omegalenisacuteiotasub": "\u1FA4", - "/omegalenisgrave": "\u1F62", - 
"/omegalenisgraveiotasub": "\u1FA2", - "/omegalenisiotasub": "\u1FA0", - "/omegalenistilde": "\u1F66", - "/omegalenistildeiotasub": "\u1FA6", - "/omegaroundcyr": "\u047B", - "/omegaroundcyrillic": "\u047B", - "/omegatilde": "\u1FF6", - "/omegatildeiotasub": "\u1FF7", - "/omegatitlocyr": "\u047D", - "/omegatitlocyrillic": "\u047D", - "/omegatonos": "\u03CE", - "/omegaunderlinefunc": "\u2379", - "/omgujarati": "\u0AD0", - "/omicron": "\u03BF", - "/omicronacute": "\u1F79", - "/omicronasper": "\u1F41", - "/omicronasperacute": "\u1F45", - "/omicronaspergrave": "\u1F43", - "/omicrongrave": "\u1F78", - "/omicronlenis": "\u1F40", - "/omicronlenisacute": "\u1F44", - "/omicronlenisgrave": "\u1F42", - "/omicrontonos": "\u03CC", - "/omonospace": "\uFF4F", - "/onExclamationMarkLeftRightArrowAbove": "\u1F51B", - "/oncomingAutomobile": "\u1F698", - "/oncomingBus": "\u1F68D", - "/oncomingFireEngine": "\u1F6F1", - "/oncomingPoliceCar": "\u1F694", - "/oncomingTaxi": "\u1F696", - "/one": "\u0031", - "/one.inferior": "\u2081", - "/one.roman": "\u2160", - "/one.romansmall": "\u2170", - "/oneButtonMouse": "\u1F5AF", - "/onearabic": "\u0661", - "/onebengali": "\u09E7", - "/onecircle": "\u2460", - "/onecircledbl": "\u24F5", - "/onecircleinversesansserif": "\u278A", - "/onecomma": "\u1F102", - "/onedeva": "\u0967", - "/onedotenleader": "\u2024", - "/onedotovertwodots": "\u2E2B", - "/oneeighth": "\u215B", - "/onefar": "\u06F1", - "/onefitted": "\uF6DC", - "/onefraction": "\u215F", - "/onegujarati": "\u0AE7", - "/onegurmukhi": "\u0A67", - "/onehackarabic": "\u0661", - "/onehalf": "\u00BD", - "/onehangzhou": "\u3021", - "/onehundred.roman": "\u216D", - "/onehundred.romansmall": "\u217D", - "/onehundredthousand.roman": "\u2188", - "/onehundredtwentypsquare": "\u1F1A4", - "/oneideographiccircled": "\u3280", - "/oneideographicparen": "\u3220", - "/oneinferior": "\u2081", - "/onemonospace": "\uFF11", - "/oneninth": "\u2151", - "/onenumeratorbengali": "\u09F4", - "/oneoldstyle": "\uF731", - 
"/oneparen": "\u2474", - "/oneparenthesized": "\u2474", - "/oneperiod": "\u2488", - "/onepersian": "\u06F1", - "/onequarter": "\u00BC", - "/oneroman": "\u2170", - "/oneseventh": "\u2150", - "/onesixth": "\u2159", - "/onesuperior": "\u00B9", - "/onethai": "\u0E51", - "/onethird": "\u2153", - "/onethousand.roman": "\u216F", - "/onethousand.romansmall": "\u217F", - "/onethousandcd.roman": "\u2180", - "/onsusquare": "\u3309", - "/oo": "\uA74F", - "/oogonek": "\u01EB", - "/oogonekmacron": "\u01ED", - "/oogurmukhi": "\u0A13", - "/oomatragurmukhi": "\u0A4B", - "/oomusquare": "\u330A", - "/oopen": "\u0254", - "/oparen": "\u24AA", - "/oparenthesized": "\u24AA", - "/openBook": "\u1F4D6", - "/openFileFolder": "\u1F4C2", - "/openFolder": "\u1F5C1", - "/openHandsSign": "\u1F450", - "/openLock": "\u1F513", - "/openMailboxLoweredFlag": "\u1F4ED", - "/openMailboxRaisedFlag": "\u1F4EC", - "/openbullet": "\u25E6", - "/openheadarrowleft": "\u21FD", - "/openheadarrowleftright": "\u21FF", - "/openheadarrowright": "\u21FE", - "/opensubset": "\u27C3", - "/opensuperset": "\u27C4", - "/ophiuchus": "\u26CE", - "/opposition": "\u260D", - "/opticalDisc": "\u1F4BF", - "/opticalDiscIcon": "\u1F5B8", - "/option": "\u2325", - "/orangeBook": "\u1F4D9", - "/ordfeminine": "\u00AA", - "/ordmasculine": "\u00BA", - "/ordotinside": "\u27C7", - "/original": "\u22B6", - "/ornateleftparenthesis": "\uFD3E", - "/ornaterightparenthesis": "\uFD3F", - "/orthodoxcross": "\u2626", - "/orthogonal": "\u221F", - "/orya:a": "\u0B05", - "/orya:aa": "\u0B06", - "/orya:aasign": "\u0B3E", - "/orya:ai": "\u0B10", - "/orya:ailengthmark": "\u0B56", - "/orya:aisign": "\u0B48", - "/orya:anusvara": "\u0B02", - "/orya:au": "\u0B14", - "/orya:aulengthmark": "\u0B57", - "/orya:ausign": "\u0B4C", - "/orya:avagraha": "\u0B3D", - "/orya:ba": "\u0B2C", - "/orya:bha": "\u0B2D", - "/orya:ca": "\u0B1A", - "/orya:candrabindu": "\u0B01", - "/orya:cha": "\u0B1B", - "/orya:da": "\u0B26", - "/orya:dda": "\u0B21", - "/orya:ddha": "\u0B22", - 
"/orya:dha": "\u0B27", - "/orya:e": "\u0B0F", - "/orya:eight": "\u0B6E", - "/orya:esign": "\u0B47", - "/orya:five": "\u0B6B", - "/orya:four": "\u0B6A", - "/orya:fractiononeeighth": "\u0B76", - "/orya:fractiononehalf": "\u0B73", - "/orya:fractiononequarter": "\u0B72", - "/orya:fractiononesixteenth": "\u0B75", - "/orya:fractionthreequarters": "\u0B74", - "/orya:fractionthreesixteenths": "\u0B77", - "/orya:ga": "\u0B17", - "/orya:gha": "\u0B18", - "/orya:ha": "\u0B39", - "/orya:i": "\u0B07", - "/orya:ii": "\u0B08", - "/orya:iisign": "\u0B40", - "/orya:isign": "\u0B3F", - "/orya:isshar": "\u0B70", - "/orya:ja": "\u0B1C", - "/orya:jha": "\u0B1D", - "/orya:ka": "\u0B15", - "/orya:kha": "\u0B16", - "/orya:la": "\u0B32", - "/orya:lla": "\u0B33", - "/orya:llvocal": "\u0B61", - "/orya:llvocalsign": "\u0B63", - "/orya:lvocal": "\u0B0C", - "/orya:lvocalsign": "\u0B62", - "/orya:ma": "\u0B2E", - "/orya:na": "\u0B28", - "/orya:nga": "\u0B19", - "/orya:nine": "\u0B6F", - "/orya:nna": "\u0B23", - "/orya:nukta": "\u0B3C", - "/orya:nya": "\u0B1E", - "/orya:o": "\u0B13", - "/orya:one": "\u0B67", - "/orya:osign": "\u0B4B", - "/orya:pa": "\u0B2A", - "/orya:pha": "\u0B2B", - "/orya:ra": "\u0B30", - "/orya:rha": "\u0B5D", - "/orya:rra": "\u0B5C", - "/orya:rrvocal": "\u0B60", - "/orya:rrvocalsign": "\u0B44", - "/orya:rvocal": "\u0B0B", - "/orya:rvocalsign": "\u0B43", - "/orya:sa": "\u0B38", - "/orya:seven": "\u0B6D", - "/orya:sha": "\u0B36", - "/orya:six": "\u0B6C", - "/orya:ssa": "\u0B37", - "/orya:ta": "\u0B24", - "/orya:tha": "\u0B25", - "/orya:three": "\u0B69", - "/orya:tta": "\u0B1F", - "/orya:ttha": "\u0B20", - "/orya:two": "\u0B68", - "/orya:u": "\u0B09", - "/orya:usign": "\u0B41", - "/orya:uu": "\u0B0A", - "/orya:uusign": "\u0B42", - "/orya:va": "\u0B35", - "/orya:virama": "\u0B4D", - "/orya:visarga": "\u0B03", - "/orya:wa": "\u0B71", - "/orya:ya": "\u0B2F", - "/orya:yya": "\u0B5F", - "/orya:zero": "\u0B66", - "/oscript": "\u2134", - "/oshortdeva": "\u0912", - 
"/oshortvowelsigndeva": "\u094A", - "/oslash": "\u00F8", - "/oslashacute": "\u01FF", - "/osmallhiragana": "\u3049", - "/osmallkatakana": "\u30A9", - "/osmallkatakanahalfwidth": "\uFF6B", - "/ostroke": "\uA74B", - "/ostrokeacute": "\u01FF", - "/osuperior": "\uF6F0", - "/otcyr": "\u047F", - "/otcyrillic": "\u047F", - "/otilde": "\u00F5", - "/otildeacute": "\u1E4D", - "/otildedieresis": "\u1E4F", - "/otildemacron": "\u022D", - "/ou": "\u0223", - "/oubopomofo": "\u3121", - "/ounce": "\u2125", - "/outboxTray": "\u1F4E4", - "/outerjoinfull": "\u27D7", - "/outerjoinleft": "\u27D5", - "/outerjoinright": "\u27D6", - "/outputpassiveup": "\u2392", - "/overlap": "\u1F5D7", - "/overline": "\u203E", - "/overlinecenterline": "\uFE4A", - "/overlinecmb": "\u0305", - "/overlinedashed": "\uFE49", - "/overlinedblwavy": "\uFE4C", - "/overlinewavy": "\uFE4B", - "/overscore": "\u00AF", - "/ovfullwidth": "\u3375", - "/ovowelsignbengali": "\u09CB", - "/ovowelsigndeva": "\u094B", - "/ovowelsigngujarati": "\u0ACB", - "/ox": "\u1F402", - "/p": "\u0070", - "/p.inferior": "\u209A", - "/paampsfullwidth": "\u3380", - "/paampssquare": "\u3380", - "/paasentosquare": "\u332B", - "/paatusquare": "\u332C", - "/pabengali": "\u09AA", - "/pacerek": "\uA989", - "/package": "\u1F4E6", - "/pacute": "\u1E55", - "/padeva": "\u092A", - "/pafullwidth": "\u33A9", - "/page": "\u1F5CF", - "/pageCircledText": "\u1F5DF", - "/pageCurl": "\u1F4C3", - "/pageFacingUp": "\u1F4C4", - "/pagedown": "\u21DF", - "/pager": "\u1F4DF", - "/pages": "\u1F5D0", - "/pageup": "\u21DE", - "/pagoda": "\u1F6D4", - "/pagujarati": "\u0AAA", - "/pagurmukhi": "\u0A2A", - "/pahiragana": "\u3071", - "/paiyannoithai": "\u0E2F", - "/pakatakana": "\u30D1", - "/palatalizationcyrilliccmb": "\u0484", - "/palatcmbcyr": "\u0484", - "/pallas": "\u26B4", - "/palmTree": "\u1F334", - "/palmbranch": "\u2E19", - "/palochkacyr": "\u04CF", - "/palochkacyrillic": "\u04C0", - "/pamurda": "\uA9A6", - "/pandaFace": "\u1F43C", - "/pangkatpada": "\uA9C7", - 
"/pangkon": "\uA9C0", - "/pangrangkep": "\uA9CF", - "/pansioskorean": "\u317F", - "/panyangga": "\uA980", - "/paperclip": "\u1F4CE", - "/paragraph": "\u00B6", - "/paragraphos": "\u2E0F", - "/paragraphosforked": "\u2E10", - "/paragraphosforkedreversed": "\u2E11", - "/paragraphseparator": "\u2029", - "/parallel": "\u2225", - "/parallelogramblack": "\u25B0", - "/parallelogramwhite": "\u25B1", - "/parenbottom": "\u23DD", - "/parendblleft": "\u2E28", - "/parendblright": "\u2E29", - "/parenextensionleft": "\u239C", - "/parenextensionright": "\u239F", - "/parenflatleft": "\u27EE", - "/parenflatright": "\u27EF", - "/parenhookupleft": "\u239B", - "/parenhookupright": "\u239E", - "/parenleft": "\u0028", - "/parenleft.inferior": "\u208D", - "/parenleft.superior": "\u207D", - "/parenleftaltonearabic": "\uFD3E", - "/parenleftbt": "\uF8ED", - "/parenleftex": "\uF8EC", - "/parenleftinferior": "\u208D", - "/parenleftmonospace": "\uFF08", - "/parenleftsmall": "\uFE59", - "/parenleftsuperior": "\u207D", - "/parenlefttp": "\uF8EB", - "/parenleftvertical": "\uFE35", - "/parenlowerhookleft": "\u239D", - "/parenlowerhookright": "\u23A0", - "/parenright": "\u0029", - "/parenright.inferior": "\u208E", - "/parenright.superior": "\u207E", - "/parenrightaltonearabic": "\uFD3F", - "/parenrightbt": "\uF8F8", - "/parenrightex": "\uF8F7", - "/parenrightinferior": "\u208E", - "/parenrightmonospace": "\uFF09", - "/parenrightsmall": "\uFE5A", - "/parenrightsuperior": "\u207E", - "/parenrighttp": "\uF8F6", - "/parenrightvertical": "\uFE36", - "/parentop": "\u23DC", - "/partalternationmark": "\u303D", - "/partialdiff": "\u2202", - "/partnership": "\u3250", - "/partyPopper": "\u1F389", - "/paseq:hb": "\u05C0", - "/paseqhebrew": "\u05C0", - "/pashta:hb": "\u0599", - "/pashtahebrew": "\u0599", - "/pasquare": "\u33A9", - "/passengerShip": "\u1F6F3", - "/passivedown": "\u2391", - "/passportControl": "\u1F6C2", - "/patah": "\u05B7", - "/patah11": "\u05B7", - "/patah1d": "\u05B7", - "/patah2a": "\u05B7", - 
"/patah:hb": "\u05B7", - "/patahhebrew": "\u05B7", - "/patahnarrowhebrew": "\u05B7", - "/patahquarterhebrew": "\u05B7", - "/patahwidehebrew": "\u05B7", - "/pawPrints": "\u1F43E", - "/pawnblack": "\u265F", - "/pawnwhite": "\u2659", - "/pazer:hb": "\u05A1", - "/pazerhebrew": "\u05A1", - "/pbopomofo": "\u3106", - "/pcfullwidth": "\u3376", - "/pcircle": "\u24DF", - "/pdot": "\u1E57", - "/pdotaccent": "\u1E57", - "/pe": "\u05E4", - "/pe:hb": "\u05E4", - "/peace": "\u262E", - "/peach": "\u1F351", - "/pear": "\u1F350", - "/pecyr": "\u043F", - "/pecyrillic": "\u043F", - "/pedagesh": "\uFB44", - "/pedageshhebrew": "\uFB44", - "/pedestrian": "\u1F6B6", - "/peezisquare": "\u333B", - "/pefinaldageshhebrew": "\uFB43", - "/peh.fina": "\uFB57", - "/peh.init": "\uFB58", - "/peh.isol": "\uFB56", - "/peh.medi": "\uFB59", - "/peharabic": "\u067E", - "/peharmenian": "\u057A", - "/pehebrew": "\u05E4", - "/peheh": "\u06A6", - "/peheh.fina": "\uFB6F", - "/peheh.init": "\uFB70", - "/peheh.isol": "\uFB6E", - "/peheh.medi": "\uFB71", - "/pehfinalarabic": "\uFB57", - "/pehinitialarabic": "\uFB58", - "/pehiragana": "\u307A", - "/pehmedialarabic": "\uFB59", - "/pehookcyr": "\u04A7", - "/pekatakana": "\u30DA", - "/pemiddlehookcyrillic": "\u04A7", - "/penOverStampedEnvelope": "\u1F586", - "/pengkalconsonant": "\uA9BE", - "/penguin": "\u1F427", - "/penihisquare": "\u3338", - "/pensiveFace": "\u1F614", - "/pensusquare": "\u333A", - "/pentagram": "\u26E4", - "/pentasememetrical": "\u23D9", - "/pepetvowel": "\uA9BC", - "/per": "\u214C", - "/perafehebrew": "\uFB4E", - "/percent": "\u0025", - "/percentarabic": "\u066A", - "/percentmonospace": "\uFF05", - "/percentsmall": "\uFE6A", - "/percussivebidental": "\u02AD", - "/percussivebilabial": "\u02AC", - "/performingArts": "\u1F3AD", - "/period": "\u002E", - "/periodarmenian": "\u0589", - "/periodcentered": "\u00B7", - "/periodhalfwidth": "\uFF61", - "/periodinferior": "\uF6E7", - "/periodmonospace": "\uFF0E", - "/periodsmall": "\uFE52", - 
"/periodsuperior": "\uF6E8", - "/periodurdu": "\u06D4", - "/perispomenigreekcmb": "\u0342", - "/permanentpaper": "\u267E", - "/permille": "\u0609", - "/perpendicular": "\u22A5", - "/perseveringFace": "\u1F623", - "/personBlondHair": "\u1F471", - "/personBowingDeeply": "\u1F647", - "/personFrowning": "\u1F64D", - "/personRaisingBothHandsInCelebration": "\u1F64C", - "/personWithFoldedHands": "\u1F64F", - "/personWithPoutingFace": "\u1F64E", - "/personalComputer": "\u1F4BB", - "/personball": "\u26F9", - "/perspective": "\u2306", - "/pertenthousandsign": "\u2031", - "/perthousand": "\u2030", - "/peseta": "\u20A7", - "/peso": "\u20B1", - "/pesosquare": "\u3337", - "/petailcyr": "\u0525", - "/pewithdagesh:hb": "\uFB44", - "/pewithrafe:hb": "\uFB4E", - "/pffullwidth": "\u338A", - "/pflourish": "\uA753", - "/pfsquare": "\u338A", - "/phabengali": "\u09AB", - "/phadeva": "\u092B", - "/phagujarati": "\u0AAB", - "/phagurmukhi": "\u0A2B", - "/pharyngealvoicedfricative": "\u0295", - "/phfullwidth": "\u33D7", - "/phi": "\u03C6", - "/phi.math": "\u03D5", - "/phi1": "\u03D5", - "/phieuphacirclekorean": "\u327A", - "/phieuphaparenkorean": "\u321A", - "/phieuphcirclekorean": "\u326C", - "/phieuphkorean": "\u314D", - "/phieuphparenkorean": "\u320C", - "/philatin": "\u0278", - "/phinthuthai": "\u0E3A", - "/phisymbolgreek": "\u03D5", - "/phitailless": "\u2C77", - "/phon:AEsmall": "\u1D01", - "/phon:Aemod": "\u1D2D", - "/phon:Amod": "\u1D2C", - "/phon:Asmall": "\u1D00", - "/phon:Bbarmod": "\u1D2F", - "/phon:Bbarsmall": "\u1D03", - "/phon:Bmod": "\u1D2E", - "/phon:Csmall": "\u1D04", - "/phon:Dmod": "\u1D30", - "/phon:Dsmall": "\u1D05", - "/phon:ENcyrmod": "\u1D78", - "/phon:Elsmallcyr": "\u1D2B", - "/phon:Emod": "\u1D31", - "/phon:Ereversedmod": "\u1D32", - "/phon:Esmall": "\u1D07", - "/phon:Ethsmall": "\u1D06", - "/phon:Ezhsmall": "\u1D23", - "/phon:Gmod": "\u1D33", - "/phon:Hmod": "\u1D34", - "/phon:Imod": "\u1D35", - "/phon:Ismallmod": "\u1DA6", - "/phon:Ismallstroke": "\u1D7B", - 
"/phon:Istrokesmallmod": "\u1DA7", - "/phon:Jmod": "\u1D36", - "/phon:Jsmall": "\u1D0A", - "/phon:Kmod": "\u1D37", - "/phon:Ksmall": "\u1D0B", - "/phon:Lmod": "\u1D38", - "/phon:Lsmallmod": "\u1DAB", - "/phon:Lsmallstroke": "\u1D0C", - "/phon:Mmod": "\u1D39", - "/phon:Msmall": "\u1D0D", - "/phon:Nmod": "\u1D3A", - "/phon:Nreversedmod": "\u1D3B", - "/phon:Nsmallmod": "\u1DB0", - "/phon:Nsmallreversed": "\u1D0E", - "/phon:OUsmall": "\u1D15", - "/phon:Omod": "\u1D3C", - "/phon:Oopensmall": "\u1D10", - "/phon:Osmall": "\u1D0F", - "/phon:Oumod": "\u1D3D", - "/phon:Pmod": "\u1D3E", - "/phon:Psmall": "\u1D18", - "/phon:Rmod": "\u1D3F", - "/phon:Rsmallreversed": "\u1D19", - "/phon:Rsmallturned": "\u1D1A", - "/phon:Tmod": "\u1D40", - "/phon:Tsmall": "\u1D1B", - "/phon:Umod": "\u1D41", - "/phon:Usmall": "\u1D1C", - "/phon:Usmallmod": "\u1DB8", - "/phon:Usmallstroke": "\u1D7E", - "/phon:Vsmall": "\u1D20", - "/phon:Wmod": "\u1D42", - "/phon:Wsmall": "\u1D21", - "/phon:Zsmall": "\u1D22", - "/phon:aeturned": "\u1D02", - "/phon:aeturnedmod": "\u1D46", - "/phon:ain": "\u1D25", - "/phon:ainmod": "\u1D5C", - "/phon:alphamod": "\u1D45", - "/phon:alpharetroflexhook": "\u1D90", - "/phon:alphaturnedmod": "\u1D9B", - "/phon:amod": "\u1D43", - "/phon:aretroflexhook": "\u1D8F", - "/phon:aturnedmod": "\u1D44", - "/phon:betamod": "\u1D5D", - "/phon:bmiddletilde": "\u1D6C", - "/phon:bmod": "\u1D47", - "/phon:bpalatalhook": "\u1D80", - "/phon:ccurlmod": "\u1D9D", - "/phon:chimod": "\u1D61", - "/phon:cmod": "\u1D9C", - "/phon:deltamod": "\u1D5F", - "/phon:dhooktail": "\u1D91", - "/phon:dmiddletilde": "\u1D6D", - "/phon:dmod": "\u1D48", - "/phon:dotlessjstrokemod": "\u1DA1", - "/phon:dpalatalhook": "\u1D81", - "/phon:emod": "\u1D49", - "/phon:engmod": "\u1D51", - "/phon:eopenmod": "\u1D4B", - "/phon:eopenretroflexhook": "\u1D93", - "/phon:eopenreversedmod": "\u1D9F", - "/phon:eopenreversedretroflexhook": "\u1D94", - "/phon:eopenturned": "\u1D08", - "/phon:eopenturnedmod": "\u1D4C", - 
"/phon:eretroflexhook": "\u1D92", - "/phon:eshmod": "\u1DB4", - "/phon:eshpalatalhook": "\u1D8B", - "/phon:eshretroflexhook": "\u1D98", - "/phon:ethmod": "\u1D9E", - "/phon:ezhmod": "\u1DBE", - "/phon:ezhretroflexhook": "\u1D9A", - "/phon:fmiddletilde": "\u1D6E", - "/phon:fmod": "\u1DA0", - "/phon:fpalatalhook": "\u1D82", - "/phon:ginsular": "\u1D79", - "/phon:gmod": "\u1D4D", - "/phon:gpalatalhook": "\u1D83", - "/phon:gr:Gammasmall": "\u1D26", - "/phon:gr:Lambdasmall": "\u1D27", - "/phon:gr:Pismall": "\u1D28", - "/phon:gr:Psismall": "\u1D2A", - "/phon:gr:RsmallHO": "\u1D29", - "/phon:gr:betasubscript": "\u1D66", - "/phon:gr:chisubscript": "\u1D6A", - "/phon:gr:gammamod": "\u1D5E", - "/phon:gr:gammasubscript": "\u1D67", - "/phon:gr:phimod": "\u1D60", - "/phon:gr:phisubscript": "\u1D69", - "/phon:gr:rhosubscript": "\u1D68", - "/phon:gscriptmod": "\u1DA2", - "/phon:gturned": "\u1D77", - "/phon:hturnedmod": "\u1DA3", - "/phon:iotamod": "\u1DA5", - "/phon:iotastroke": "\u1D7C", - "/phon:iretroflexhook": "\u1D96", - "/phon:istrokemod": "\u1DA4", - "/phon:isubscript": "\u1D62", - "/phon:iturned": "\u1D09", - "/phon:iturnedmod": "\u1D4E", - "/phon:jcrossedtailmod": "\u1DA8", - "/phon:kmod": "\u1D4F", - "/phon:kpalatalhook": "\u1D84", - "/phon:lpalatalhook": "\u1D85", - "/phon:lpalatalhookmod": "\u1DAA", - "/phon:lretroflexhookmod": "\u1DA9", - "/phon:mhookmod": "\u1DAC", - "/phon:mlonglegturnedmod": "\u1DAD", - "/phon:mmiddletilde": "\u1D6F", - "/phon:mmod": "\u1D50", - "/phon:mpalatalhook": "\u1D86", - "/phon:mturnedmod": "\u1D5A", - "/phon:mturnedsideways": "\u1D1F", - "/phon:nlefthookmod": "\u1DAE", - "/phon:nmiddletilde": "\u1D70", - "/phon:npalatalhook": "\u1D87", - "/phon:nretroflexhookmod": "\u1DAF", - "/phon:obarmod": "\u1DB1", - "/phon:obottomhalf": "\u1D17", - "/phon:obottomhalfmod": "\u1D55", - "/phon:oeturned": "\u1D14", - "/phon:omod": "\u1D52", - "/phon:oopenmod": "\u1D53", - "/phon:oopenretroflexhook": "\u1D97", - "/phon:oopensideways": "\u1D12", - 
"/phon:osideways": "\u1D11", - "/phon:ostrokesideways": "\u1D13", - "/phon:otophalf": "\u1D16", - "/phon:otophalfmod": "\u1D54", - "/phon:phimod": "\u1DB2", - "/phon:pmiddletilde": "\u1D71", - "/phon:pmod": "\u1D56", - "/phon:ppalatalhook": "\u1D88", - "/phon:pstroke": "\u1D7D", - "/phon:rfishmiddletilde": "\u1D73", - "/phon:rmiddletilde": "\u1D72", - "/phon:rpalatalhook": "\u1D89", - "/phon:rsubscript": "\u1D63", - "/phon:schwamod": "\u1D4A", - "/phon:schwaretroflexhook": "\u1D95", - "/phon:shookmod": "\u1DB3", - "/phon:smiddletilde": "\u1D74", - "/phon:spalatalhook": "\u1D8A", - "/phon:spirantvoicedlaryngeal": "\u1D24", - "/phon:thetamod": "\u1DBF", - "/phon:thstrike": "\u1D7A", - "/phon:tmiddletilde": "\u1D75", - "/phon:tmod": "\u1D57", - "/phon:tpalatalhookmod": "\u1DB5", - "/phon:ubarmod": "\u1DB6", - "/phon:ue": "\u1D6B", - "/phon:umod": "\u1D58", - "/phon:upsilonmod": "\u1DB7", - "/phon:upsilonstroke": "\u1D7F", - "/phon:uretroflexhook": "\u1D99", - "/phon:usideways": "\u1D1D", - "/phon:usidewaysdieresised": "\u1D1E", - "/phon:usidewaysmod": "\u1D59", - "/phon:usubscript": "\u1D64", - "/phon:vhookmod": "\u1DB9", - "/phon:vmod": "\u1D5B", - "/phon:vpalatalhook": "\u1D8C", - "/phon:vsubscript": "\u1D65", - "/phon:vturnedmod": "\u1DBA", - "/phon:xpalatalhook": "\u1D8D", - "/phon:zcurlmod": "\u1DBD", - "/phon:zmiddletilde": "\u1D76", - "/phon:zmod": "\u1DBB", - "/phon:zpalatalhook": "\u1D8E", - "/phon:zretroflexhookmod": "\u1DBC", - "/phook": "\u01A5", - "/phophanthai": "\u0E1E", - "/phophungthai": "\u0E1C", - "/phosamphaothai": "\u0E20", - "/pi": "\u03C0", - "/pi.math": "\u03D6", - "/piasutorusquare": "\u332E", - "/pick": "\u26CF", - "/pidblstruck": "\u213C", - "/pieupacirclekorean": "\u3273", - "/pieupaparenkorean": "\u3213", - "/pieupcieuckorean": "\u3176", - "/pieupcirclekorean": "\u3265", - "/pieupkiyeokkorean": "\u3172", - "/pieupkorean": "\u3142", - "/pieupparenkorean": "\u3205", - "/pieupsioskiyeokkorean": "\u3174", - "/pieupsioskorean": "\u3144", - 
"/pieupsiostikeutkorean": "\u3175", - "/pieupthieuthkorean": "\u3177", - "/pieuptikeutkorean": "\u3173", - "/pig": "\u1F416", - "/pigFace": "\u1F437", - "/pigNose": "\u1F43D", - "/pihiragana": "\u3074", - "/pikatakana": "\u30D4", - "/pikosquare": "\u3330", - "/pikurusquare": "\u332F", - "/pilcrowsignreversed": "\u204B", - "/pileOfPoo": "\u1F4A9", - "/pill": "\u1F48A", - "/pineDecoration": "\u1F38D", - "/pineapple": "\u1F34D", - "/pisces": "\u2653", - "/piselehpada": "\uA9CC", - "/pistol": "\u1F52B", - "/pisymbolgreek": "\u03D6", - "/pitchfork": "\u22D4", - "/piwrarmenian": "\u0583", - "/placeOfWorship": "\u1F6D0", - "/placeofinterestsign": "\u2318", - "/planck": "\u210E", - "/plancktwopi": "\u210F", - "/plus": "\u002B", - "/plus.inferior": "\u208A", - "/plus.superior": "\u207A", - "/plusbelowcmb": "\u031F", - "/pluscircle": "\u2295", - "/plusminus": "\u00B1", - "/plusmod": "\u02D6", - "/plusmonospace": "\uFF0B", - "/plussignalt:hb": "\uFB29", - "/plussignmod": "\u02D6", - "/plussmall": "\uFE62", - "/plussuperior": "\u207A", - "/pluto": "\u2647", - "/pmfullwidth": "\u33D8", - "/pmonospace": "\uFF50", - "/pmsquare": "\u33D8", - "/pocketCalculator": "\u1F5A9", - "/poeticverse": "\u060E", - "/pohiragana": "\u307D", - "/pointerleftblack": "\u25C4", - "/pointerleftwhite": "\u25C5", - "/pointerrightblack": "\u25BA", - "/pointerrightwhite": "\u25BB", - "/pointingindexdownwhite": "\u261F", - "/pointingindexleftblack": "\u261A", - "/pointingindexleftwhite": "\u261C", - "/pointingindexrightblack": "\u261B", - "/pointingindexrightwhite": "\u261E", - "/pointingindexupwhite": "\u261D", - "/pointingtriangledownheavywhite": "\u26DB", - "/pointosquare": "\u333D", - "/pointring": "\u2E30", - "/pokatakana": "\u30DD", - "/pokrytiecmbcyr": "\u0487", - "/policeCar": "\u1F693", - "/policeCarsRevolvingLight": "\u1F6A8", - "/policeOfficer": "\u1F46E", - "/pondosquare": "\u3340", - "/poodle": "\u1F429", - "/popcorn": "\u1F37F", - "/popdirectionalformatting": "\u202C", - 
"/popdirectionalisolate": "\u2069", - "/poplathai": "\u0E1B", - "/portableStereo": "\u1F4FE", - "/positionindicator": "\u2316", - "/postalHorn": "\u1F4EF", - "/postalmark": "\u3012", - "/postalmarkface": "\u3020", - "/postbox": "\u1F4EE", - "/potOfFood": "\u1F372", - "/potableWater": "\u1F6B0", - "/pouch": "\u1F45D", - "/poultryLeg": "\u1F357", - "/poutingCatFace": "\u1F63E", - "/poutingFace": "\u1F621", - "/power": "\u23FB", - "/poweron": "\u23FD", - "/poweronoff": "\u23FC", - "/powersleep": "\u23FE", - "/pparen": "\u24AB", - "/pparenthesized": "\u24AB", - "/ppmfullwidth": "\u33D9", - "/prayerBeads": "\u1F4FF", - "/precedes": "\u227A", - "/precedesbutnotequivalent": "\u22E8", - "/precedesorequal": "\u227C", - "/precedesorequivalent": "\u227E", - "/precedesunderrelation": "\u22B0", - "/prescription": "\u211E", - "/preversedepigraphic": "\uA7FC", - "/previouspage": "\u2397", - "/prfullwidth": "\u33DA", - "/primedblmod": "\u02BA", - "/primemod": "\u02B9", - "/primereversed": "\u2035", - "/princess": "\u1F478", - "/printer": "\u1F5A8", - "/printerIcon": "\u1F5B6", - "/printideographiccircled": "\u329E", - "/printscreen": "\u2399", - "/product": "\u220F", - "/prohibitedSign": "\u1F6C7", - "/projective": "\u2305", - "/prolongedkana": "\u30FC", - "/propellor": "\u2318", - "/propersubset": "\u2282", - "/propersuperset": "\u2283", - "/propertyline": "\u214A", - "/proportion": "\u2237", - "/proportional": "\u221D", - "/psfullwidth": "\u33B0", - "/psi": "\u03C8", - "/psicyr": "\u0471", - "/psicyrillic": "\u0471", - "/psilicmbcyr": "\u0486", - "/psilipneumatacyrilliccmb": "\u0486", - "/pssquare": "\u33B0", - "/pstrokedescender": "\uA751", - "/ptail": "\uA755", - "/publicAddressLoudspeaker": "\u1F4E2", - "/puhiragana": "\u3077", - "/pukatakana": "\u30D7", - "/punctuationspace": "\u2008", - "/purpleHeart": "\u1F49C", - "/purse": "\u1F45B", - "/pushpin": "\u1F4CC", - "/putLitterInItsPlace": "\u1F6AE", - "/pvfullwidth": "\u33B4", - "/pvsquare": "\u33B4", - "/pwfullwidth": 
"\u33BA", - "/pwsquare": "\u33BA", - "/q": "\u0071", - "/qacyr": "\u051B", - "/qadeva": "\u0958", - "/qadma:hb": "\u05A8", - "/qadmahebrew": "\u05A8", - "/qaf": "\u0642", - "/qaf.fina": "\uFED6", - "/qaf.init": "\uFED7", - "/qaf.init_alefmaksura.fina": "\uFC35", - "/qaf.init_hah.fina": "\uFC33", - "/qaf.init_hah.medi": "\uFCC2", - "/qaf.init_meem.fina": "\uFC34", - "/qaf.init_meem.medi": "\uFCC3", - "/qaf.init_meem.medi_hah.medi": "\uFDB4", - "/qaf.init_yeh.fina": "\uFC36", - "/qaf.isol": "\uFED5", - "/qaf.medi": "\uFED8", - "/qaf.medi_alefmaksura.fina": "\uFC7E", - "/qaf.medi_meem.medi_hah.fina": "\uFD7E", - "/qaf.medi_meem.medi_meem.fina": "\uFD7F", - "/qaf.medi_meem.medi_yeh.fina": "\uFDB2", - "/qaf.medi_yeh.fina": "\uFC7F", - "/qaf_lam_alefmaksuraabove": "\u06D7", - "/qafarabic": "\u0642", - "/qafdotabove": "\u06A7", - "/qaffinalarabic": "\uFED6", - "/qafinitialarabic": "\uFED7", - "/qafmedialarabic": "\uFED8", - "/qafthreedotsabove": "\u06A8", - "/qamats": "\u05B8", - "/qamats10": "\u05B8", - "/qamats1a": "\u05B8", - "/qamats1c": "\u05B8", - "/qamats27": "\u05B8", - "/qamats29": "\u05B8", - "/qamats33": "\u05B8", - "/qamats:hb": "\u05B8", - "/qamatsQatan:hb": "\u05C7", - "/qamatsde": "\u05B8", - "/qamatshebrew": "\u05B8", - "/qamatsnarrowhebrew": "\u05B8", - "/qamatsqatanhebrew": "\u05B8", - "/qamatsqatannarrowhebrew": "\u05B8", - "/qamatsqatanquarterhebrew": "\u05B8", - "/qamatsqatanwidehebrew": "\u05B8", - "/qamatsquarterhebrew": "\u05B8", - "/qamatswidehebrew": "\u05B8", - "/qarneFarah:hb": "\u059F", - "/qarneyparahebrew": "\u059F", - "/qbopomofo": "\u3111", - "/qcircle": "\u24E0", - "/qdiagonalstroke": "\uA759", - "/qhook": "\u02A0", - "/qhooktail": "\u024B", - "/qmonospace": "\uFF51", - "/qof": "\u05E7", - "/qof:hb": "\u05E7", - "/qofdagesh": "\uFB47", - "/qofdageshhebrew": "\uFB47", - "/qofhatafpatah": "\u05E7", - "/qofhatafpatahhebrew": "\u05E7", - "/qofhatafsegol": "\u05E7", - "/qofhatafsegolhebrew": "\u05E7", - "/qofhebrew": "\u05E7", - "/qofhiriq": 
"\u05E7", - "/qofhiriqhebrew": "\u05E7", - "/qofholam": "\u05E7", - "/qofholamhebrew": "\u05E7", - "/qofpatah": "\u05E7", - "/qofpatahhebrew": "\u05E7", - "/qofqamats": "\u05E7", - "/qofqamatshebrew": "\u05E7", - "/qofqubuts": "\u05E7", - "/qofqubutshebrew": "\u05E7", - "/qofsegol": "\u05E7", - "/qofsegolhebrew": "\u05E7", - "/qofsheva": "\u05E7", - "/qofshevahebrew": "\u05E7", - "/qoftsere": "\u05E7", - "/qoftserehebrew": "\u05E7", - "/qofwithdagesh:hb": "\uFB47", - "/qparen": "\u24AC", - "/qparenthesized": "\u24AC", - "/qpdigraph": "\u0239", - "/qstrokedescender": "\uA757", - "/quadarrowdownfunc": "\u2357", - "/quadarrowleftfunc": "\u2347", - "/quadarrowrightfunc": "\u2348", - "/quadarrowupfunc": "\u2350", - "/quadbackslashfunc": "\u2342", - "/quadcaretdownfunc": "\u234C", - "/quadcaretupfunc": "\u2353", - "/quadcirclefunc": "\u233C", - "/quadcolonfunc": "\u2360", - "/quaddelfunc": "\u2354", - "/quaddeltafunc": "\u234D", - "/quaddiamondfunc": "\u233A", - "/quaddividefunc": "\u2339", - "/quadequalfunc": "\u2338", - "/quadfunc": "\u2395", - "/quadgreaterfunc": "\u2344", - "/quadjotfunc": "\u233B", - "/quadlessfunc": "\u2343", - "/quadnotequalfunc": "\u236F", - "/quadquestionfunc": "\u2370", - "/quadrantLowerLeft": "\u2596", - "/quadrantLowerRight": "\u2597", - "/quadrantUpperLeft": "\u2598", - "/quadrantUpperLeftAndLowerLeftAndLowerRight": "\u2599", - "/quadrantUpperLeftAndLowerRight": "\u259A", - "/quadrantUpperLeftAndUpperRightAndLowerLeft": "\u259B", - "/quadrantUpperLeftAndUpperRightAndLowerRight": "\u259C", - "/quadrantUpperRight": "\u259D", - "/quadrantUpperRightAndLowerLeft": "\u259E", - "/quadrantUpperRightAndLowerLeftAndLowerRight": "\u259F", - "/quadrupleminute": "\u2057", - "/quadslashfunc": "\u2341", - "/quarternote": "\u2669", - "/qubuts": "\u05BB", - "/qubuts18": "\u05BB", - "/qubuts25": "\u05BB", - "/qubuts31": "\u05BB", - "/qubuts:hb": "\u05BB", - "/qubutshebrew": "\u05BB", - "/qubutsnarrowhebrew": "\u05BB", - "/qubutsquarterhebrew": "\u05BB", - 
"/qubutswidehebrew": "\u05BB", - "/queenblack": "\u265B", - "/queenwhite": "\u2655", - "/question": "\u003F", - "/questionarabic": "\u061F", - "/questionarmenian": "\u055E", - "/questiondbl": "\u2047", - "/questiondown": "\u00BF", - "/questiondownsmall": "\uF7BF", - "/questionedequal": "\u225F", - "/questionexclamationmark": "\u2048", - "/questiongreek": "\u037E", - "/questionideographiccircled": "\u3244", - "/questionmonospace": "\uFF1F", - "/questionreversed": "\u2E2E", - "/questionsmall": "\uF73F", - "/quincunx": "\u26BB", - "/quotedbl": "\u0022", - "/quotedblbase": "\u201E", - "/quotedblleft": "\u201C", - "/quotedbllowreversed": "\u2E42", - "/quotedblmonospace": "\uFF02", - "/quotedblprime": "\u301E", - "/quotedblprimereversed": "\u301D", - "/quotedblreversed": "\u201F", - "/quotedblright": "\u201D", - "/quoteleft": "\u2018", - "/quoteleftreversed": "\u201B", - "/quotequadfunc": "\u235E", - "/quotereversed": "\u201B", - "/quoteright": "\u2019", - "/quoterightn": "\u0149", - "/quotesinglbase": "\u201A", - "/quotesingle": "\u0027", - "/quotesinglemonospace": "\uFF07", - "/quoteunderlinefunc": "\u2358", - "/r": "\u0072", - "/raagung": "\uA9AC", - "/raarmenian": "\u057C", - "/rabbit": "\u1F407", - "/rabbitFace": "\u1F430", - "/rabengali": "\u09B0", - "/racingCar": "\u1F3CE", - "/racingMotorcycle": "\u1F3CD", - "/racirclekatakana": "\u32F6", - "/racute": "\u0155", - "/radeva": "\u0930", - "/radfullwidth": "\u33AD", - "/radical": "\u221A", - "/radicalbottom": "\u23B7", - "/radicalex": "\uF8E5", - "/radio": "\u1F4FB", - "/radioButton": "\u1F518", - "/radioactive": "\u2622", - "/radovers2fullwidth": "\u33AF", - "/radoversfullwidth": "\u33AE", - "/radoverssquare": "\u33AE", - "/radoverssquaredsquare": "\u33AF", - "/radsquare": "\u33AD", - "/rafe": "\u05BF", - "/rafe:hb": "\u05BF", - "/rafehebrew": "\u05BF", - "/ragujarati": "\u0AB0", - "/ragurmukhi": "\u0A30", - "/rahiragana": "\u3089", - "/railwayCar": "\u1F683", - "/railwayTrack": "\u1F6E4", - "/rain": "\u26C6", - 
"/rainbow": "\u1F308", - "/raisedHandFingersSplayed": "\u1F590", - "/raisedHandPartBetweenMiddleAndRingFingers": "\u1F596", - "/raisedmcsign": "\u1F16A", - "/raisedmdsign": "\u1F16B", - "/rakatakana": "\u30E9", - "/rakatakanahalfwidth": "\uFF97", - "/ralowerdiagonalbengali": "\u09F1", - "/ram": "\u1F40F", - "/ramiddlediagonalbengali": "\u09F0", - "/ramshorn": "\u0264", - "/rat": "\u1F400", - "/ratio": "\u2236", - "/ray": "\u0608", - "/rbopomofo": "\u3116", - "/rcaron": "\u0159", - "/rcedilla": "\u0157", - "/rcircle": "\u24E1", - "/rcommaaccent": "\u0157", - "/rdblgrave": "\u0211", - "/rdot": "\u1E59", - "/rdotaccent": "\u1E59", - "/rdotbelow": "\u1E5B", - "/rdotbelowmacron": "\u1E5D", - "/reachideographicparen": "\u3243", - "/recirclekatakana": "\u32F9", - "/recreationalVehicle": "\u1F699", - "/rectangleblack": "\u25AC", - "/rectangleverticalblack": "\u25AE", - "/rectangleverticalwhite": "\u25AF", - "/rectanglewhite": "\u25AD", - "/recycledpaper": "\u267C", - "/recyclefiveplastics": "\u2677", - "/recyclefourplastics": "\u2676", - "/recyclegeneric": "\u267A", - "/recycleoneplastics": "\u2673", - "/recyclepartiallypaper": "\u267D", - "/recyclesevenplastics": "\u2679", - "/recyclesixplastics": "\u2678", - "/recyclethreeplastics": "\u2675", - "/recycletwoplastics": "\u2674", - "/recycleuniversal": "\u2672", - "/recycleuniversalblack": "\u267B", - "/redApple": "\u1F34E", - "/redTriangleDOwn": "\u1F53B", - "/redTriangleUp": "\u1F53A", - "/referencemark": "\u203B", - "/reflexsubset": "\u2286", - "/reflexsuperset": "\u2287", - "/regionalindicatorsymbollettera": "\u1F1E6", - "/regionalindicatorsymbolletterb": "\u1F1E7", - "/regionalindicatorsymbolletterc": "\u1F1E8", - "/regionalindicatorsymbolletterd": "\u1F1E9", - "/regionalindicatorsymbollettere": "\u1F1EA", - "/regionalindicatorsymbolletterf": "\u1F1EB", - "/regionalindicatorsymbolletterg": "\u1F1EC", - "/regionalindicatorsymbolletterh": "\u1F1ED", - "/regionalindicatorsymbolletteri": "\u1F1EE", - 
"/regionalindicatorsymbolletterj": "\u1F1EF", - "/regionalindicatorsymbolletterk": "\u1F1F0", - "/regionalindicatorsymbolletterl": "\u1F1F1", - "/regionalindicatorsymbolletterm": "\u1F1F2", - "/regionalindicatorsymbollettern": "\u1F1F3", - "/regionalindicatorsymbollettero": "\u1F1F4", - "/regionalindicatorsymbolletterp": "\u1F1F5", - "/regionalindicatorsymbolletterq": "\u1F1F6", - "/regionalindicatorsymbolletterr": "\u1F1F7", - "/regionalindicatorsymbolletters": "\u1F1F8", - "/regionalindicatorsymbollettert": "\u1F1F9", - "/regionalindicatorsymbolletteru": "\u1F1FA", - "/regionalindicatorsymbolletterv": "\u1F1FB", - "/regionalindicatorsymbolletterw": "\u1F1FC", - "/regionalindicatorsymbolletterx": "\u1F1FD", - "/regionalindicatorsymbollettery": "\u1F1FE", - "/regionalindicatorsymbolletterz": "\u1F1FF", - "/registered": "\u00AE", - "/registersans": "\uF8E8", - "/registerserif": "\uF6DA", - "/reh.fina": "\uFEAE", - "/reh.init_superscriptalef.fina": "\uFC5C", - "/reh.isol": "\uFEAD", - "/rehHamzaAbove": "\u076C", - "/rehSmallTahTwoDots": "\u0771", - "/rehStroke": "\u075B", - "/rehTwoDotsVerticallyAbove": "\u076B", - "/rehVabove": "\u0692", - "/rehVbelow": "\u0695", - "/reharabic": "\u0631", - "/reharmenian": "\u0580", - "/rehdotbelow": "\u0694", - "/rehdotbelowdotabove": "\u0696", - "/rehfinalarabic": "\uFEAE", - "/rehfourdotsabove": "\u0699", - "/rehinvertedV": "\u06EF", - "/rehiragana": "\u308C", - "/rehring": "\u0693", - "/rehtwodotsabove": "\u0697", - "/rehyehaleflamarabic": "\u0631", - "/rekatakana": "\u30EC", - "/rekatakanahalfwidth": "\uFF9A", - "/relievedFace": "\u1F60C", - "/religionideographiccircled": "\u32AA", - "/reminderRibbon": "\u1F397", - "/remusquare": "\u3355", - "/rentogensquare": "\u3356", - "/replacementchar": "\uFFFD", - "/replacementcharobj": "\uFFFC", - "/representideographicparen": "\u3239", - "/rerengganleft": "\uA9C1", - "/rerengganright": "\uA9C2", - "/resh": "\u05E8", - "/resh:hb": "\u05E8", - "/reshdageshhebrew": "\uFB48", - 
"/reshhatafpatah": "\u05E8", - "/reshhatafpatahhebrew": "\u05E8", - "/reshhatafsegol": "\u05E8", - "/reshhatafsegolhebrew": "\u05E8", - "/reshhebrew": "\u05E8", - "/reshhiriq": "\u05E8", - "/reshhiriqhebrew": "\u05E8", - "/reshholam": "\u05E8", - "/reshholamhebrew": "\u05E8", - "/reshpatah": "\u05E8", - "/reshpatahhebrew": "\u05E8", - "/reshqamats": "\u05E8", - "/reshqamatshebrew": "\u05E8", - "/reshqubuts": "\u05E8", - "/reshqubutshebrew": "\u05E8", - "/reshsegol": "\u05E8", - "/reshsegolhebrew": "\u05E8", - "/reshsheva": "\u05E8", - "/reshshevahebrew": "\u05E8", - "/reshtsere": "\u05E8", - "/reshtserehebrew": "\u05E8", - "/reshwide:hb": "\uFB27", - "/reshwithdagesh:hb": "\uFB48", - "/resourceideographiccircled": "\u32AE", - "/resourceideographicparen": "\u323E", - "/response": "\u211F", - "/restideographiccircled": "\u32A1", - "/restideographicparen": "\u3241", - "/restrictedentryoneleft": "\u26E0", - "/restrictedentrytwoleft": "\u26E1", - "/restroom": "\u1F6BB", - "/return": "\u23CE", - "/reversedHandMiddleFingerExtended": "\u1F595", - "/reversedRaisedHandFingersSplayed": "\u1F591", - "/reversedThumbsDownSign": "\u1F593", - "/reversedThumbsUpSign": "\u1F592", - "/reversedVictoryHand": "\u1F594", - "/reversedonehundred.roman": "\u2183", - "/reversedtilde": "\u223D", - "/reversedzecyr": "\u0511", - "/revia:hb": "\u0597", - "/reviahebrew": "\u0597", - "/reviamugrashhebrew": "\u0597", - "/revlogicalnot": "\u2310", - "/revolvingHearts": "\u1F49E", - "/rfishhook": "\u027E", - "/rfishhookreversed": "\u027F", - "/rgravedbl": "\u0211", - "/rhabengali": "\u09DD", - "/rhacyr": "\u0517", - "/rhadeva": "\u095D", - "/rho": "\u03C1", - "/rhoasper": "\u1FE5", - "/rhofunc": "\u2374", - "/rholenis": "\u1FE4", - "/rhook": "\u027D", - "/rhookturned": "\u027B", - "/rhookturnedsuperior": "\u02B5", - "/rhookturnedsupmod": "\u02B5", - "/rhostrokesymbol": "\u03FC", - "/rhosymbol": "\u03F1", - "/rhosymbolgreek": "\u03F1", - "/rhotichookmod": "\u02DE", - "/rial": "\uFDFC", - "/ribbon": 
"\u1F380", - "/riceBall": "\u1F359", - "/riceCracker": "\u1F358", - "/ricirclekatakana": "\u32F7", - "/rieulacirclekorean": "\u3271", - "/rieulaparenkorean": "\u3211", - "/rieulcirclekorean": "\u3263", - "/rieulhieuhkorean": "\u3140", - "/rieulkiyeokkorean": "\u313A", - "/rieulkiyeoksioskorean": "\u3169", - "/rieulkorean": "\u3139", - "/rieulmieumkorean": "\u313B", - "/rieulpansioskorean": "\u316C", - "/rieulparenkorean": "\u3203", - "/rieulphieuphkorean": "\u313F", - "/rieulpieupkorean": "\u313C", - "/rieulpieupsioskorean": "\u316B", - "/rieulsioskorean": "\u313D", - "/rieulthieuthkorean": "\u313E", - "/rieultikeutkorean": "\u316A", - "/rieulyeorinhieuhkorean": "\u316D", - "/right-pointingMagnifyingGlass": "\u1F50E", - "/rightAngerBubble": "\u1F5EF", - "/rightHalfBlock": "\u2590", - "/rightHandTelephoneReceiver": "\u1F57D", - "/rightOneEighthBlock": "\u2595", - "/rightSpeaker": "\u1F568", - "/rightSpeakerOneSoundWave": "\u1F569", - "/rightSpeakerThreeSoundWaves": "\u1F56A", - "/rightSpeechBubble": "\u1F5E9", - "/rightThoughtBubble": "\u1F5ED", - "/rightangle": "\u221F", - "/rightarrowoverleftarrow": "\u21C4", - "/rightdnheavyleftuplight": "\u2546", - "/rightharpoonoverleftharpoon": "\u21CC", - "/rightheavyleftdnlight": "\u252E", - "/rightheavyleftuplight": "\u2536", - "/rightheavyleftvertlight": "\u253E", - "/rightideographiccircled": "\u32A8", - "/rightlightleftdnheavy": "\u2531", - "/rightlightleftupheavy": "\u2539", - "/rightlightleftvertheavy": "\u2549", - "/righttackbelowcmb": "\u0319", - "/righttoleftembed": "\u202B", - "/righttoleftisolate": "\u2067", - "/righttoleftmark": "\u200F", - "/righttoleftoverride": "\u202E", - "/righttriangle": "\u22BF", - "/rightupheavyleftdnlight": "\u2544", - "/rihiragana": "\u308A", - "/rikatakana": "\u30EA", - "/rikatakanahalfwidth": "\uFF98", - "/ring": "\u02DA", - "/ringbelowcmb": "\u0325", - "/ringcmb": "\u030A", - "/ringequal": "\u2257", - "/ringhalfleft": "\u02BF", - "/ringhalfleftarmenian": "\u0559", - 
"/ringhalfleftbelowcmb": "\u031C", - "/ringhalfleftcentered": "\u02D3", - "/ringhalfleftcentredmod": "\u02D3", - "/ringhalfleftmod": "\u02BF", - "/ringhalfright": "\u02BE", - "/ringhalfrightbelowcmb": "\u0339", - "/ringhalfrightcentered": "\u02D2", - "/ringhalfrightcentredmod": "\u02D2", - "/ringhalfrightmod": "\u02BE", - "/ringinequal": "\u2256", - "/ringingBell": "\u1F56D", - "/ringlowmod": "\u02F3", - "/ringoperator": "\u2218", - "/rinsular": "\uA783", - "/rinvertedbreve": "\u0213", - "/rirasquare": "\u3352", - "/risingdiagonal": "\u27CB", - "/rittorusquare": "\u3351", - "/rlinebelow": "\u1E5F", - "/rlongleg": "\u027C", - "/rlonglegturned": "\u027A", - "/rmacrondot": "\u1E5D", - "/rmonospace": "\uFF52", - "/rnoon": "\u06BB", - "/rnoon.fina": "\uFBA1", - "/rnoon.init": "\uFBA2", - "/rnoon.isol": "\uFBA0", - "/rnoon.medi": "\uFBA3", - "/roastedSweetPotato": "\u1F360", - "/robliquestroke": "\uA7A7", - "/rocirclekatakana": "\u32FA", - "/rocket": "\u1F680", - "/rohiragana": "\u308D", - "/rokatakana": "\u30ED", - "/rokatakanahalfwidth": "\uFF9B", - "/rolled-upNewspaper": "\u1F5DE", - "/rollerCoaster": "\u1F3A2", - "/rookblack": "\u265C", - "/rookwhite": "\u2656", - "/rooster": "\u1F413", - "/roruathai": "\u0E23", - "/rose": "\u1F339", - "/rosette": "\u1F3F5", - "/roundPushpin": "\u1F4CD", - "/roundedzeroabove": "\u06DF", - "/rowboat": "\u1F6A3", - "/rparen": "\u24AD", - "/rparenthesized": "\u24AD", - "/rrabengali": "\u09DC", - "/rradeva": "\u0931", - "/rragurmukhi": "\u0A5C", - "/rreh": "\u0691", - "/rreh.fina": "\uFB8D", - "/rreh.isol": "\uFB8C", - "/rreharabic": "\u0691", - "/rrehfinalarabic": "\uFB8D", - "/rrotunda": "\uA75B", - "/rrvocalicbengali": "\u09E0", - "/rrvocalicdeva": "\u0960", - "/rrvocalicgujarati": "\u0AE0", - "/rrvocalicvowelsignbengali": "\u09C4", - "/rrvocalicvowelsigndeva": "\u0944", - "/rrvocalicvowelsigngujarati": "\u0AC4", - "/rstroke": "\u024D", - "/rsuperior": "\uF6F1", - "/rsupmod": "\u02B3", - "/rtailturned": "\u2C79", - "/rtblock": 
"\u2590", - "/rturned": "\u0279", - "/rturnedsuperior": "\u02B4", - "/rturnedsupmod": "\u02B4", - "/ruble": "\u20BD", - "/rucirclekatakana": "\u32F8", - "/rugbyFootball": "\u1F3C9", - "/ruhiragana": "\u308B", - "/rukatakana": "\u30EB", - "/rukatakanahalfwidth": "\uFF99", - "/rum": "\uA775", - "/rumrotunda": "\uA75D", - "/runner": "\u1F3C3", - "/runningShirtSash": "\u1F3BD", - "/rupeemarkbengali": "\u09F2", - "/rupeesignbengali": "\u09F3", - "/rupiah": "\uF6DD", - "/rupiisquare": "\u3353", - "/ruthai": "\u0E24", - "/ruuburusquare": "\u3354", - "/rvocalicbengali": "\u098B", - "/rvocalicdeva": "\u090B", - "/rvocalicgujarati": "\u0A8B", - "/rvocalicvowelsignbengali": "\u09C3", - "/rvocalicvowelsigndeva": "\u0943", - "/rvocalicvowelsigngujarati": "\u0AC3", - "/s": "\u0073", - "/s.inferior": "\u209B", - "/s_t": "\uFB06", - "/sabengali": "\u09B8", - "/sacirclekatakana": "\u32DA", - "/sacute": "\u015B", - "/sacutedotaccent": "\u1E65", - "/sad": "\u0635", - "/sad.fina": "\uFEBA", - "/sad.init": "\uFEBB", - "/sad.init_alefmaksura.fina": "\uFD05", - "/sad.init_hah.fina": "\uFC20", - "/sad.init_hah.medi": "\uFCB1", - "/sad.init_hah.medi_hah.medi": "\uFD65", - "/sad.init_khah.medi": "\uFCB2", - "/sad.init_meem.fina": "\uFC21", - "/sad.init_meem.medi": "\uFCB3", - "/sad.init_meem.medi_meem.medi": "\uFDC5", - "/sad.init_reh.fina": "\uFD0F", - "/sad.init_yeh.fina": "\uFD06", - "/sad.isol": "\uFEB9", - "/sad.medi": "\uFEBC", - "/sad.medi_alefmaksura.fina": "\uFD21", - "/sad.medi_hah.medi_hah.fina": "\uFD64", - "/sad.medi_hah.medi_yeh.fina": "\uFDA9", - "/sad.medi_meem.medi_meem.fina": "\uFD66", - "/sad.medi_reh.fina": "\uFD2B", - "/sad.medi_yeh.fina": "\uFD22", - "/sad_lam_alefmaksuraabove": "\u06D6", - "/sadarabic": "\u0635", - "/sadeva": "\u0938", - "/sadfinalarabic": "\uFEBA", - "/sadinitialarabic": "\uFEBB", - "/sadmedialarabic": "\uFEBC", - "/sadthreedotsabove": "\u069E", - "/sadtwodotsbelow": "\u069D", - "/sagittarius": "\u2650", - "/sagujarati": "\u0AB8", - "/sagurmukhi": 
"\u0A38", - "/sahiragana": "\u3055", - "/saikurusquare": "\u331F", - "/sailboat": "\u26F5", - "/sakatakana": "\u30B5", - "/sakatakanahalfwidth": "\uFF7B", - "/sakeBottleAndCup": "\u1F376", - "/sallallahoualayhewasallamarabic": "\uFDFA", - "/saltillo": "\uA78C", - "/saltire": "\u2613", - "/samahaprana": "\uA9B0", - "/samekh": "\u05E1", - "/samekh:hb": "\u05E1", - "/samekhdagesh": "\uFB41", - "/samekhdageshhebrew": "\uFB41", - "/samekhhebrew": "\u05E1", - "/samekhwithdagesh:hb": "\uFB41", - "/sampi": "\u03E1", - "/sampiarchaic": "\u0373", - "/samurda": "\uA9AF", - "/samvat": "\u0604", - "/san": "\u03FB", - "/santiimusquare": "\u3320", - "/saraaathai": "\u0E32", - "/saraaethai": "\u0E41", - "/saraaimaimalaithai": "\u0E44", - "/saraaimaimuanthai": "\u0E43", - "/saraamthai": "\u0E33", - "/saraathai": "\u0E30", - "/saraethai": "\u0E40", - "/saraiileftthai": "\uF886", - "/saraiithai": "\u0E35", - "/saraileftthai": "\uF885", - "/saraithai": "\u0E34", - "/saraothai": "\u0E42", - "/saraueeleftthai": "\uF888", - "/saraueethai": "\u0E37", - "/saraueleftthai": "\uF887", - "/sarauethai": "\u0E36", - "/sarauthai": "\u0E38", - "/sarauuthai": "\u0E39", - "/satellite": "\u1F6F0", - "/satelliteAntenna": "\u1F4E1", - "/saturn": "\u2644", - "/saxophone": "\u1F3B7", - "/sbopomofo": "\u3119", - "/scales": "\u2696", - "/scanninehorizontal": "\u23BD", - "/scanonehorizontal": "\u23BA", - "/scansevenhorizontal": "\u23BC", - "/scanthreehorizontal": "\u23BB", - "/scaron": "\u0161", - "/scarondot": "\u1E67", - "/scarondotaccent": "\u1E67", - "/scedilla": "\u015F", - "/school": "\u1F3EB", - "/schoolSatchel": "\u1F392", - "/schoolideographiccircled": "\u3246", - "/schwa": "\u0259", - "/schwa.inferior": "\u2094", - "/schwacyr": "\u04D9", - "/schwacyrillic": "\u04D9", - "/schwadieresiscyr": "\u04DB", - "/schwadieresiscyrillic": "\u04DB", - "/schwahook": "\u025A", - "/scircle": "\u24E2", - "/scircumflex": "\u015D", - "/scommaaccent": "\u0219", - "/scooter": "\u1F6F4", - "/scorpius": "\u264F", - 
"/screen": "\u1F5B5", - "/scroll": "\u1F4DC", - "/scruple": "\u2108", - "/sdot": "\u1E61", - "/sdotaccent": "\u1E61", - "/sdotbelow": "\u1E63", - "/sdotbelowdotabove": "\u1E69", - "/sdotbelowdotaccent": "\u1E69", - "/seagullbelowcmb": "\u033C", - "/seat": "\u1F4BA", - "/secirclekatakana": "\u32DD", - "/second": "\u2033", - "/secondreversed": "\u2036", - "/secondscreensquare": "\u1F19C", - "/secondtonechinese": "\u02CA", - "/secretideographiccircled": "\u3299", - "/section": "\u00A7", - "/sectionsignhalftop": "\u2E39", - "/sector": "\u2314", - "/seeNoEvilMonkey": "\u1F648", - "/seedling": "\u1F331", - "/seen": "\u0633", - "/seen.fina": "\uFEB2", - "/seen.init": "\uFEB3", - "/seen.init_alefmaksura.fina": "\uFCFB", - "/seen.init_hah.fina": "\uFC1D", - "/seen.init_hah.medi": "\uFCAE", - "/seen.init_hah.medi_jeem.medi": "\uFD5C", - "/seen.init_heh.medi": "\uFD31", - "/seen.init_jeem.fina": "\uFC1C", - "/seen.init_jeem.medi": "\uFCAD", - "/seen.init_jeem.medi_hah.medi": "\uFD5D", - "/seen.init_khah.fina": "\uFC1E", - "/seen.init_khah.medi": "\uFCAF", - "/seen.init_meem.fina": "\uFC1F", - "/seen.init_meem.medi": "\uFCB0", - "/seen.init_meem.medi_hah.medi": "\uFD60", - "/seen.init_meem.medi_jeem.medi": "\uFD61", - "/seen.init_meem.medi_meem.medi": "\uFD63", - "/seen.init_reh.fina": "\uFD0E", - "/seen.init_yeh.fina": "\uFCFC", - "/seen.isol": "\uFEB1", - "/seen.medi": "\uFEB4", - "/seen.medi_alefmaksura.fina": "\uFD17", - "/seen.medi_hah.medi": "\uFD35", - "/seen.medi_heh.medi": "\uFCE8", - "/seen.medi_jeem.medi": "\uFD34", - "/seen.medi_jeem.medi_alefmaksura.fina": "\uFD5E", - "/seen.medi_khah.medi": "\uFD36", - "/seen.medi_khah.medi_alefmaksura.fina": "\uFDA8", - "/seen.medi_khah.medi_yeh.fina": "\uFDC6", - "/seen.medi_meem.medi": "\uFCE7", - "/seen.medi_meem.medi_hah.fina": "\uFD5F", - "/seen.medi_meem.medi_meem.fina": "\uFD62", - "/seen.medi_reh.fina": "\uFD2A", - "/seen.medi_yeh.fina": "\uFD18", - "/seenDigitFourAbove": "\u077D", - "/seenFourDotsAbove": "\u075C", - 
"/seenInvertedV": "\u077E", - "/seenSmallTahTwoDots": "\u0770", - "/seenTwoDotsVerticallyAbove": "\u076D", - "/seenabove": "\u06DC", - "/seenarabic": "\u0633", - "/seendotbelowdotabove": "\u069A", - "/seenfinalarabic": "\uFEB2", - "/seeninitialarabic": "\uFEB3", - "/seenlow": "\u06E3", - "/seenmedialarabic": "\uFEB4", - "/seenthreedotsbelow": "\u069B", - "/seenthreedotsbelowthreedotsabove": "\u069C", - "/segment": "\u2313", - "/segol": "\u05B6", - "/segol13": "\u05B6", - "/segol1f": "\u05B6", - "/segol2c": "\u05B6", - "/segol:hb": "\u05B6", - "/segolhebrew": "\u05B6", - "/segolnarrowhebrew": "\u05B6", - "/segolquarterhebrew": "\u05B6", - "/segolta:hb": "\u0592", - "/segoltahebrew": "\u0592", - "/segolwidehebrew": "\u05B6", - "/seharmenian": "\u057D", - "/sehiragana": "\u305B", - "/sekatakana": "\u30BB", - "/sekatakanahalfwidth": "\uFF7E", - "/selfideographicparen": "\u3242", - "/semicolon": "\u003B", - "/semicolonarabic": "\u061B", - "/semicolonmonospace": "\uFF1B", - "/semicolonreversed": "\u204F", - "/semicolonsmall": "\uFE54", - "/semicolonunderlinefunc": "\u236E", - "/semidirectproductleft": "\u22CB", - "/semidirectproductright": "\u22CC", - "/semisextile": "\u26BA", - "/semisoftcyr": "\u048D", - "/semivoicedmarkkana": "\u309C", - "/semivoicedmarkkanahalfwidth": "\uFF9F", - "/sentisquare": "\u3322", - "/sentosquare": "\u3323", - "/septembertelegraph": "\u32C8", - "/sersetdblup": "\u22D1", - "/sersetnotequalup": "\u228B", - "/servicemark": "\u2120", - "/sesamedot": "\uFE45", - "/sesquiquadrate": "\u26BC", - "/setminus": "\u2216", - "/seven": "\u0037", - "/seven.inferior": "\u2087", - "/seven.roman": "\u2166", - "/seven.romansmall": "\u2176", - "/seven.superior": "\u2077", - "/sevenarabic": "\u0667", - "/sevenbengali": "\u09ED", - "/sevencircle": "\u2466", - "/sevencircledbl": "\u24FB", - "/sevencircleinversesansserif": "\u2790", - "/sevencomma": "\u1F108", - "/sevendeva": "\u096D", - "/seveneighths": "\u215E", - "/sevenfar": "\u06F7", - "/sevengujarati": 
"\u0AED", - "/sevengurmukhi": "\u0A6D", - "/sevenhackarabic": "\u0667", - "/sevenhangzhou": "\u3027", - "/sevenideographiccircled": "\u3286", - "/sevenideographicparen": "\u3226", - "/seveninferior": "\u2087", - "/sevenmonospace": "\uFF17", - "/sevenoldstyle": "\uF737", - "/sevenparen": "\u247A", - "/sevenparenthesized": "\u247A", - "/sevenperiod": "\u248E", - "/sevenpersian": "\u06F7", - "/sevenpointonesquare": "\u1F1A1", - "/sevenroman": "\u2176", - "/sevensuperior": "\u2077", - "/seventeencircle": "\u2470", - "/seventeencircleblack": "\u24F1", - "/seventeenparen": "\u2484", - "/seventeenparenthesized": "\u2484", - "/seventeenperiod": "\u2498", - "/seventhai": "\u0E57", - "/seventycirclesquare": "\u324E", - "/sextile": "\u26B9", - "/sfthyphen": "\u00AD", - "/shaarmenian": "\u0577", - "/shabengali": "\u09B6", - "/shacyr": "\u0448", - "/shacyrillic": "\u0448", - "/shaddaAlefIsol": "\uFC63", - "/shaddaDammaIsol": "\uFC61", - "/shaddaDammaMedi": "\uFCF3", - "/shaddaDammatanIsol": "\uFC5E", - "/shaddaFathaIsol": "\uFC60", - "/shaddaFathaMedi": "\uFCF2", - "/shaddaIsol": "\uFE7C", - "/shaddaKasraIsol": "\uFC62", - "/shaddaKasraMedi": "\uFCF4", - "/shaddaKasratanIsol": "\uFC5F", - "/shaddaMedi": "\uFE7D", - "/shaddaarabic": "\u0651", - "/shaddadammaarabic": "\uFC61", - "/shaddadammatanarabic": "\uFC5E", - "/shaddafathaarabic": "\uFC60", - "/shaddafathatanarabic": "\u0651", - "/shaddakasraarabic": "\uFC62", - "/shaddakasratanarabic": "\uFC5F", - "/shade": "\u2592", - "/shadedark": "\u2593", - "/shadelight": "\u2591", - "/shademedium": "\u2592", - "/shadeva": "\u0936", - "/shagujarati": "\u0AB6", - "/shagurmukhi": "\u0A36", - "/shalshelet:hb": "\u0593", - "/shalshelethebrew": "\u0593", - "/shamrock": "\u2618", - "/shavedIce": "\u1F367", - "/shbopomofo": "\u3115", - "/shchacyr": "\u0449", - "/shchacyrillic": "\u0449", - "/sheen": "\u0634", - "/sheen.fina": "\uFEB6", - "/sheen.init": "\uFEB7", - "/sheen.init_alefmaksura.fina": "\uFCFD", - "/sheen.init_hah.fina": "\uFD0A", - 
"/sheen.init_hah.medi": "\uFD2E", - "/sheen.init_hah.medi_meem.medi": "\uFD68", - "/sheen.init_heh.medi": "\uFD32", - "/sheen.init_jeem.fina": "\uFD09", - "/sheen.init_jeem.medi": "\uFD2D", - "/sheen.init_khah.fina": "\uFD0B", - "/sheen.init_khah.medi": "\uFD2F", - "/sheen.init_meem.fina": "\uFD0C", - "/sheen.init_meem.medi": "\uFD30", - "/sheen.init_meem.medi_khah.medi": "\uFD6B", - "/sheen.init_meem.medi_meem.medi": "\uFD6D", - "/sheen.init_reh.fina": "\uFD0D", - "/sheen.init_yeh.fina": "\uFCFE", - "/sheen.isol": "\uFEB5", - "/sheen.medi": "\uFEB8", - "/sheen.medi_alefmaksura.fina": "\uFD19", - "/sheen.medi_hah.fina": "\uFD26", - "/sheen.medi_hah.medi": "\uFD38", - "/sheen.medi_hah.medi_meem.fina": "\uFD67", - "/sheen.medi_hah.medi_yeh.fina": "\uFDAA", - "/sheen.medi_heh.medi": "\uFCEA", - "/sheen.medi_jeem.fina": "\uFD25", - "/sheen.medi_jeem.medi": "\uFD37", - "/sheen.medi_jeem.medi_yeh.fina": "\uFD69", - "/sheen.medi_khah.fina": "\uFD27", - "/sheen.medi_khah.medi": "\uFD39", - "/sheen.medi_meem.fina": "\uFD28", - "/sheen.medi_meem.medi": "\uFCE9", - "/sheen.medi_meem.medi_khah.fina": "\uFD6A", - "/sheen.medi_meem.medi_meem.fina": "\uFD6C", - "/sheen.medi_reh.fina": "\uFD29", - "/sheen.medi_yeh.fina": "\uFD1A", - "/sheenarabic": "\u0634", - "/sheendotbelow": "\u06FA", - "/sheenfinalarabic": "\uFEB6", - "/sheeninitialarabic": "\uFEB7", - "/sheenmedialarabic": "\uFEB8", - "/sheep": "\u1F411", - "/sheicoptic": "\u03E3", - "/shelfmod": "\u02FD", - "/shelfopenmod": "\u02FE", - "/sheqel": "\u20AA", - "/sheqelhebrew": "\u20AA", - "/sheva": "\u05B0", - "/sheva115": "\u05B0", - "/sheva15": "\u05B0", - "/sheva22": "\u05B0", - "/sheva2e": "\u05B0", - "/sheva:hb": "\u05B0", - "/shevahebrew": "\u05B0", - "/shevanarrowhebrew": "\u05B0", - "/shevaquarterhebrew": "\u05B0", - "/shevawidehebrew": "\u05B0", - "/shhacyr": "\u04BB", - "/shhacyrillic": "\u04BB", - "/shhatailcyr": "\u0527", - "/shield": "\u1F6E1", - "/shimacoptic": "\u03ED", - "/shin": "\u05E9", - "/shin:hb": 
"\u05E9", - "/shinDot:hb": "\u05C1", - "/shindagesh": "\uFB49", - "/shindageshhebrew": "\uFB49", - "/shindageshshindot": "\uFB2C", - "/shindageshshindothebrew": "\uFB2C", - "/shindageshsindot": "\uFB2D", - "/shindageshsindothebrew": "\uFB2D", - "/shindothebrew": "\u05C1", - "/shinhebrew": "\u05E9", - "/shinshindot": "\uFB2A", - "/shinshindothebrew": "\uFB2A", - "/shinsindot": "\uFB2B", - "/shinsindothebrew": "\uFB2B", - "/shintoshrine": "\u26E9", - "/shinwithdagesh:hb": "\uFB49", - "/shinwithdageshandshinDot:hb": "\uFB2C", - "/shinwithdageshandsinDot:hb": "\uFB2D", - "/shinwithshinDot:hb": "\uFB2A", - "/shinwithsinDot:hb": "\uFB2B", - "/ship": "\u1F6A2", - "/sho": "\u03F8", - "/shoejotupfunc": "\u235D", - "/shoestiledownfunc": "\u2366", - "/shoestileleftfunc": "\u2367", - "/shogipieceblack": "\u2617", - "/shogipiecewhite": "\u2616", - "/shook": "\u0282", - "/shootingStar": "\u1F320", - "/shoppingBags": "\u1F6CD", - "/shoppingTrolley": "\u1F6D2", - "/shortcake": "\u1F370", - "/shortequalsmod": "\uA78A", - "/shortoverlongmetrical": "\u23D3", - "/shoulderedopenbox": "\u237D", - "/shower": "\u1F6BF", - "/shvsquare": "\u1F1AA", - "/sicirclekatakana": "\u32DB", - "/sidewaysBlackDownPointingIndex": "\u1F5A1", - "/sidewaysBlackLeftPointingIndex": "\u1F59A", - "/sidewaysBlackRightPointingIndex": "\u1F59B", - "/sidewaysBlackUpPointingIndex": "\u1F5A0", - "/sidewaysWhiteDownPointingIndex": "\u1F59F", - "/sidewaysWhiteLeftPointingIndex": "\u1F598", - "/sidewaysWhiteRightPointingIndex": "\u1F599", - "/sidewaysWhiteUpPointingIndex": "\u1F59E", - "/sigma": "\u03C3", - "/sigma1": "\u03C2", - "/sigmafinal": "\u03C2", - "/sigmalunatedottedreversedsymbol": "\u037D", - "/sigmalunatedottedsymbol": "\u037C", - "/sigmalunatereversedsymbol": "\u037B", - "/sigmalunatesymbol": "\u03F2", - "/sigmalunatesymbolgreek": "\u03F2", - "/sihiragana": "\u3057", - "/sikatakana": "\u30B7", - "/sikatakanahalfwidth": "\uFF7C", - "/silhouetteOfJapan": "\u1F5FE", - "/siluqhebrew": "\u05BD", - 
"/siluqlefthebrew": "\u05BD", - "/similar": "\u223C", - "/sinDot:hb": "\u05C2", - "/sindothebrew": "\u05C2", - "/sinewave": "\u223F", - "/sinh:a": "\u0D85", - "/sinh:aa": "\u0D86", - "/sinh:aae": "\u0D88", - "/sinh:aaesign": "\u0DD1", - "/sinh:aasign": "\u0DCF", - "/sinh:ae": "\u0D87", - "/sinh:aesign": "\u0DD0", - "/sinh:ai": "\u0D93", - "/sinh:aisign": "\u0DDB", - "/sinh:anusvara": "\u0D82", - "/sinh:au": "\u0D96", - "/sinh:ausign": "\u0DDE", - "/sinh:ba": "\u0DB6", - "/sinh:bha": "\u0DB7", - "/sinh:ca": "\u0DA0", - "/sinh:cha": "\u0DA1", - "/sinh:da": "\u0DAF", - "/sinh:dda": "\u0DA9", - "/sinh:ddha": "\u0DAA", - "/sinh:dha": "\u0DB0", - "/sinh:e": "\u0D91", - "/sinh:ee": "\u0D92", - "/sinh:eesign": "\u0DDA", - "/sinh:esign": "\u0DD9", - "/sinh:fa": "\u0DC6", - "/sinh:ga": "\u0D9C", - "/sinh:gha": "\u0D9D", - "/sinh:ha": "\u0DC4", - "/sinh:i": "\u0D89", - "/sinh:ii": "\u0D8A", - "/sinh:iisign": "\u0DD3", - "/sinh:isign": "\u0DD2", - "/sinh:ja": "\u0DA2", - "/sinh:jha": "\u0DA3", - "/sinh:jnya": "\u0DA5", - "/sinh:ka": "\u0D9A", - "/sinh:kha": "\u0D9B", - "/sinh:kunddaliya": "\u0DF4", - "/sinh:la": "\u0DBD", - "/sinh:litheight": "\u0DEE", - "/sinh:lithfive": "\u0DEB", - "/sinh:lithfour": "\u0DEA", - "/sinh:lithnine": "\u0DEF", - "/sinh:lithone": "\u0DE7", - "/sinh:lithseven": "\u0DED", - "/sinh:lithsix": "\u0DEC", - "/sinh:liththree": "\u0DE9", - "/sinh:lithtwo": "\u0DE8", - "/sinh:lithzero": "\u0DE6", - "/sinh:lla": "\u0DC5", - "/sinh:llvocal": "\u0D90", - "/sinh:llvocalsign": "\u0DF3", - "/sinh:lvocal": "\u0D8F", - "/sinh:lvocalsign": "\u0DDF", - "/sinh:ma": "\u0DB8", - "/sinh:mba": "\u0DB9", - "/sinh:na": "\u0DB1", - "/sinh:nda": "\u0DB3", - "/sinh:nga": "\u0D9E", - "/sinh:nna": "\u0DAB", - "/sinh:nndda": "\u0DAC", - "/sinh:nnga": "\u0D9F", - "/sinh:nya": "\u0DA4", - "/sinh:nyja": "\u0DA6", - "/sinh:o": "\u0D94", - "/sinh:oo": "\u0D95", - "/sinh:oosign": "\u0DDD", - "/sinh:osign": "\u0DDC", - "/sinh:pa": "\u0DB4", - "/sinh:pha": "\u0DB5", - "/sinh:ra": 
"\u0DBB", - "/sinh:rrvocal": "\u0D8E", - "/sinh:rrvocalsign": "\u0DF2", - "/sinh:rvocal": "\u0D8D", - "/sinh:rvocalsign": "\u0DD8", - "/sinh:sa": "\u0DC3", - "/sinh:sha": "\u0DC1", - "/sinh:ssa": "\u0DC2", - "/sinh:ta": "\u0DAD", - "/sinh:tha": "\u0DAE", - "/sinh:tta": "\u0DA7", - "/sinh:ttha": "\u0DA8", - "/sinh:u": "\u0D8B", - "/sinh:usign": "\u0DD4", - "/sinh:uu": "\u0D8C", - "/sinh:uusign": "\u0DD6", - "/sinh:va": "\u0DC0", - "/sinh:virama": "\u0DCA", - "/sinh:visarga": "\u0D83", - "/sinh:ya": "\u0DBA", - "/sinologicaldot": "\uA78F", - "/sinsular": "\uA785", - "/siosacirclekorean": "\u3274", - "/siosaparenkorean": "\u3214", - "/sioscieuckorean": "\u317E", - "/sioscirclekorean": "\u3266", - "/sioskiyeokkorean": "\u317A", - "/sioskorean": "\u3145", - "/siosnieunkorean": "\u317B", - "/siosparenkorean": "\u3206", - "/siospieupkorean": "\u317D", - "/siostikeutkorean": "\u317C", - "/siringusquare": "\u3321", - "/six": "\u0036", - "/six.inferior": "\u2086", - "/six.roman": "\u2165", - "/six.romansmall": "\u2175", - "/six.superior": "\u2076", - "/sixPointedStarMiddleDot": "\u1F52F", - "/sixarabic": "\u0666", - "/sixbengali": "\u09EC", - "/sixcircle": "\u2465", - "/sixcircledbl": "\u24FA", - "/sixcircleinversesansserif": "\u278F", - "/sixcomma": "\u1F107", - "/sixdeva": "\u096C", - "/sixdotsvertical": "\u2E3D", - "/sixfar": "\u06F6", - "/sixgujarati": "\u0AEC", - "/sixgurmukhi": "\u0A6C", - "/sixhackarabic": "\u0666", - "/sixhangzhou": "\u3026", - "/sixideographiccircled": "\u3285", - "/sixideographicparen": "\u3225", - "/sixinferior": "\u2086", - "/sixlateform.roman": "\u2185", - "/sixmonospace": "\uFF16", - "/sixoldstyle": "\uF736", - "/sixparen": "\u2479", - "/sixparenthesized": "\u2479", - "/sixperemspace": "\u2006", - "/sixperiod": "\u248D", - "/sixpersian": "\u06F6", - "/sixroman": "\u2175", - "/sixsuperior": "\u2076", - "/sixteencircle": "\u246F", - "/sixteencircleblack": "\u24F0", - "/sixteencurrencydenominatorbengali": "\u09F9", - "/sixteenparen": "\u2483", - 
"/sixteenparenthesized": "\u2483", - "/sixteenperiod": "\u2497", - "/sixthai": "\u0E56", - "/sixtycirclesquare": "\u324D", - "/sixtypsquare": "\u1F1A3", - "/sjekomicyr": "\u050D", - "/skiAndSkiBoot": "\u1F3BF", - "/skier": "\u26F7", - "/skull": "\u1F480", - "/skullcrossbones": "\u2620", - "/slash": "\u002F", - "/slashbarfunc": "\u233F", - "/slashmonospace": "\uFF0F", - "/sled": "\u1F6F7", - "/sleeping": "\u1F4A4", - "/sleepingAccommodation": "\u1F6CC", - "/sleepingFace": "\u1F634", - "/sleepyFace": "\u1F62A", - "/sleuthOrSpy": "\u1F575", - "/sliceOfPizza": "\u1F355", - "/slightlyFrowningFace": "\u1F641", - "/slightlySmilingFace": "\u1F642", - "/slong": "\u017F", - "/slongdotaccent": "\u1E9B", - "/slope": "\u2333", - "/slotMachine": "\u1F3B0", - "/smallAirplane": "\u1F6E9", - "/smallBlueDiamond": "\u1F539", - "/smallOrangeDiamond": "\u1F538", - "/smallRedTriangleDOwn": "\u1F53D", - "/smallRedTriangleUp": "\u1F53C", - "/smile": "\u2323", - "/smileface": "\u263A", - "/smilingCatFaceWithHeartShapedEyes": "\u1F63B", - "/smilingCatFaceWithOpenMouth": "\u1F63A", - "/smilingFaceWithHalo": "\u1F607", - "/smilingFaceWithHeartShapedEyes": "\u1F60D", - "/smilingFaceWithHorns": "\u1F608", - "/smilingFaceWithOpenMouth": "\u1F603", - "/smilingFaceWithOpenMouthAndColdSweat": "\u1F605", - "/smilingFaceWithOpenMouthAndSmilingEyes": "\u1F604", - "/smilingFaceWithOpenMouthAndTightlyClosedEyes": "\u1F606", - "/smilingFaceWithSmilingEyes": "\u1F60A", - "/smilingFaceWithSunglasses": "\u1F60E", - "/smilingfaceblack": "\u263B", - "/smilingfacewhite": "\u263A", - "/smirkingFace": "\u1F60F", - "/smll:ampersand": "\uFE60", - "/smll:asterisk": "\uFE61", - "/smll:backslash": "\uFE68", - "/smll:braceleft": "\uFE5B", - "/smll:braceright": "\uFE5C", - "/smll:colon": "\uFE55", - "/smll:comma": "\uFE50", - "/smll:dollar": "\uFE69", - "/smll:emdash": "\uFE58", - "/smll:equal": "\uFE66", - "/smll:exclam": "\uFE57", - "/smll:greater": "\uFE65", - "/smll:hyphen": "\uFE63", - "/smll:ideographiccomma": 
"\uFE51", - "/smll:less": "\uFE64", - "/smll:numbersign": "\uFE5F", - "/smll:parenthesisleft": "\uFE59", - "/smll:parenthesisright": "\uFE5A", - "/smll:percent": "\uFE6A", - "/smll:period": "\uFE52", - "/smll:plus": "\uFE62", - "/smll:question": "\uFE56", - "/smll:semicolon": "\uFE54", - "/smll:tortoiseshellbracketleft": "\uFE5D", - "/smll:tortoiseshellbracketright": "\uFE5E", - "/smoking": "\u1F6AC", - "/smonospace": "\uFF53", - "/snail": "\u1F40C", - "/snake": "\u1F40D", - "/snowboarder": "\u1F3C2", - "/snowcappedMountain": "\u1F3D4", - "/snowman": "\u2603", - "/snowmanblack": "\u26C7", - "/snowmanoutsnow": "\u26C4", - "/sobliquestroke": "\uA7A9", - "/soccerball": "\u26BD", - "/societyideographiccircled": "\u3293", - "/societyideographicparen": "\u3233", - "/socirclekatakana": "\u32DE", - "/sofPasuq:hb": "\u05C3", - "/sofpasuqhebrew": "\u05C3", - "/softIceCream": "\u1F366", - "/softShellFloppyDisk": "\u1F5AC", - "/softcyr": "\u044C", - "/softhyphen": "\u00AD", - "/softsigncyrillic": "\u044C", - "/softwarefunction": "\u2394", - "/sohiragana": "\u305D", - "/sokatakana": "\u30BD", - "/sokatakanahalfwidth": "\uFF7F", - "/soliduslongoverlaycmb": "\u0338", - "/solidusshortoverlaycmb": "\u0337", - "/solidussubsetreversepreceding": "\u27C8", - "/solidussupersetpreceding": "\u27C9", - "/soonRightwardsArrowAbove": "\u1F51C", - "/sorusithai": "\u0E29", - "/sosalathai": "\u0E28", - "/sosothai": "\u0E0B", - "/sossquare": "\u1F198", - "/sosuathai": "\u0E2A", - "/soundcopyright": "\u2117", - "/space": "\u0020", - "/spacehackarabic": "\u0020", - "/spade": "\u2660", - "/spadeblack": "\u2660", - "/spadesuitblack": "\u2660", - "/spadesuitwhite": "\u2664", - "/spadewhite": "\u2664", - "/spaghetti": "\u1F35D", - "/sparen": "\u24AE", - "/sparenthesized": "\u24AE", - "/sparklingHeart": "\u1F496", - "/speakNoEvilMonkey": "\u1F64A", - "/speaker": "\u1F508", - "/speakerCancellationStroke": "\u1F507", - "/speakerOneSoundWave": "\u1F509", - "/speakerThreeSoundWaves": "\u1F50A", - 
"/speakingHeadInSilhouette": "\u1F5E3", - "/specialideographiccircled": "\u3295", - "/specialideographicparen": "\u3235", - "/speechBalloon": "\u1F4AC", - "/speedboat": "\u1F6A4", - "/spesmilo": "\u20B7", - "/sphericalangle": "\u2222", - "/spider": "\u1F577", - "/spiderWeb": "\u1F578", - "/spiralCalendarPad": "\u1F5D3", - "/spiralNotePad": "\u1F5D2", - "/spiralShell": "\u1F41A", - "/splashingSweat": "\u1F4A6", - "/sportsMedal": "\u1F3C5", - "/spoutingWhale": "\u1F433", - "/sppl:tildevertical": "\u2E2F", - "/squarebelowcmb": "\u033B", - "/squareblack": "\u25A0", - "/squarebracketleftvertical": "\uFE47", - "/squarebracketrightvertical": "\uFE48", - "/squarecap": "\u2293", - "/squarecc": "\u33C4", - "/squarecm": "\u339D", - "/squarecup": "\u2294", - "/squareddotoperator": "\u22A1", - "/squarediagonalcrosshatchfill": "\u25A9", - "/squaredj": "\u1F190", - "/squaredkey": "\u26BF", - "/squaredminus": "\u229F", - "/squaredplus": "\u229E", - "/squaredsaltire": "\u26DD", - "/squaredtimes": "\u22A0", - "/squarefourcorners": "\u26F6", - "/squarehalfleftblack": "\u25E7", - "/squarehalfrightblack": "\u25E8", - "/squarehorizontalfill": "\u25A4", - "/squareimage": "\u228F", - "/squareimageorequal": "\u2291", - "/squareimageornotequal": "\u22E4", - "/squarekg": "\u338F", - "/squarekm": "\u339E", - "/squarekmcapital": "\u33CE", - "/squareln": "\u33D1", - "/squarelog": "\u33D2", - "/squarelowerdiagonalhalfrightblack": "\u25EA", - "/squaremediumblack": "\u25FC", - "/squaremediumwhite": "\u25FB", - "/squaremg": "\u338E", - "/squaremil": "\u33D5", - "/squaremm": "\u339C", - "/squaremsquared": "\u33A1", - "/squareoriginal": "\u2290", - "/squareoriginalorequal": "\u2292", - "/squareoriginalornotequal": "\u22E5", - "/squareorthogonalcrosshatchfill": "\u25A6", - "/squareraised": "\u2E0B", - "/squaresmallblack": "\u25AA", - "/squaresmallmediumblack": "\u25FE", - "/squaresmallmediumwhite": "\u25FD", - "/squaresmallwhite": "\u25AB", - "/squareupperdiagonalhalfleftblack": "\u25E9", - 
"/squareupperlefttolowerrightfill": "\u25A7", - "/squareupperrighttolowerleftfill": "\u25A8", - "/squareverticalfill": "\u25A5", - "/squarewhite": "\u25A1", - "/squarewhitebisectinglinevertical": "\u25EB", - "/squarewhitelowerquadrantleft": "\u25F1", - "/squarewhitelowerquadrantright": "\u25F2", - "/squarewhiteround": "\u25A2", - "/squarewhiteupperquadrantleft": "\u25F0", - "/squarewhiteupperquadrantright": "\u25F3", - "/squarewhitewithsmallblack": "\u25A3", - "/squarewhitewithsquaresmallblack": "\u25A3", - "/squishquadfunc": "\u2337", - "/srfullwidth": "\u33DB", - "/srsquare": "\u33DB", - "/ssabengali": "\u09B7", - "/ssadeva": "\u0937", - "/ssagujarati": "\u0AB7", - "/ssangcieuckorean": "\u3149", - "/ssanghieuhkorean": "\u3185", - "/ssangieungkorean": "\u3180", - "/ssangkiyeokkorean": "\u3132", - "/ssangnieunkorean": "\u3165", - "/ssangpieupkorean": "\u3143", - "/ssangsioskorean": "\u3146", - "/ssangtikeutkorean": "\u3138", - "/ssuperior": "\uF6F2", - "/ssupmod": "\u02E2", - "/sswashtail": "\u023F", - "/stackedcommadbl": "\u2E49", - "/stadium": "\u1F3DF", - "/staffofaesculapius": "\u2695", - "/staffofhermes": "\u269A", - "/stampedEnvelope": "\u1F583", - "/star": "\u22C6", - "/starblack": "\u2605", - "/starcrescent": "\u262A", - "/stardiaeresisfunc": "\u2363", - "/starequals": "\u225B", - "/staroperator": "\u22C6", - "/staroutlinedwhite": "\u269D", - "/starwhite": "\u2606", - "/station": "\u1F689", - "/statueOfLiberty": "\u1F5FD", - "/steamLocomotive": "\u1F682", - "/steamingBowl": "\u1F35C", - "/stenographicfullstop": "\u2E3C", - "/sterling": "\u00A3", - "/sterlingmonospace": "\uFFE1", - "/stigma": "\u03DB", - "/stiletildefunc": "\u236D", - "/stockChart": "\u1F5E0", - "/stockideographiccircled": "\u3291", - "/stockideographicparen": "\u3231", - "/stopabove": "\u06EB", - "/stopbelow": "\u06EA", - "/straightRuler": "\u1F4CF", - "/straightness": "\u23E4", - "/strawberry": "\u1F353", - "/stresslowtonemod": "\uA721", - "/stresstonemod": "\uA720", - 
"/strictlyequivalent": "\u2263", - "/strokelongoverlaycmb": "\u0336", - "/strokeshortoverlaycmb": "\u0335", - "/studioMicrophone": "\u1F399", - "/studyideographiccircled": "\u32AB", - "/studyideographicparen": "\u323B", - "/stupa": "\u1F6D3", - "/subscriptalef": "\u0656", - "/subset": "\u2282", - "/subsetdbl": "\u22D0", - "/subsetnotequal": "\u228A", - "/subsetorequal": "\u2286", - "/succeeds": "\u227B", - "/succeedsbutnotequivalent": "\u22E9", - "/succeedsorequal": "\u227D", - "/succeedsorequivalent": "\u227F", - "/succeedsunderrelation": "\u22B1", - "/suchthat": "\u220B", - "/sucirclekatakana": "\u32DC", - "/suhiragana": "\u3059", - "/suitableideographiccircled": "\u329C", - "/sukatakana": "\u30B9", - "/sukatakanahalfwidth": "\uFF7D", - "/sukumendutvowel": "\uA9B9", - "/sukunIsol": "\uFE7E", - "/sukunMedi": "\uFE7F", - "/sukunarabic": "\u0652", - "/sukuvowel": "\uA9B8", - "/summation": "\u2211", - "/summationbottom": "\u23B3", - "/summationdblstruck": "\u2140", - "/summationtop": "\u23B2", - "/sun": "\u263C", - "/sunFace": "\u1F31E", - "/sunbehindcloud": "\u26C5", - "/sunflower": "\u1F33B", - "/sunideographiccircled": "\u3290", - "/sunideographicparen": "\u3230", - "/sunraysblack": "\u2600", - "/sunrayswhite": "\u263C", - "/sunrise": "\u1F305", - "/sunriseOverMountains": "\u1F304", - "/sunsetOverBuildings": "\u1F307", - "/superset": "\u2283", - "/supersetnotequal": "\u228B", - "/supersetorequal": "\u2287", - "/superviseideographiccircled": "\u32AC", - "/superviseideographicparen": "\u323C", - "/surfer": "\u1F3C4", - "/sushi": "\u1F363", - "/suspensionRailway": "\u1F69F", - "/suspensiondbl": "\u2E44", - "/svfullwidth": "\u33DC", - "/svsquare": "\u33DC", - "/swatchtop": "\u23F1", - "/swimmer": "\u1F3CA", - "/swungdash": "\u2053", - "/symbolabovethreedotsabove": "\uFBB6", - "/symbolbelowthreedotsabove": "\uFBB7", - "/symboldotabove": "\uFBB2", - "/symboldotbelow": "\uFBB3", - "/symboldoubleverticalbarbelow": "\uFBBC", - "/symbolfourdotsabove": "\uFBBA", - 
"/symbolfourdotsbelow": "\uFBBB", - "/symbolpointingabovedownthreedotsabove": "\uFBB8", - "/symbolpointingbelowdownthreedotsabove": "\uFBB9", - "/symbolring": "\uFBBF", - "/symboltahabovesmall": "\uFBC0", - "/symboltahbelowsmall": "\uFBC1", - "/symboltwodotsabove": "\uFBB4", - "/symboltwodotsbelow": "\uFBB5", - "/symboltwodotsverticallyabove": "\uFBBD", - "/symboltwodotsverticallybelow": "\uFBBE", - "/symmetry": "\u232F", - "/synagogue": "\u1F54D", - "/syouwaerasquare": "\u337C", - "/syringe": "\u1F489", - "/t": "\u0074", - "/t-shirt": "\u1F455", - "/t.inferior": "\u209C", - "/tabengali": "\u09A4", - "/tableTennisPaddleAndBall": "\u1F3D3", - "/tacirclekatakana": "\u32DF", - "/tackcircleaboveup": "\u27DF", - "/tackdiaeresisupfunc": "\u2361", - "/tackdown": "\u22A4", - "/tackdownmod": "\u02D5", - "/tackjotdownfunc": "\u234E", - "/tackjotupfunc": "\u2355", - "/tackleft": "\u22A3", - "/tackleftright": "\u27DB", - "/tackoverbarupfunc": "\u2351", - "/tackright": "\u22A2", - "/tackunderlinedownfunc": "\u234A", - "/tackup": "\u22A5", - "/tackupmod": "\u02D4", - "/taco": "\u1F32E", - "/tadeva": "\u0924", - "/tagujarati": "\u0AA4", - "/tagurmukhi": "\u0A24", - "/tah": "\u0637", - "/tah.fina": "\uFEC2", - "/tah.init": "\uFEC3", - "/tah.init_alefmaksura.fina": "\uFCF5", - "/tah.init_hah.fina": "\uFC26", - "/tah.init_hah.medi": "\uFCB8", - "/tah.init_meem.fina": "\uFC27", - "/tah.init_meem.medi": "\uFD33", - "/tah.init_meem.medi_hah.medi": "\uFD72", - "/tah.init_meem.medi_meem.medi": "\uFD73", - "/tah.init_yeh.fina": "\uFCF6", - "/tah.isol": "\uFEC1", - "/tah.medi": "\uFEC4", - "/tah.medi_alefmaksura.fina": "\uFD11", - "/tah.medi_meem.medi": "\uFD3A", - "/tah.medi_meem.medi_hah.fina": "\uFD71", - "/tah.medi_meem.medi_yeh.fina": "\uFD74", - "/tah.medi_yeh.fina": "\uFD12", - "/tahabove": "\u0615", - "/taharabic": "\u0637", - "/tahfinalarabic": "\uFEC2", - "/tahinitialarabic": "\uFEC3", - "/tahiragana": "\u305F", - "/tahmedialarabic": "\uFEC4", - "/tahthreedotsabove": "\u069F", - 
"/taisyouerasquare": "\u337D", - "/takatakana": "\u30BF", - "/takatakanahalfwidth": "\uFF80", - "/takhallus": "\u0614", - "/talingvowel": "\uA9BA", - "/taml:a": "\u0B85", - "/taml:aa": "\u0B86", - "/taml:aasign": "\u0BBE", - "/taml:ai": "\u0B90", - "/taml:aisign": "\u0BC8", - "/taml:anusvarasign": "\u0B82", - "/taml:asabovesign": "\u0BF8", - "/taml:au": "\u0B94", - "/taml:aulengthmark": "\u0BD7", - "/taml:ausign": "\u0BCC", - "/taml:ca": "\u0B9A", - "/taml:creditsign": "\u0BF7", - "/taml:daysign": "\u0BF3", - "/taml:debitsign": "\u0BF6", - "/taml:e": "\u0B8E", - "/taml:ee": "\u0B8F", - "/taml:eesign": "\u0BC7", - "/taml:eight": "\u0BEE", - "/taml:esign": "\u0BC6", - "/taml:five": "\u0BEB", - "/taml:four": "\u0BEA", - "/taml:ha": "\u0BB9", - "/taml:i": "\u0B87", - "/taml:ii": "\u0B88", - "/taml:iisign": "\u0BC0", - "/taml:isign": "\u0BBF", - "/taml:ja": "\u0B9C", - "/taml:ka": "\u0B95", - "/taml:la": "\u0BB2", - "/taml:lla": "\u0BB3", - "/taml:llla": "\u0BB4", - "/taml:ma": "\u0BAE", - "/taml:monthsign": "\u0BF4", - "/taml:na": "\u0BA8", - "/taml:nga": "\u0B99", - "/taml:nine": "\u0BEF", - "/taml:nna": "\u0BA3", - "/taml:nnna": "\u0BA9", - "/taml:nya": "\u0B9E", - "/taml:o": "\u0B92", - "/taml:om": "\u0BD0", - "/taml:one": "\u0BE7", - "/taml:onehundred": "\u0BF1", - "/taml:onethousand": "\u0BF2", - "/taml:oo": "\u0B93", - "/taml:oosign": "\u0BCB", - "/taml:osign": "\u0BCA", - "/taml:pa": "\u0BAA", - "/taml:ra": "\u0BB0", - "/taml:rra": "\u0BB1", - "/taml:rupeesign": "\u0BF9", - "/taml:sa": "\u0BB8", - "/taml:seven": "\u0BED", - "/taml:sha": "\u0BB6", - "/taml:sign": "\u0BFA", - "/taml:six": "\u0BEC", - "/taml:ssa": "\u0BB7", - "/taml:ta": "\u0BA4", - "/taml:ten": "\u0BF0", - "/taml:three": "\u0BE9", - "/taml:tta": "\u0B9F", - "/taml:two": "\u0BE8", - "/taml:u": "\u0B89", - "/taml:usign": "\u0BC1", - "/taml:uu": "\u0B8A", - "/taml:uusign": "\u0BC2", - "/taml:va": "\u0BB5", - "/taml:viramasign": "\u0BCD", - "/taml:visargasign": "\u0B83", - "/taml:ya": "\u0BAF", - 
"/taml:yearsign": "\u0BF5", - "/taml:zero": "\u0BE6", - "/tamurda": "\uA9A1", - "/tanabataTree": "\u1F38B", - "/tangerine": "\u1F34A", - "/tapeCartridge": "\u1F5AD", - "/tarungvowel": "\uA9B4", - "/tatweelFathatanAbove": "\uFE71", - "/tatweelarabic": "\u0640", - "/tau": "\u03C4", - "/taurus": "\u2649", - "/tav": "\u05EA", - "/tav:hb": "\u05EA", - "/tavdages": "\uFB4A", - "/tavdagesh": "\uFB4A", - "/tavdageshhebrew": "\uFB4A", - "/tavhebrew": "\u05EA", - "/tavwide:hb": "\uFB28", - "/tavwithdagesh:hb": "\uFB4A", - "/taxi": "\u1F695", - "/tbar": "\u0167", - "/tbopomofo": "\u310A", - "/tcaron": "\u0165", - "/tccurl": "\u02A8", - "/tcedilla": "\u0163", - "/tcheh": "\u0686", - "/tcheh.fina": "\uFB7B", - "/tcheh.init": "\uFB7C", - "/tcheh.isol": "\uFB7A", - "/tcheh.medi": "\uFB7D", - "/tcheharabic": "\u0686", - "/tchehdotabove": "\u06BF", - "/tcheheh": "\u0687", - "/tcheheh.fina": "\uFB7F", - "/tcheheh.init": "\uFB80", - "/tcheheh.isol": "\uFB7E", - "/tcheheh.medi": "\uFB81", - "/tchehfinalarabic": "\uFB7B", - "/tchehinitialarabic": "\uFB7C", - "/tchehmedialarabic": "\uFB7D", - "/tchehmeeminitialarabic": "\uFB7C", - "/tcircle": "\u24E3", - "/tcircumflexbelow": "\u1E71", - "/tcommaaccent": "\u0163", - "/tcurl": "\u0236", - "/tdieresis": "\u1E97", - "/tdot": "\u1E6B", - "/tdotaccent": "\u1E6B", - "/tdotbelow": "\u1E6D", - "/teacupOutHandle": "\u1F375", - "/tear-offCalendar": "\u1F4C6", - "/tecirclekatakana": "\u32E2", - "/tecyr": "\u0442", - "/tecyrillic": "\u0442", - "/tedescendercyrillic": "\u04AD", - "/teh": "\u062A", - "/teh.fina": "\uFE96", - "/teh.init": "\uFE97", - "/teh.init_alefmaksura.fina": "\uFC0F", - "/teh.init_hah.fina": "\uFC0C", - "/teh.init_hah.medi": "\uFCA2", - "/teh.init_hah.medi_jeem.medi": "\uFD52", - "/teh.init_hah.medi_meem.medi": "\uFD53", - "/teh.init_heh.medi": "\uFCA5", - "/teh.init_jeem.fina": "\uFC0B", - "/teh.init_jeem.medi": "\uFCA1", - "/teh.init_jeem.medi_meem.medi": "\uFD50", - "/teh.init_khah.fina": "\uFC0D", - "/teh.init_khah.medi": 
"\uFCA3", - "/teh.init_khah.medi_meem.medi": "\uFD54", - "/teh.init_meem.fina": "\uFC0E", - "/teh.init_meem.medi": "\uFCA4", - "/teh.init_meem.medi_hah.medi": "\uFD56", - "/teh.init_meem.medi_jeem.medi": "\uFD55", - "/teh.init_meem.medi_khah.medi": "\uFD57", - "/teh.init_yeh.fina": "\uFC10", - "/teh.isol": "\uFE95", - "/teh.medi": "\uFE98", - "/teh.medi_alefmaksura.fina": "\uFC74", - "/teh.medi_hah.medi_jeem.fina": "\uFD51", - "/teh.medi_heh.medi": "\uFCE4", - "/teh.medi_jeem.medi_alefmaksura.fina": "\uFDA0", - "/teh.medi_jeem.medi_yeh.fina": "\uFD9F", - "/teh.medi_khah.medi_alefmaksura.fina": "\uFDA2", - "/teh.medi_khah.medi_yeh.fina": "\uFDA1", - "/teh.medi_meem.fina": "\uFC72", - "/teh.medi_meem.medi": "\uFCE3", - "/teh.medi_meem.medi_alefmaksura.fina": "\uFDA4", - "/teh.medi_meem.medi_yeh.fina": "\uFDA3", - "/teh.medi_noon.fina": "\uFC73", - "/teh.medi_reh.fina": "\uFC70", - "/teh.medi_yeh.fina": "\uFC75", - "/teh.medi_zain.fina": "\uFC71", - "/teharabic": "\u062A", - "/tehdownthreedotsabove": "\u067D", - "/teheh": "\u067F", - "/teheh.fina": "\uFB63", - "/teheh.init": "\uFB64", - "/teheh.isol": "\uFB62", - "/teheh.medi": "\uFB65", - "/tehfinalarabic": "\uFE96", - "/tehhahinitialarabic": "\uFCA2", - "/tehhahisolatedarabic": "\uFC0C", - "/tehinitialarabic": "\uFE97", - "/tehiragana": "\u3066", - "/tehjeeminitialarabic": "\uFCA1", - "/tehjeemisolatedarabic": "\uFC0B", - "/tehmarbuta": "\u0629", - "/tehmarbuta.fina": "\uFE94", - "/tehmarbuta.isol": "\uFE93", - "/tehmarbutaarabic": "\u0629", - "/tehmarbutafinalarabic": "\uFE94", - "/tehmarbutagoal": "\u06C3", - "/tehmedialarabic": "\uFE98", - "/tehmeeminitialarabic": "\uFCA4", - "/tehmeemisolatedarabic": "\uFC0E", - "/tehnoonfinalarabic": "\uFC73", - "/tehring": "\u067C", - "/tekatakana": "\u30C6", - "/tekatakanahalfwidth": "\uFF83", - "/telephone": "\u2121", - "/telephoneOnTopOfModem": "\u1F580", - "/telephoneReceiver": "\u1F4DE", - "/telephoneReceiverPage": "\u1F57C", - "/telephoneblack": "\u260E", - 
"/telephonerecorder": "\u2315", - "/telephonewhite": "\u260F", - "/telescope": "\u1F52D", - "/television": "\u1F4FA", - "/telishaGedolah:hb": "\u05A0", - "/telishaQetannah:hb": "\u05A9", - "/telishagedolahebrew": "\u05A0", - "/telishaqetanahebrew": "\u05A9", - "/telu:a": "\u0C05", - "/telu:aa": "\u0C06", - "/telu:aasign": "\u0C3E", - "/telu:ai": "\u0C10", - "/telu:ailengthmark": "\u0C56", - "/telu:aisign": "\u0C48", - "/telu:anusvarasign": "\u0C02", - "/telu:au": "\u0C14", - "/telu:ausign": "\u0C4C", - "/telu:avagrahasign": "\u0C3D", - "/telu:ba": "\u0C2C", - "/telu:bha": "\u0C2D", - "/telu:bindusigncandra": "\u0C01", - "/telu:ca": "\u0C1A", - "/telu:cha": "\u0C1B", - "/telu:combiningbinduabovesigncandra": "\u0C00", - "/telu:da": "\u0C26", - "/telu:dda": "\u0C21", - "/telu:ddha": "\u0C22", - "/telu:dha": "\u0C27", - "/telu:dza": "\u0C59", - "/telu:e": "\u0C0E", - "/telu:ee": "\u0C0F", - "/telu:eesign": "\u0C47", - "/telu:eight": "\u0C6E", - "/telu:esign": "\u0C46", - "/telu:five": "\u0C6B", - "/telu:four": "\u0C6A", - "/telu:fractiononeforevenpowersoffour": "\u0C7C", - "/telu:fractiononeforoddpowersoffour": "\u0C79", - "/telu:fractionthreeforevenpowersoffour": "\u0C7E", - "/telu:fractionthreeforoddpowersoffour": "\u0C7B", - "/telu:fractiontwoforevenpowersoffour": "\u0C7D", - "/telu:fractiontwoforoddpowersoffour": "\u0C7A", - "/telu:fractionzeroforoddpowersoffour": "\u0C78", - "/telu:ga": "\u0C17", - "/telu:gha": "\u0C18", - "/telu:ha": "\u0C39", - "/telu:i": "\u0C07", - "/telu:ii": "\u0C08", - "/telu:iisign": "\u0C40", - "/telu:isign": "\u0C3F", - "/telu:ja": "\u0C1C", - "/telu:jha": "\u0C1D", - "/telu:ka": "\u0C15", - "/telu:kha": "\u0C16", - "/telu:la": "\u0C32", - "/telu:lengthmark": "\u0C55", - "/telu:lla": "\u0C33", - "/telu:llla": "\u0C34", - "/telu:llsignvocal": "\u0C63", - "/telu:llvocal": "\u0C61", - "/telu:lsignvocal": "\u0C62", - "/telu:lvocal": "\u0C0C", - "/telu:ma": "\u0C2E", - "/telu:na": "\u0C28", - "/telu:nga": "\u0C19", - "/telu:nine": "\u0C6F", - 
"/telu:nna": "\u0C23", - "/telu:nya": "\u0C1E", - "/telu:o": "\u0C12", - "/telu:one": "\u0C67", - "/telu:oo": "\u0C13", - "/telu:oosign": "\u0C4B", - "/telu:osign": "\u0C4A", - "/telu:pa": "\u0C2A", - "/telu:pha": "\u0C2B", - "/telu:ra": "\u0C30", - "/telu:rra": "\u0C31", - "/telu:rrra": "\u0C5A", - "/telu:rrsignvocal": "\u0C44", - "/telu:rrvocal": "\u0C60", - "/telu:rsignvocal": "\u0C43", - "/telu:rvocal": "\u0C0B", - "/telu:sa": "\u0C38", - "/telu:seven": "\u0C6D", - "/telu:sha": "\u0C36", - "/telu:six": "\u0C6C", - "/telu:ssa": "\u0C37", - "/telu:ta": "\u0C24", - "/telu:tha": "\u0C25", - "/telu:three": "\u0C69", - "/telu:tsa": "\u0C58", - "/telu:tta": "\u0C1F", - "/telu:ttha": "\u0C20", - "/telu:tuumusign": "\u0C7F", - "/telu:two": "\u0C68", - "/telu:u": "\u0C09", - "/telu:usign": "\u0C41", - "/telu:uu": "\u0C0A", - "/telu:uusign": "\u0C42", - "/telu:va": "\u0C35", - "/telu:viramasign": "\u0C4D", - "/telu:visargasign": "\u0C03", - "/telu:ya": "\u0C2F", - "/telu:zero": "\u0C66", - "/ten.roman": "\u2169", - "/ten.romansmall": "\u2179", - "/tencircle": "\u2469", - "/tencircledbl": "\u24FE", - "/tencirclesquare": "\u3248", - "/tenge": "\u20B8", - "/tenhangzhou": "\u3038", - "/tenideographiccircled": "\u3289", - "/tenideographicparen": "\u3229", - "/tennisRacquetAndBall": "\u1F3BE", - "/tenparen": "\u247D", - "/tenparenthesized": "\u247D", - "/tenperiod": "\u2491", - "/tenroman": "\u2179", - "/tent": "\u26FA", - "/tenthousand.roman": "\u2182", - "/tesh": "\u02A7", - "/tet": "\u05D8", - "/tet:hb": "\u05D8", - "/tetailcyr": "\u04AD", - "/tetdagesh": "\uFB38", - "/tetdageshhebrew": "\uFB38", - "/tethebrew": "\u05D8", - "/tetrasememetrical": "\u23D8", - "/tetsecyr": "\u04B5", - "/tetsecyrillic": "\u04B5", - "/tetwithdagesh:hb": "\uFB38", - "/tevir:hb": "\u059B", - "/tevirhebrew": "\u059B", - "/tevirlefthebrew": "\u059B", - "/thabengali": "\u09A5", - "/thadeva": "\u0925", - "/thagujarati": "\u0AA5", - "/thagurmukhi": "\u0A25", - "/thai:angkhankhu": "\u0E5A", - 
"/thai:baht": "\u0E3F", - "/thai:bobaimai": "\u0E1A", - "/thai:chochan": "\u0E08", - "/thai:chochang": "\u0E0A", - "/thai:choching": "\u0E09", - "/thai:chochoe": "\u0E0C", - "/thai:dochada": "\u0E0E", - "/thai:dodek": "\u0E14", - "/thai:eight": "\u0E58", - "/thai:five": "\u0E55", - "/thai:fofa": "\u0E1D", - "/thai:fofan": "\u0E1F", - "/thai:fongman": "\u0E4F", - "/thai:four": "\u0E54", - "/thai:hohip": "\u0E2B", - "/thai:honokhuk": "\u0E2E", - "/thai:khokhai": "\u0E02", - "/thai:khokhon": "\u0E05", - "/thai:khokhuat": "\u0E03", - "/thai:khokhwai": "\u0E04", - "/thai:khomut": "\u0E5B", - "/thai:khorakhang": "\u0E06", - "/thai:kokai": "\u0E01", - "/thai:lakkhangyao": "\u0E45", - "/thai:lochula": "\u0E2C", - "/thai:loling": "\u0E25", - "/thai:lu": "\u0E26", - "/thai:maichattawa": "\u0E4B", - "/thai:maiek": "\u0E48", - "/thai:maihan-akat": "\u0E31", - "/thai:maitaikhu": "\u0E47", - "/thai:maitho": "\u0E49", - "/thai:maitri": "\u0E4A", - "/thai:maiyamok": "\u0E46", - "/thai:moma": "\u0E21", - "/thai:ngongu": "\u0E07", - "/thai:nikhahit": "\u0E4D", - "/thai:nine": "\u0E59", - "/thai:nonen": "\u0E13", - "/thai:nonu": "\u0E19", - "/thai:oang": "\u0E2D", - "/thai:one": "\u0E51", - "/thai:paiyannoi": "\u0E2F", - "/thai:phinthu": "\u0E3A", - "/thai:phophan": "\u0E1E", - "/thai:phophung": "\u0E1C", - "/thai:phosamphao": "\u0E20", - "/thai:popla": "\u0E1B", - "/thai:rorua": "\u0E23", - "/thai:ru": "\u0E24", - "/thai:saraa": "\u0E30", - "/thai:saraaa": "\u0E32", - "/thai:saraae": "\u0E41", - "/thai:saraaimaimalai": "\u0E44", - "/thai:saraaimaimuan": "\u0E43", - "/thai:saraam": "\u0E33", - "/thai:sarae": "\u0E40", - "/thai:sarai": "\u0E34", - "/thai:saraii": "\u0E35", - "/thai:sarao": "\u0E42", - "/thai:sarau": "\u0E38", - "/thai:saraue": "\u0E36", - "/thai:sarauee": "\u0E37", - "/thai:sarauu": "\u0E39", - "/thai:seven": "\u0E57", - "/thai:six": "\u0E56", - "/thai:sorusi": "\u0E29", - "/thai:sosala": "\u0E28", - "/thai:soso": "\u0E0B", - "/thai:sosua": "\u0E2A", - 
"/thai:thanthakhat": "\u0E4C", - "/thai:thonangmontho": "\u0E11", - "/thai:thophuthao": "\u0E12", - "/thai:thothahan": "\u0E17", - "/thai:thothan": "\u0E10", - "/thai:thothong": "\u0E18", - "/thai:thothung": "\u0E16", - "/thai:three": "\u0E53", - "/thai:topatak": "\u0E0F", - "/thai:totao": "\u0E15", - "/thai:two": "\u0E52", - "/thai:wowaen": "\u0E27", - "/thai:yamakkan": "\u0E4E", - "/thai:yoyak": "\u0E22", - "/thai:yoying": "\u0E0D", - "/thai:zero": "\u0E50", - "/thal": "\u0630", - "/thal.fina": "\uFEAC", - "/thal.init_superscriptalef.fina": "\uFC5B", - "/thal.isol": "\uFEAB", - "/thalarabic": "\u0630", - "/thalfinalarabic": "\uFEAC", - "/thanthakhatlowleftthai": "\uF898", - "/thanthakhatlowrightthai": "\uF897", - "/thanthakhatthai": "\u0E4C", - "/thanthakhatupperleftthai": "\uF896", - "/theh": "\u062B", - "/theh.fina": "\uFE9A", - "/theh.init": "\uFE9B", - "/theh.init_alefmaksura.fina": "\uFC13", - "/theh.init_jeem.fina": "\uFC11", - "/theh.init_meem.fina": "\uFC12", - "/theh.init_meem.medi": "\uFCA6", - "/theh.init_yeh.fina": "\uFC14", - "/theh.isol": "\uFE99", - "/theh.medi": "\uFE9C", - "/theh.medi_alefmaksura.fina": "\uFC7A", - "/theh.medi_heh.medi": "\uFCE6", - "/theh.medi_meem.fina": "\uFC78", - "/theh.medi_meem.medi": "\uFCE5", - "/theh.medi_noon.fina": "\uFC79", - "/theh.medi_reh.fina": "\uFC76", - "/theh.medi_yeh.fina": "\uFC7B", - "/theh.medi_zain.fina": "\uFC77", - "/theharabic": "\u062B", - "/thehfinalarabic": "\uFE9A", - "/thehinitialarabic": "\uFE9B", - "/thehmedialarabic": "\uFE9C", - "/thereexists": "\u2203", - "/therefore": "\u2234", - "/thermometer": "\u1F321", - "/theta": "\u03B8", - "/theta.math": "\u03D1", - "/theta1": "\u03D1", - "/thetasymbolgreek": "\u03D1", - "/thieuthacirclekorean": "\u3279", - "/thieuthaparenkorean": "\u3219", - "/thieuthcirclekorean": "\u326B", - "/thieuthkorean": "\u314C", - "/thieuthparenkorean": "\u320B", - "/thinspace": "\u2009", - "/thirteencircle": "\u246C", - "/thirteencircleblack": "\u24ED", - "/thirteenparen": 
"\u2480", - "/thirteenparenthesized": "\u2480", - "/thirteenperiod": "\u2494", - "/thirtycircle": "\u325A", - "/thirtycirclesquare": "\u324A", - "/thirtyeightcircle": "\u32B3", - "/thirtyfivecircle": "\u325F", - "/thirtyfourcircle": "\u325E", - "/thirtyhangzhou": "\u303A", - "/thirtyninecircle": "\u32B4", - "/thirtyonecircle": "\u325B", - "/thirtysevencircle": "\u32B2", - "/thirtysixcircle": "\u32B1", - "/thirtythreecircle": "\u325D", - "/thirtytwocircle": "\u325C", - "/thonangmonthothai": "\u0E11", - "/thook": "\u01AD", - "/thophuthaothai": "\u0E12", - "/thorn": "\u00FE", - "/thornstroke": "\uA765", - "/thornstrokedescender": "\uA767", - "/thothahanthai": "\u0E17", - "/thothanthai": "\u0E10", - "/thothongthai": "\u0E18", - "/thothungthai": "\u0E16", - "/thoughtBalloon": "\u1F4AD", - "/thousandcyrillic": "\u0482", - "/thousandscyr": "\u0482", - "/thousandsseparator": "\u066C", - "/thousandsseparatorarabic": "\u066C", - "/thousandsseparatorpersian": "\u066C", - "/three": "\u0033", - "/three.inferior": "\u2083", - "/three.roman": "\u2162", - "/three.romansmall": "\u2172", - "/threeButtonMouse": "\u1F5B1", - "/threeNetworkedComputers": "\u1F5A7", - "/threeRaysAbove": "\u1F5E4", - "/threeRaysBelow": "\u1F5E5", - "/threeRaysLeft": "\u1F5E6", - "/threeRaysRight": "\u1F5E7", - "/threeSpeechBubbles": "\u1F5EB", - "/threearabic": "\u0663", - "/threebengali": "\u09E9", - "/threecircle": "\u2462", - "/threecircledbl": "\u24F7", - "/threecircleinversesansserif": "\u278C", - "/threecomma": "\u1F104", - "/threedeva": "\u0969", - "/threedimensionalangle": "\u27C0", - "/threedotpunctuation": "\u2056", - "/threedotsaboveabove": "\u06DB", - "/threedsquare": "\u1F19B", - "/threeeighths": "\u215C", - "/threefar": "\u06F3", - "/threefifths": "\u2157", - "/threegujarati": "\u0AE9", - "/threegurmukhi": "\u0A69", - "/threehackarabic": "\u0663", - "/threehangzhou": "\u3023", - "/threeideographiccircled": "\u3282", - "/threeideographicparen": "\u3222", - "/threeinferior": "\u2083", - 
"/threelinesconvergingleft": "\u269F", - "/threelinesconvergingright": "\u269E", - "/threemonospace": "\uFF13", - "/threenumeratorbengali": "\u09F6", - "/threeoldstyle": "\uF733", - "/threeparen": "\u2476", - "/threeparenthesized": "\u2476", - "/threeperemspace": "\u2004", - "/threeperiod": "\u248A", - "/threepersian": "\u06F3", - "/threequarters": "\u00BE", - "/threequartersemdash": "\uF6DE", - "/threerightarrows": "\u21F6", - "/threeroman": "\u2172", - "/threesuperior": "\u00B3", - "/threethai": "\u0E53", - "/thumbsDownSign": "\u1F44E", - "/thumbsUpSign": "\u1F44D", - "/thundercloudrain": "\u26C8", - "/thunderstorm": "\u2608", - "/thzfullwidth": "\u3394", - "/thzsquare": "\u3394", - "/tibt:AA": "\u0F60", - "/tibt:a": "\u0F68", - "/tibt:aavowelsign": "\u0F71", - "/tibt:angkhanggyasmark": "\u0F3D", - "/tibt:angkhanggyonmark": "\u0F3C", - "/tibt:astrologicalkhyudpasign": "\u0F18", - "/tibt:astrologicalsdongtshugssign": "\u0F19", - "/tibt:astrologicalsgragcancharrtagssign": "\u0F17", - "/tibt:asubjoined": "\u0FB8", - "/tibt:ba": "\u0F56", - "/tibt:basubjoined": "\u0FA6", - "/tibt:bha": "\u0F57", - "/tibt:bhasubjoined": "\u0FA7", - "/tibt:bkashogyigmgomark": "\u0F0A", - "/tibt:brdarnyingyigmgomdunmainitialmark": "\u0FD3", - "/tibt:brdarnyingyigmgosgabmaclosingmark": "\u0FD4", - "/tibt:bsdusrtagsmark": "\u0F34", - "/tibt:bskashoggimgorgyanmark": "\u0FD0", - "/tibt:bskuryigmgomark": "\u0F09", - "/tibt:ca": "\u0F45", - "/tibt:cangteucantillationsign": "\u0FC2", - "/tibt:caretdzudrtagsbzhimigcanmark": "\u0F36", - "/tibt:caretdzudrtagsmelongcanmark": "\u0F13", - "/tibt:caretyigmgophurshadmamark": "\u0F06", - "/tibt:casubjoined": "\u0F95", - "/tibt:cha": "\u0F46", - "/tibt:chadrtagslogotypesign": "\u0F15", - "/tibt:chasubjoined": "\u0F96", - "/tibt:chemgomark": "\u0F38", - "/tibt:da": "\u0F51", - "/tibt:dasubjoined": "\u0FA1", - "/tibt:dda": "\u0F4C", - "/tibt:ddasubjoined": "\u0F9C", - "/tibt:ddha": "\u0F4D", - "/tibt:ddhasubjoined": "\u0F9D", - 
"/tibt:delimitertshegbstarmark": "\u0F0C", - "/tibt:dha": "\u0F52", - "/tibt:dhasubjoined": "\u0FA2", - "/tibt:drilbusymbol": "\u0FC4", - "/tibt:dza": "\u0F5B", - "/tibt:dzasubjoined": "\u0FAB", - "/tibt:dzha": "\u0F5C", - "/tibt:dzhasubjoined": "\u0FAC", - "/tibt:eevowelsign": "\u0F7B", - "/tibt:eight": "\u0F28", - "/tibt:evowelsign": "\u0F7A", - "/tibt:five": "\u0F25", - "/tibt:four": "\u0F24", - "/tibt:ga": "\u0F42", - "/tibt:gasubjoined": "\u0F92", - "/tibt:gha": "\u0F43", - "/tibt:ghasubjoined": "\u0F93", - "/tibt:grucanrgyingssign": "\u0F8A", - "/tibt:grumedrgyingssign": "\u0F8B", - "/tibt:gtertshegmark": "\u0F14", - "/tibt:gteryigmgotruncatedamark": "\u0F01", - "/tibt:gteryigmgoumgtertshegmamark": "\u0F03", - "/tibt:gteryigmgoumrnambcadmamark": "\u0F02", - "/tibt:gugrtagsgyasmark": "\u0F3B", - "/tibt:gugrtagsgyonmark": "\u0F3A", - "/tibt:ha": "\u0F67", - "/tibt:halantamark": "\u0F84", - "/tibt:halfeight": "\u0F31", - "/tibt:halffive": "\u0F2E", - "/tibt:halffour": "\u0F2D", - "/tibt:halfnine": "\u0F32", - "/tibt:halfone": "\u0F2A", - "/tibt:halfseven": "\u0F30", - "/tibt:halfsix": "\u0F2F", - "/tibt:halfthree": "\u0F2C", - "/tibt:halftwo": "\u0F2B", - "/tibt:halfzero": "\u0F33", - "/tibt:hasubjoined": "\u0FB7", - "/tibt:heavybeatcantillationsign": "\u0FC0", - "/tibt:iivowelsign": "\u0F73", - "/tibt:intersyllabictshegmark": "\u0F0B", - "/tibt:invertedmchucansign": "\u0F8C", - "/tibt:invertedmchucansubjoinedsign": "\u0F8F", - "/tibt:ivowelsign": "\u0F72", - "/tibt:ja": "\u0F47", - "/tibt:jasubjoined": "\u0F97", - "/tibt:ka": "\u0F40", - "/tibt:kasubjoined": "\u0F90", - "/tibt:kha": "\u0F41", - "/tibt:khasubjoined": "\u0F91", - "/tibt:kka": "\u0F6B", - "/tibt:kssa": "\u0F69", - "/tibt:kssasubjoined": "\u0FB9", - "/tibt:kurukha": "\u0FBE", - "/tibt:kurukhabzhimigcan": "\u0FBF", - "/tibt:la": "\u0F63", - "/tibt:lasubjoined": "\u0FB3", - "/tibt:lcetsacansign": "\u0F88", - "/tibt:lcetsacansubjoinedsign": "\u0F8D", - "/tibt:lcirtagssign": "\u0F86", - 
"/tibt:leadingmchanrtagsmark": "\u0FD9", - "/tibt:lhagrtagslogotypesign": "\u0F16", - "/tibt:lightbeatcantillationsign": "\u0FC1", - "/tibt:llvocalicvowelsign": "\u0F79", - "/tibt:lvocalicvowelsign": "\u0F78", - "/tibt:ma": "\u0F58", - "/tibt:martshessign": "\u0F3F", - "/tibt:masubjoined": "\u0FA8", - "/tibt:mchucansign": "\u0F89", - "/tibt:mchucansubjoinedsign": "\u0F8E", - "/tibt:mnyamyiggimgorgyanmark": "\u0FD1", - "/tibt:na": "\u0F53", - "/tibt:nasubjoined": "\u0FA3", - "/tibt:nga": "\u0F44", - "/tibt:ngasbzungnyizlamark": "\u0F35", - "/tibt:ngasbzungsgorrtagsmark": "\u0F37", - "/tibt:ngasubjoined": "\u0F94", - "/tibt:nine": "\u0F29", - "/tibt:nna": "\u0F4E", - "/tibt:nnasubjoined": "\u0F9E", - "/tibt:norbubzhikhyilsymbol": "\u0FCC", - "/tibt:norbugsumkhyilsymbol": "\u0FCB", - "/tibt:norbunyiskhyilsymbol": "\u0FCA", - "/tibt:norbusymbol": "\u0FC9", - "/tibt:nya": "\u0F49", - "/tibt:nyasubjoined": "\u0F99", - "/tibt:nyisshadmark": "\u0F0E", - "/tibt:nyistshegmark": "\u0FD2", - "/tibt:nyistshegshadmark": "\u0F10", - "/tibt:nyizlanaadasign": "\u0F82", - "/tibt:omsyllable": "\u0F00", - "/tibt:one": "\u0F21", - "/tibt:oovowelsign": "\u0F7D", - "/tibt:ovowelsign": "\u0F7C", - "/tibt:pa": "\u0F54", - "/tibt:padmagdansymbol": "\u0FC6", - "/tibt:palutamark": "\u0F85", - "/tibt:pasubjoined": "\u0FA4", - "/tibt:pha": "\u0F55", - "/tibt:phasubjoined": "\u0FA5", - "/tibt:phurpasymbol": "\u0FC8", - "/tibt:ra": "\u0F62", - "/tibt:rafixed": "\u0F6A", - "/tibt:rasubjoined": "\u0FB2", - "/tibt:rasubjoinedfixed": "\u0FBC", - "/tibt:rdeldkargcigsign": "\u0F1A", - "/tibt:rdeldkargnyissign": "\u0F1B", - "/tibt:rdeldkargsumsign": "\u0F1C", - "/tibt:rdeldkarrdelnagsign": "\u0F1F", - "/tibt:rdelnaggcigsign": "\u0F1D", - "/tibt:rdelnaggnyissign": "\u0F1E", - "/tibt:rdelnaggsumsign": "\u0FCF", - "/tibt:rdelnagrdeldkarsign": "\u0FCE", - "/tibt:rdorjergyagramsymbol": "\u0FC7", - "/tibt:rdorjesymbol": "\u0FC5", - "/tibt:reversediivowelsign": "\u0F81", - "/tibt:reversedivowelsign": "\u0F80", 
- "/tibt:rgyagramshadmark": "\u0F12", - "/tibt:rinchenspungsshadmark": "\u0F11", - "/tibt:rjessungarosign": "\u0F7E", - "/tibt:rnambcadsign": "\u0F7F", - "/tibt:rra": "\u0F6C", - "/tibt:rrvocalicvowelsign": "\u0F77", - "/tibt:rvocalicvowelsign": "\u0F76", - "/tibt:sa": "\u0F66", - "/tibt:sasubjoined": "\u0FB6", - "/tibt:sbrulshadmark": "\u0F08", - "/tibt:sbubchalcantillationsign": "\u0FC3", - "/tibt:seven": "\u0F27", - "/tibt:sha": "\u0F64", - "/tibt:shadmark": "\u0F0D", - "/tibt:shasubjoined": "\u0FB4", - "/tibt:six": "\u0F26", - "/tibt:snaldansign": "\u0F83", - "/tibt:ssa": "\u0F65", - "/tibt:ssasubjoined": "\u0FB5", - "/tibt:subjoinedAA": "\u0FB0", - "/tibt:svastileft": "\u0FD6", - "/tibt:svastileftdot": "\u0FD8", - "/tibt:svastiright": "\u0FD5", - "/tibt:svastirightdot": "\u0FD7", - "/tibt:ta": "\u0F4F", - "/tibt:tasubjoined": "\u0F9F", - "/tibt:tha": "\u0F50", - "/tibt:thasubjoined": "\u0FA0", - "/tibt:three": "\u0F23", - "/tibt:trailingmchanrtagsmark": "\u0FDA", - "/tibt:tsa": "\u0F59", - "/tibt:tsaphrumark": "\u0F39", - "/tibt:tsasubjoined": "\u0FA9", - "/tibt:tsha": "\u0F5A", - "/tibt:tshasubjoined": "\u0FAA", - "/tibt:tshegshadmark": "\u0F0F", - "/tibt:tta": "\u0F4A", - "/tibt:ttasubjoined": "\u0F9A", - "/tibt:ttha": "\u0F4B", - "/tibt:tthasubjoined": "\u0F9B", - "/tibt:two": "\u0F22", - "/tibt:uuvowelsign": "\u0F75", - "/tibt:uvowelsign": "\u0F74", - "/tibt:wa": "\u0F5D", - "/tibt:wasubjoined": "\u0FAD", - "/tibt:wasubjoinedfixed": "\u0FBA", - "/tibt:ya": "\u0F61", - "/tibt:yangrtagssign": "\u0F87", - "/tibt:yartshessign": "\u0F3E", - "/tibt:yasubjoined": "\u0FB1", - "/tibt:yasubjoinedfixed": "\u0FBB", - "/tibt:yigmgomdunmainitialmark": "\u0F04", - "/tibt:yigmgosgabmaclosingmark": "\u0F05", - "/tibt:yigmgotshegshadmamark": "\u0F07", - "/tibt:za": "\u0F5F", - "/tibt:zasubjoined": "\u0FAF", - "/tibt:zero": "\u0F20", - "/tibt:zha": "\u0F5E", - "/tibt:zhasubjoined": "\u0FAE", - "/ticirclekatakana": "\u32E0", - "/tickconvavediamondleftwhite": "\u27E2", - 
"/tickconvavediamondrightwhite": "\u27E3", - "/ticket": "\u1F3AB", - "/tickleftwhitesquare": "\u27E4", - "/tickrightwhitesquare": "\u27E5", - "/tifcha:hb": "\u0596", - "/tiger": "\u1F405", - "/tigerFace": "\u1F42F", - "/tihiragana": "\u3061", - "/tikatakana": "\u30C1", - "/tikatakanahalfwidth": "\uFF81", - "/tikeutacirclekorean": "\u3270", - "/tikeutaparenkorean": "\u3210", - "/tikeutcirclekorean": "\u3262", - "/tikeutkorean": "\u3137", - "/tikeutparenkorean": "\u3202", - "/tilde": "\u02DC", - "/tildebelowcmb": "\u0330", - "/tildecmb": "\u0303", - "/tildecomb": "\u0303", - "/tildediaeresisfunc": "\u2368", - "/tildedotaccent": "\u2E1E", - "/tildedotbelow": "\u2E1F", - "/tildedoublecmb": "\u0360", - "/tildeequalsreversed": "\u22CD", - "/tildelowmod": "\u02F7", - "/tildeoperator": "\u223C", - "/tildeoverlaycmb": "\u0334", - "/tildereversed": "\u223D", - "/tildering": "\u2E1B", - "/tildetpl": "\u224B", - "/tildeverticalcmb": "\u033E", - "/timerclock": "\u23F2", - "/timescircle": "\u2297", - "/tinsular": "\uA787", - "/tipehahebrew": "\u0596", - "/tipehalefthebrew": "\u0596", - "/tippigurmukhi": "\u0A70", - "/tiredFace": "\u1F62B", - "/tironiansignet": "\u204A", - "/tirtatumetespada": "\uA9DE", - "/titlocmbcyr": "\u0483", - "/titlocyrilliccmb": "\u0483", - "/tiwnarmenian": "\u057F", - "/tjekomicyr": "\u050F", - "/tlinebelow": "\u1E6F", - "/tmonospace": "\uFF54", - "/toarmenian": "\u0569", - "/tocirclekatakana": "\u32E3", - "/tocornerarrowNW": "\u21F1", - "/tocornerarrowSE": "\u21F2", - "/tohiragana": "\u3068", - "/toilet": "\u1F6BD", - "/tokatakana": "\u30C8", - "/tokatakanahalfwidth": "\uFF84", - "/tokyoTower": "\u1F5FC", - "/tolongvowel": "\uA9B5", - "/tomato": "\u1F345", - "/tonebarextrahighmod": "\u02E5", - "/tonebarextralowmod": "\u02E9", - "/tonebarhighmod": "\u02E6", - "/tonebarlowmod": "\u02E8", - "/tonebarmidmod": "\u02E7", - "/tonefive": "\u01BD", - "/tonehighbeginmod": "\u02F9", - "/tonehighendmod": "\u02FA", - "/tonelowbeginmod": "\u02FB", - "/tonelowendmod": 
"\u02FC", - "/tonesix": "\u0185", - "/tonetwo": "\u01A8", - "/tongue": "\u1F445", - "/tonos": "\u0384", - "/tonsquare": "\u3327", - "/topHat": "\u1F3A9", - "/topUpwardsArrowAbove": "\u1F51D", - "/topatakthai": "\u0E0F", - "/tortoiseshellbracketleft": "\u3014", - "/tortoiseshellbracketleftsmall": "\uFE5D", - "/tortoiseshellbracketleftvertical": "\uFE39", - "/tortoiseshellbracketright": "\u3015", - "/tortoiseshellbracketrightsmall": "\uFE5E", - "/tortoiseshellbracketrightvertical": "\uFE3A", - "/totalrunout": "\u2330", - "/totaothai": "\u0E15", - "/tpalatalhook": "\u01AB", - "/tparen": "\u24AF", - "/tparenthesized": "\u24AF", - "/trackball": "\u1F5B2", - "/tractor": "\u1F69C", - "/trademark": "\u2122", - "/trademarksans": "\uF8EA", - "/trademarkserif": "\uF6DB", - "/train": "\u1F686", - "/tram": "\u1F68A", - "/tramCar": "\u1F68B", - "/trapeziumwhite": "\u23E2", - "/tresillo": "\uA72B", - "/tretroflex": "\u0288", - "/tretroflexhook": "\u0288", - "/triagdn": "\u25BC", - "/triaglf": "\u25C4", - "/triagrt": "\u25BA", - "/triagup": "\u25B2", - "/triangleWithRoundedCorners": "\u1F6C6", - "/triangledotupwhite": "\u25EC", - "/triangledownblack": "\u25BC", - "/triangledownsmallblack": "\u25BE", - "/triangledownsmallwhite": "\u25BF", - "/triangledownwhite": "\u25BD", - "/trianglehalfupleftblack": "\u25ED", - "/trianglehalfuprightblack": "\u25EE", - "/triangleleftblack": "\u25C0", - "/triangleleftsmallblack": "\u25C2", - "/triangleleftsmallwhite": "\u25C3", - "/triangleleftwhite": "\u25C1", - "/triangleright": "\u22BF", - "/trianglerightblack": "\u25B6", - "/trianglerightsmallblack": "\u25B8", - "/trianglerightsmallwhite": "\u25B9", - "/trianglerightwhite": "\u25B7", - "/triangleupblack": "\u25B2", - "/triangleupsmallblack": "\u25B4", - "/triangleupsmallwhite": "\u25B5", - "/triangleupwhite": "\u25B3", - "/triangularFlagOnPost": "\u1F6A9", - "/triangularRuler": "\u1F4D0", - "/triangularbullet": "\u2023", - "/tricolon": "\u205D", - "/tricontainingtriwhiteanglesmall": "\u27C1", - 
"/tridentEmblem": "\u1F531", - "/trigramearth": "\u2637", - "/trigramfire": "\u2632", - "/trigramheaven": "\u2630", - "/trigramlake": "\u2631", - "/trigrammountain": "\u2636", - "/trigramthunder": "\u2633", - "/trigramwater": "\u2635", - "/trigramwind": "\u2634", - "/triplearrowleft": "\u21DA", - "/triplearrowright": "\u21DB", - "/tripledot": "\u061E", - "/trisememetrical": "\u23D7", - "/trns:baby": "\u1F6BC", - "/trolleybus": "\u1F68E", - "/trophy": "\u1F3C6", - "/tropicalDrink": "\u1F379", - "/tropicalFish": "\u1F420", - "/truckblack": "\u26DF", - "/true": "\u22A8", - "/trumpet": "\u1F3BA", - "/ts": "\u02A6", - "/tsadi": "\u05E6", - "/tsadi:hb": "\u05E6", - "/tsadidagesh": "\uFB46", - "/tsadidageshhebrew": "\uFB46", - "/tsadihebrew": "\u05E6", - "/tsadiwithdagesh:hb": "\uFB46", - "/tsecyr": "\u0446", - "/tsecyrillic": "\u0446", - "/tsere": "\u05B5", - "/tsere12": "\u05B5", - "/tsere1e": "\u05B5", - "/tsere2b": "\u05B5", - "/tsere:hb": "\u05B5", - "/tserehebrew": "\u05B5", - "/tserenarrowhebrew": "\u05B5", - "/tserequarterhebrew": "\u05B5", - "/tserewidehebrew": "\u05B5", - "/tshecyr": "\u045B", - "/tshecyrillic": "\u045B", - "/tsinnorit:hb": "\u05AE", - "/tstroke": "\u2C66", - "/tsuperior": "\uF6F3", - "/ttabengali": "\u099F", - "/ttadeva": "\u091F", - "/ttagujarati": "\u0A9F", - "/ttagurmukhi": "\u0A1F", - "/ttamahaprana": "\uA99C", - "/tteh": "\u0679", - "/tteh.fina": "\uFB67", - "/tteh.init": "\uFB68", - "/tteh.isol": "\uFB66", - "/tteh.medi": "\uFB69", - "/tteharabic": "\u0679", - "/tteheh": "\u067A", - "/tteheh.fina": "\uFB5F", - "/tteheh.init": "\uFB60", - "/tteheh.isol": "\uFB5E", - "/tteheh.medi": "\uFB61", - "/ttehfinalarabic": "\uFB67", - "/ttehinitialarabic": "\uFB68", - "/ttehmedialarabic": "\uFB69", - "/tthabengali": "\u09A0", - "/tthadeva": "\u0920", - "/tthagujarati": "\u0AA0", - "/tthagurmukhi": "\u0A20", - "/tturned": "\u0287", - "/tucirclekatakana": "\u32E1", - "/tugrik": "\u20AE", - "/tuhiragana": "\u3064", - "/tukatakana": "\u30C4", - 
"/tukatakanahalfwidth": "\uFF82", - "/tulip": "\u1F337", - "/tum": "\uA777", - "/turkishlira": "\u20BA", - "/turnedOkHandSign": "\u1F58F", - "/turnedcomma": "\u2E32", - "/turneddagger": "\u2E38", - "/turneddigitthree": "\u218B", - "/turneddigittwo": "\u218A", - "/turnedpiselehpada": "\uA9CD", - "/turnedsemicolon": "\u2E35", - "/turnedshogipieceblack": "\u26CA", - "/turnedshogipiecewhite": "\u26C9", - "/turnstiledblverticalbarright": "\u22AB", - "/turnstileleftrightdbl": "\u27DA", - "/turnstiletplverticalbarright": "\u22AA", - "/turtle": "\u1F422", - "/tusmallhiragana": "\u3063", - "/tusmallkatakana": "\u30C3", - "/tusmallkatakanahalfwidth": "\uFF6F", - "/twelve.roman": "\u216B", - "/twelve.romansmall": "\u217B", - "/twelvecircle": "\u246B", - "/twelvecircleblack": "\u24EC", - "/twelveparen": "\u247F", - "/twelveparenthesized": "\u247F", - "/twelveperiod": "\u2493", - "/twelveroman": "\u217B", - "/twenty-twopointtwosquare": "\u1F1A2", - "/twentycircle": "\u2473", - "/twentycircleblack": "\u24F4", - "/twentycirclesquare": "\u3249", - "/twentyeightcircle": "\u3258", - "/twentyfivecircle": "\u3255", - "/twentyfourcircle": "\u3254", - "/twentyhangzhou": "\u5344", - "/twentyninecircle": "\u3259", - "/twentyonecircle": "\u3251", - "/twentyparen": "\u2487", - "/twentyparenthesized": "\u2487", - "/twentyperiod": "\u249B", - "/twentysevencircle": "\u3257", - "/twentysixcircle": "\u3256", - "/twentythreecircle": "\u3253", - "/twentytwocircle": "\u3252", - "/twistedRightwardsArrows": "\u1F500", - "/two": "\u0032", - "/two.inferior": "\u2082", - "/two.roman": "\u2161", - "/two.romansmall": "\u2171", - "/twoButtonMouse": "\u1F5B0", - "/twoHearts": "\u1F495", - "/twoMenHoldingHands": "\u1F46C", - "/twoSpeechBubbles": "\u1F5EA", - "/twoWomenHoldingHands": "\u1F46D", - "/twoarabic": "\u0662", - "/twoasterisksalignedvertically": "\u2051", - "/twobengali": "\u09E8", - "/twocircle": "\u2461", - "/twocircledbl": "\u24F6", - "/twocircleinversesansserif": "\u278B", - "/twocomma": 
"\u1F103", - "/twodeva": "\u0968", - "/twodotenleader": "\u2025", - "/twodotleader": "\u2025", - "/twodotleadervertical": "\uFE30", - "/twodotpunctuation": "\u205A", - "/twodotsoveronedot": "\u2E2A", - "/twofar": "\u06F2", - "/twofifths": "\u2156", - "/twogujarati": "\u0AE8", - "/twogurmukhi": "\u0A68", - "/twohackarabic": "\u0662", - "/twohangzhou": "\u3022", - "/twoideographiccircled": "\u3281", - "/twoideographicparen": "\u3221", - "/twoinferior": "\u2082", - "/twoksquare": "\u1F19D", - "/twomonospace": "\uFF12", - "/twonumeratorbengali": "\u09F5", - "/twooldstyle": "\uF732", - "/twoparen": "\u2475", - "/twoparenthesized": "\u2475", - "/twoperiod": "\u2489", - "/twopersian": "\u06F2", - "/tworoman": "\u2171", - "/twoshortsjoinedmetrical": "\u23D6", - "/twoshortsoverlongmetrical": "\u23D5", - "/twostroke": "\u01BB", - "/twosuperior": "\u00B2", - "/twothai": "\u0E52", - "/twothirds": "\u2154", - "/twowayleftwaytrafficblack": "\u26D6", - "/twowayleftwaytrafficwhite": "\u26D7", - "/tz": "\uA729", - "/u": "\u0075", - "/u.fina": "\uFBD8", - "/u.isol": "\uFBD7", - "/uacute": "\u00FA", - "/uacutedblcyr": "\u04F3", - "/ubar": "\u0289", - "/ubengali": "\u0989", - "/ubopomofo": "\u3128", - "/ubracketleft": "\u2E26", - "/ubracketright": "\u2E27", - "/ubreve": "\u016D", - "/ucaron": "\u01D4", - "/ucircle": "\u24E4", - "/ucirclekatakana": "\u32D2", - "/ucircumflex": "\u00FB", - "/ucircumflexbelow": "\u1E77", - "/ucyr": "\u0443", - "/ucyrillic": "\u0443", - "/udattadeva": "\u0951", - "/udblacute": "\u0171", - "/udblgrave": "\u0215", - "/udeva": "\u0909", - "/udieresis": "\u00FC", - "/udieresisacute": "\u01D8", - "/udieresisbelow": "\u1E73", - "/udieresiscaron": "\u01DA", - "/udieresiscyr": "\u04F1", - "/udieresiscyrillic": "\u04F1", - "/udieresisgrave": "\u01DC", - "/udieresismacron": "\u01D6", - "/udotbelow": "\u1EE5", - "/ugrave": "\u00F9", - "/ugravedbl": "\u0215", - "/ugujarati": "\u0A89", - "/ugurmukhi": "\u0A09", - "/uhamza": "\u0677", - "/uhamza.isol": "\uFBDD", - 
"/uhdsquare": "\u1F1AB", - "/uhiragana": "\u3046", - "/uhoi": "\u1EE7", - "/uhookabove": "\u1EE7", - "/uhorn": "\u01B0", - "/uhornacute": "\u1EE9", - "/uhorndotbelow": "\u1EF1", - "/uhorngrave": "\u1EEB", - "/uhornhoi": "\u1EED", - "/uhornhookabove": "\u1EED", - "/uhorntilde": "\u1EEF", - "/uhungarumlaut": "\u0171", - "/uhungarumlautcyrillic": "\u04F3", - "/uighurkazakhkirghizalefmaksura.init": "\uFBE8", - "/uighurkazakhkirghizalefmaksura.medi": "\uFBE9", - "/uighurkirghizyeh.init_hamzaabove.medi_alefmaksura.fina": "\uFBF9", - "/uighurkirghizyeh.init_hamzaabove.medi_alefmaksura.medi": "\uFBFB", - "/uighurkirghizyeh.medi_hamzaabove.medi_alefmaksura.fina": "\uFBFA", - "/uinvertedbreve": "\u0217", - "/ukatakana": "\u30A6", - "/ukatakanahalfwidth": "\uFF73", - "/ukcyr": "\u0479", - "/ukcyrillic": "\u0479", - "/ukorean": "\u315C", - "/um": "\uA778", - "/umacron": "\u016B", - "/umacroncyr": "\u04EF", - "/umacroncyrillic": "\u04EF", - "/umacrondieresis": "\u1E7B", - "/umatragurmukhi": "\u0A41", - "/umbrella": "\u2602", - "/umbrellaonground": "\u26F1", - "/umbrellaraindrops": "\u2614", - "/umonospace": "\uFF55", - "/unamusedFace": "\u1F612", - "/unaspiratedmod": "\u02ED", - "/underscore": "\u005F", - "/underscorecenterline": "\uFE4E", - "/underscoredashed": "\uFE4D", - "/underscoredbl": "\u2017", - "/underscoremonospace": "\uFF3F", - "/underscorevertical": "\uFE33", - "/underscorewavy": "\uFE4F", - "/underscorewavyvertical": "\uFE34", - "/undertie": "\u203F", - "/undo": "\u238C", - "/union": "\u222A", - "/unionarray": "\u22C3", - "/uniondbl": "\u22D3", - "/universal": "\u2200", - "/unmarriedpartnership": "\u26AF", - "/uogonek": "\u0173", - "/uonsquare": "\u3306", - "/upPointingAirplane": "\u1F6E7", - "/upPointingMilitaryAirplane": "\u1F6E6", - "/upPointingSmallAirplane": "\u1F6E8", - "/uparen": "\u24B0", - "/uparenthesized": "\u24B0", - "/uparrowleftofdownarrow": "\u21C5", - "/upblock": "\u2580", - "/updblhorzsng": "\u2568", - "/updblleftsng": "\u255C", - "/updblrightsng": 
"\u2559", - "/upheavydnhorzlight": "\u2540", - "/upheavyhorzlight": "\u2538", - "/upheavyleftdnlight": "\u2526", - "/upheavyleftlight": "\u251A", - "/upheavyrightdnlight": "\u251E", - "/upheavyrightlight": "\u2516", - "/uplightdnhorzheavy": "\u2548", - "/uplighthorzheavy": "\u2537", - "/uplightleftdnheavy": "\u252A", - "/uplightleftheavy": "\u2519", - "/uplightrightdnheavy": "\u2522", - "/uplightrightheavy": "\u2515", - "/upperHalfBlock": "\u2580", - "/upperOneEighthBlock": "\u2594", - "/upperRightShadowedWhiteCircle": "\u1F53F", - "/upperdothebrew": "\u05C4", - "/upperhalfcircle": "\u25E0", - "/upperhalfcircleinversewhite": "\u25DA", - "/upperquadrantcirculararcleft": "\u25DC", - "/upperquadrantcirculararcright": "\u25DD", - "/uppertriangleleft": "\u25F8", - "/uppertriangleleftblack": "\u25E4", - "/uppertriangleright": "\u25F9", - "/uppertrianglerightblack": "\u25E5", - "/upsideDownFace": "\u1F643", - "/upsilon": "\u03C5", - "/upsilonacute": "\u1F7B", - "/upsilonasper": "\u1F51", - "/upsilonasperacute": "\u1F55", - "/upsilonaspergrave": "\u1F53", - "/upsilonaspertilde": "\u1F57", - "/upsilonbreve": "\u1FE0", - "/upsilondieresis": "\u03CB", - "/upsilondieresisacute": "\u1FE3", - "/upsilondieresisgrave": "\u1FE2", - "/upsilondieresistilde": "\u1FE7", - "/upsilondieresistonos": "\u03B0", - "/upsilongrave": "\u1F7A", - "/upsilonlatin": "\u028A", - "/upsilonlenis": "\u1F50", - "/upsilonlenisacute": "\u1F54", - "/upsilonlenisgrave": "\u1F52", - "/upsilonlenistilde": "\u1F56", - "/upsilontilde": "\u1FE6", - "/upsilontonos": "\u03CD", - "/upsilonwithmacron": "\u1FE1", - "/upsnghorzdbl": "\u2567", - "/upsngleftdbl": "\u255B", - "/upsngrightdbl": "\u2558", - "/uptackbelowcmb": "\u031D", - "/uptackmod": "\u02D4", - "/upwithexclamationmarksquare": "\u1F199", - "/uragurmukhi": "\u0A73", - "/uranus": "\u2645", - "/uring": "\u016F", - "/ushortcyr": "\u045E", - "/ushortcyrillic": "\u045E", - "/usmallhiragana": "\u3045", - "/usmallkatakana": "\u30A5", - "/usmallkatakanahalfwidth": 
"\uFF69", - "/usmod": "\uA770", - "/ustraightcyr": "\u04AF", - "/ustraightcyrillic": "\u04AF", - "/ustraightstrokecyr": "\u04B1", - "/ustraightstrokecyrillic": "\u04B1", - "/utilde": "\u0169", - "/utildeacute": "\u1E79", - "/utildebelow": "\u1E75", - "/uubengali": "\u098A", - "/uudeva": "\u090A", - "/uugujarati": "\u0A8A", - "/uugurmukhi": "\u0A0A", - "/uumatragurmukhi": "\u0A42", - "/uuvowelsignbengali": "\u09C2", - "/uuvowelsigndeva": "\u0942", - "/uuvowelsigngujarati": "\u0AC2", - "/uvowelsignbengali": "\u09C1", - "/uvowelsigndeva": "\u0941", - "/uvowelsigngujarati": "\u0AC1", - "/v": "\u0076", - "/vadeva": "\u0935", - "/vagujarati": "\u0AB5", - "/vagurmukhi": "\u0A35", - "/vakatakana": "\u30F7", - "/vanedownfunc": "\u2356", - "/vaneleftfunc": "\u2345", - "/vanerightfunc": "\u2346", - "/vaneupfunc": "\u234F", - "/varikajudeospanish:hb": "\uFB1E", - "/vav": "\u05D5", - "/vav:hb": "\u05D5", - "/vav_vav:hb": "\u05F0", - "/vav_yod:hb": "\u05F1", - "/vavdagesh": "\uFB35", - "/vavdagesh65": "\uFB35", - "/vavdageshhebrew": "\uFB35", - "/vavhebrew": "\u05D5", - "/vavholam": "\uFB4B", - "/vavholamhebrew": "\uFB4B", - "/vavvavhebrew": "\u05F0", - "/vavwithdagesh:hb": "\uFB35", - "/vavwithholam:hb": "\uFB4B", - "/vavyodhebrew": "\u05F1", - "/vcircle": "\u24E5", - "/vcurl": "\u2C74", - "/vdiagonalstroke": "\uA75F", - "/vdotbelow": "\u1E7F", - "/ve.fina": "\uFBDF", - "/ve.isol": "\uFBDE", - "/ve:abovetonecandra": "\u1CF4", - "/ve:anusvaraantargomukhasign": "\u1CE9", - "/ve:anusvarabahirgomukhasign": "\u1CEA", - "/ve:anusvarasignlong": "\u1CEF", - "/ve:anusvaraubhayatomukhasign": "\u1CF1", - "/ve:anusvaravamagomukhasign": "\u1CEB", - "/ve:anusvaravamagomukhawithtailsign": "\u1CEC", - "/ve:ardhavisargasign": "\u1CF2", - "/ve:atharvaindependentsvaritatone": "\u1CE1", - "/ve:atikramasign": "\u1CF7", - "/ve:belowtonecandra": "\u1CD8", - "/ve:dotbelowtone": "\u1CDD", - "/ve:hexiformanusvarasignlong": "\u1CEE", - "/ve:jihvamuliyasign": "\u1CF5", - "/ve:karshanatone": "\u1CD0", - 
"/ve:kathakaanudattatone": "\u1CDC", - "/ve:nihshvasasign": "\u1CD3", - "/ve:prenkhatone": "\u1CD2", - "/ve:rigkashmiriindependentsvaritatone": "\u1CE0", - "/ve:ringabovetone": "\u1CF8", - "/ve:ringabovetonedbl": "\u1CF9", - "/ve:rotatedardhavisargasign": "\u1CF3", - "/ve:rthanganusvarasignlong": "\u1CF0", - "/ve:sharatone": "\u1CD1", - "/ve:svaritatonedbl": "\u1CDA", - "/ve:svaritatonetpl": "\u1CDB", - "/ve:threedotsbelowtone": "\u1CDF", - "/ve:tiryaksign": "\u1CED", - "/ve:twodotsbelowtone": "\u1CDE", - "/ve:upadhmaniyasign": "\u1CF6", - "/ve:visargaanudattasign": "\u1CE5", - "/ve:visargaanudattasignreversed": "\u1CE6", - "/ve:visargaanudattawithtailsign": "\u1CE8", - "/ve:visargasvaritasign": "\u1CE2", - "/ve:visargaudattasign": "\u1CE3", - "/ve:visargaudattasignreversed": "\u1CE4", - "/ve:visargaudattawithtailsign": "\u1CE7", - "/ve:yajuraggravatedindependentsvaritatone": "\u1CD5", - "/ve:yajurindependentsvaritatone": "\u1CD6", - "/ve:yajurkathakaindependentsvaritaschroedertone": "\u1CD9", - "/ve:yajurkathakaindependentsvaritatone": "\u1CD7", - "/ve:yajurmidlinesvaritasign": "\u1CD4", - "/vecyr": "\u0432", - "/vecyrillic": "\u0432", - "/veh": "\u06A4", - "/veh.fina": "\uFB6B", - "/veh.init": "\uFB6C", - "/veh.isol": "\uFB6A", - "/veh.medi": "\uFB6D", - "/veharabic": "\u06A4", - "/vehfinalarabic": "\uFB6B", - "/vehinitialarabic": "\uFB6C", - "/vehmedialarabic": "\uFB6D", - "/vekatakana": "\u30F9", - "/vend": "\uA769", - "/venus": "\u2640", - "/versicle": "\u2123", - "/vert:bracketwhiteleft": "\uFE17", - "/vert:brakcetwhiteright": "\uFE18", - "/vert:colon": "\uFE13", - "/vert:comma": "\uFE10", - "/vert:ellipsishor": "\uFE19", - "/vert:exclam": "\uFE15", - "/vert:ideographiccomma": "\uFE11", - "/vert:ideographicfullstop": "\uFE12", - "/vert:question": "\uFE16", - "/vert:semicolon": "\uFE14", - "/vertdblhorzsng": "\u256B", - "/vertdblleftsng": "\u2562", - "/vertdblrightsng": "\u255F", - "/vertheavyhorzlight": "\u2542", - "/vertheavyleftlight": "\u2528", - 
"/vertheavyrightlight": "\u2520", - "/verticalTrafficLight": "\u1F6A6", - "/verticalbar": "\u007C", - "/verticalbardbl": "\u2016", - "/verticalbarhorizontalstroke": "\u27CA", - "/verticalbarwhitearrowonpedestalup": "\u21ED", - "/verticalfourdots": "\u205E", - "/verticalideographiciterationmark": "\u303B", - "/verticalkanarepeatmark": "\u3031", - "/verticalkanarepeatmarklowerhalf": "\u3035", - "/verticalkanarepeatmarkupperhalf": "\u3033", - "/verticalkanarepeatwithvoicedsoundmark": "\u3032", - "/verticalkanarepeatwithvoicedsoundmarkupperhalf": "\u3034", - "/verticallineabovecmb": "\u030D", - "/verticallinebelowcmb": "\u0329", - "/verticallinelowmod": "\u02CC", - "/verticallinemod": "\u02C8", - "/verticalmalestroke": "\u26A8", - "/verticalsdbltrokearrowleft": "\u21FA", - "/verticalsdbltrokearrowleftright": "\u21FC", - "/verticalsdbltrokearrowright": "\u21FB", - "/verticalstrokearrowleft": "\u21F7", - "/verticalstrokearrowleftright": "\u21F9", - "/verticalstrokearrowright": "\u21F8", - "/vertlighthorzheavy": "\u253F", - "/vertlightleftheavy": "\u2525", - "/vertlightrightheavy": "\u251D", - "/vertsnghorzdbl": "\u256A", - "/vertsngleftdbl": "\u2561", - "/vertsngrightdbl": "\u255E", - "/verymuchgreater": "\u22D9", - "/verymuchless": "\u22D8", - "/vesta": "\u26B6", - "/vewarmenian": "\u057E", - "/vhook": "\u028B", - "/vibrationMode": "\u1F4F3", - "/videoCamera": "\u1F4F9", - "/videoGame": "\u1F3AE", - "/videocassette": "\u1F4FC", - "/viewdatasquare": "\u2317", - "/vikatakana": "\u30F8", - "/violin": "\u1F3BB", - "/viramabengali": "\u09CD", - "/viramadeva": "\u094D", - "/viramagujarati": "\u0ACD", - "/virgo": "\u264D", - "/visargabengali": "\u0983", - "/visargadeva": "\u0903", - "/visargagujarati": "\u0A83", - "/visigothicz": "\uA763", - "/vmonospace": "\uFF56", - "/voarmenian": "\u0578", - "/vodsquare": "\u1F1AC", - "/voicediterationhiragana": "\u309E", - "/voicediterationkatakana": "\u30FE", - "/voicedmarkkana": "\u309B", - "/voicedmarkkanahalfwidth": "\uFF9E", - 
"/voicingmod": "\u02EC", - "/vokatakana": "\u30FA", - "/volapukae": "\uA79B", - "/volapukoe": "\uA79D", - "/volapukue": "\uA79F", - "/volcano": "\u1F30B", - "/volleyball": "\u1F3D0", - "/vovermfullwidth": "\u33DE", - "/vowelVabove": "\u065A", - "/voweldotbelow": "\u065C", - "/vowelinvertedVabove": "\u065B", - "/vparen": "\u24B1", - "/vparenthesized": "\u24B1", - "/vrighthook": "\u2C71", - "/vssquare": "\u1F19A", - "/vtilde": "\u1E7D", - "/vturned": "\u028C", - "/vuhiragana": "\u3094", - "/vukatakana": "\u30F4", - "/vwelsh": "\u1EFD", - "/vy": "\uA761", - "/w": "\u0077", - "/wacirclekatakana": "\u32FB", - "/wacute": "\u1E83", - "/waekorean": "\u3159", - "/wahiragana": "\u308F", - "/wakatakana": "\u30EF", - "/wakatakanahalfwidth": "\uFF9C", - "/wakorean": "\u3158", - "/waningCrescentMoon": "\u1F318", - "/waningGibbousMoon": "\u1F316", - "/warning": "\u26A0", - "/wasmallhiragana": "\u308E", - "/wasmallkatakana": "\u30EE", - "/wastebasket": "\u1F5D1", - "/watch": "\u231A", - "/waterBuffalo": "\u1F403", - "/waterCloset": "\u1F6BE", - "/waterWave": "\u1F30A", - "/waterideographiccircled": "\u328C", - "/waterideographicparen": "\u322C", - "/watermelon": "\u1F349", - "/wattosquare": "\u3357", - "/wavedash": "\u301C", - "/wavingBlackFlag": "\u1F3F4", - "/wavingHandSign": "\u1F44B", - "/wavingWhiteFlag": "\u1F3F3", - "/wavydash": "\u3030", - "/wavyhamzabelow": "\u065F", - "/wavyline": "\u2307", - "/wavyunderscorevertical": "\uFE34", - "/waw": "\u0648", - "/waw.fina": "\uFEEE", - "/waw.isol": "\uFEED", - "/wawDigitThreeAbove": "\u0779", - "/wawDigitTwoAbove": "\u0778", - "/wawarabic": "\u0648", - "/wawdotabove": "\u06CF", - "/wawfinalarabic": "\uFEEE", - "/wawhamza": "\u0624", - "/wawhamza.fina": "\uFE86", - "/wawhamza.isol": "\uFE85", - "/wawhamzaabovearabic": "\u0624", - "/wawhamzaabovefinalarabic": "\uFE86", - "/wawhighhamza": "\u0676", - "/wawring": "\u06C4", - "/wawsmall": "\u06E5", - "/wawtwodotsabove": "\u06CA", - "/waxingCrescentMoon": "\u1F312", - 
"/waxingGibbousMoon": "\u1F314", - "/wbfullwidth": "\u33DD", - "/wbsquare": "\u33DD", - "/wcircle": "\u24E6", - "/wcircumflex": "\u0175", - "/wcsquare": "\u1F14F", - "/wcsquareblack": "\u1F18F", - "/wdieresis": "\u1E85", - "/wdot": "\u1E87", - "/wdotaccent": "\u1E87", - "/wdotbelow": "\u1E89", - "/wearyCatFace": "\u1F640", - "/wearyFace": "\u1F629", - "/wecirclekatakana": "\u32FD", - "/wecyr": "\u051D", - "/wedding": "\u1F492", - "/wehiragana": "\u3091", - "/weierstrass": "\u2118", - "/weightLifter": "\u1F3CB", - "/wekatakana": "\u30F1", - "/wekorean": "\u315E", - "/weokorean": "\u315D", - "/westsyriaccross": "\u2670", - "/wgrave": "\u1E81", - "/whale": "\u1F40B", - "/wheelchair": "\u267F", - "/wheelofdharma": "\u2638", - "/whiteDownPointingBackhandIndex": "\u1F447", - "/whiteDownPointingLeftHandIndex": "\u1F597", - "/whiteFlower": "\u1F4AE", - "/whiteHardShellFloppyDisk": "\u1F5AB", - "/whiteLatinCross": "\u1F546", - "/whiteLeftPointingBackhandIndex": "\u1F448", - "/whitePennant": "\u1F3F1", - "/whiteRightPointingBackhandIndex": "\u1F449", - "/whiteSquareButton": "\u1F533", - "/whiteSun": "\u1F323", - "/whiteSunBehindCloud": "\u1F325", - "/whiteSunBehindCloudRain": "\u1F326", - "/whiteSunSmallCloud": "\u1F324", - "/whiteTouchtoneTelephone": "\u1F57E", - "/whiteUpPointingBackhandIndex": "\u1F446", - "/whitearrowdown": "\u21E9", - "/whitearrowfromwallright": "\u21F0", - "/whitearrowleft": "\u21E6", - "/whitearrowonpedestalup": "\u21EB", - "/whitearrowright": "\u21E8", - "/whitearrowup": "\u21E7", - "/whitearrowupdown": "\u21F3", - "/whitearrowupfrombar": "\u21EA", - "/whitebullet": "\u25E6", - "/whitecircle": "\u25CB", - "/whitecircleinverse": "\u25D9", - "/whitecornerbracketleft": "\u300E", - "/whitecornerbracketleftvertical": "\uFE43", - "/whitecornerbracketright": "\u300F", - "/whitecornerbracketrightvertical": "\uFE44", - "/whitedblarrowonpedestalup": "\u21EF", - "/whitedblarrowup": "\u21EE", - "/whitediamond": "\u25C7", - 
"/whitediamondcontainingblacksmalldiamond": "\u25C8", - "/whitedownpointingsmalltriangle": "\u25BF", - "/whitedownpointingtriangle": "\u25BD", - "/whiteleftpointingsmalltriangle": "\u25C3", - "/whiteleftpointingtriangle": "\u25C1", - "/whitelenticularbracketleft": "\u3016", - "/whitelenticularbracketright": "\u3017", - "/whiterightpointingsmalltriangle": "\u25B9", - "/whiterightpointingtriangle": "\u25B7", - "/whitesesamedot": "\uFE46", - "/whitesmallsquare": "\u25AB", - "/whitesmilingface": "\u263A", - "/whitesquare": "\u25A1", - "/whitesquarebracketleft": "\u301A", - "/whitesquarebracketright": "\u301B", - "/whitestar": "\u2606", - "/whitetelephone": "\u260F", - "/whitetortoiseshellbracketleft": "\u3018", - "/whitetortoiseshellbracketright": "\u3019", - "/whiteuppointingsmalltriangle": "\u25B5", - "/whiteuppointingtriangle": "\u25B3", - "/whook": "\u2C73", - "/wicirclekatakana": "\u32FC", - "/wigglylinevertical": "\u2E3E", - "/wignyan": "\uA983", - "/wihiragana": "\u3090", - "/wikatakana": "\u30F0", - "/wikorean": "\u315F", - "/windBlowingFace": "\u1F32C", - "/windChime": "\u1F390", - "/windupada": "\uA9C6", - "/wineGlass": "\u1F377", - "/winkingFace": "\u1F609", - "/wiredKeyboard": "\u1F5AE", - "/wmonospace": "\uFF57", - "/wocirclekatakana": "\u32FE", - "/wohiragana": "\u3092", - "/wokatakana": "\u30F2", - "/wokatakanahalfwidth": "\uFF66", - "/wolfFace": "\u1F43A", - "/woman": "\u1F469", - "/womanBunnyEars": "\u1F46F", - "/womansBoots": "\u1F462", - "/womansClothes": "\u1F45A", - "/womansHat": "\u1F452", - "/womansSandal": "\u1F461", - "/womens": "\u1F6BA", - "/won": "\u20A9", - "/wonmonospace": "\uFFE6", - "/woodideographiccircled": "\u328D", - "/woodideographicparen": "\u322D", - "/wordjoiner": "\u2060", - "/wordseparatormiddledot": "\u2E31", - "/worldMap": "\u1F5FA", - "/worriedFace": "\u1F61F", - "/wowaenthai": "\u0E27", - "/wparen": "\u24B2", - "/wparenthesized": "\u24B2", - "/wrappedPresent": "\u1F381", - "/wreathproduct": "\u2240", - "/wrench": "\u1F527", 
- "/wring": "\u1E98", - "/wsuperior": "\u02B7", - "/wsupmod": "\u02B7", - "/wturned": "\u028D", - "/wulumelikvowel": "\uA9B7", - "/wuluvowel": "\uA9B6", - "/wynn": "\u01BF", - "/x": "\u0078", - "/x.inferior": "\u2093", - "/xabovecmb": "\u033D", - "/xatailcyr": "\u04B3", - "/xbopomofo": "\u3112", - "/xcircle": "\u24E7", - "/xdieresis": "\u1E8D", - "/xdot": "\u1E8B", - "/xdotaccent": "\u1E8B", - "/xeharmenian": "\u056D", - "/xi": "\u03BE", - "/xmonospace": "\uFF58", - "/xor": "\u22BB", - "/xparen": "\u24B3", - "/xparenthesized": "\u24B3", - "/xsuperior": "\u02E3", - "/xsupmod": "\u02E3", - "/y": "\u0079", - "/yaadosquare": "\u334E", - "/yaarusquare": "\u334F", - "/yabengali": "\u09AF", - "/yacirclekatakana": "\u32F3", - "/yacute": "\u00FD", - "/yacyr": "\u044F", - "/yadeva": "\u092F", - "/yaecyr": "\u0519", - "/yaekorean": "\u3152", - "/yagujarati": "\u0AAF", - "/yagurmukhi": "\u0A2F", - "/yahiragana": "\u3084", - "/yakatakana": "\u30E4", - "/yakatakanahalfwidth": "\uFF94", - "/yakorean": "\u3151", - "/yamakkanthai": "\u0E4E", - "/yangtonemod": "\u02EB", - "/yasmallhiragana": "\u3083", - "/yasmallkatakana": "\u30E3", - "/yasmallkatakanahalfwidth": "\uFF6C", - "/yatcyr": "\u0463", - "/yatcyrillic": "\u0463", - "/ycircle": "\u24E8", - "/ycircumflex": "\u0177", - "/ydieresis": "\u00FF", - "/ydot": "\u1E8F", - "/ydotaccent": "\u1E8F", - "/ydotbelow": "\u1EF5", - "/yeh": "\u064A", - "/yeh.fina": "\uFEF2", - "/yeh.init": "\uFEF3", - "/yeh.init_alefmaksura.fina": "\uFC59", - "/yeh.init_hah.fina": "\uFC56", - "/yeh.init_hah.medi": "\uFCDB", - "/yeh.init_hamzaabove.medi_ae.fina": "\uFBEC", - "/yeh.init_hamzaabove.medi_alef.fina": "\uFBEA", - "/yeh.init_hamzaabove.medi_alefmaksura.fina": "\uFC03", - "/yeh.init_hamzaabove.medi_e.fina": "\uFBF6", - "/yeh.init_hamzaabove.medi_e.medi": "\uFBF8", - "/yeh.init_hamzaabove.medi_hah.fina": "\uFC01", - "/yeh.init_hamzaabove.medi_hah.medi": "\uFC98", - "/yeh.init_hamzaabove.medi_heh.medi": "\uFC9B", - 
"/yeh.init_hamzaabove.medi_jeem.fina": "\uFC00", - "/yeh.init_hamzaabove.medi_jeem.medi": "\uFC97", - "/yeh.init_hamzaabove.medi_khah.medi": "\uFC99", - "/yeh.init_hamzaabove.medi_meem.fina": "\uFC02", - "/yeh.init_hamzaabove.medi_meem.medi": "\uFC9A", - "/yeh.init_hamzaabove.medi_oe.fina": "\uFBF2", - "/yeh.init_hamzaabove.medi_u.fina": "\uFBF0", - "/yeh.init_hamzaabove.medi_waw.fina": "\uFBEE", - "/yeh.init_hamzaabove.medi_yeh.fina": "\uFC04", - "/yeh.init_hamzaabove.medi_yu.fina": "\uFBF4", - "/yeh.init_heh.medi": "\uFCDE", - "/yeh.init_jeem.fina": "\uFC55", - "/yeh.init_jeem.medi": "\uFCDA", - "/yeh.init_khah.fina": "\uFC57", - "/yeh.init_khah.medi": "\uFCDC", - "/yeh.init_meem.fina": "\uFC58", - "/yeh.init_meem.medi": "\uFCDD", - "/yeh.init_meem.medi_meem.medi": "\uFD9D", - "/yeh.init_yeh.fina": "\uFC5A", - "/yeh.isol": "\uFEF1", - "/yeh.medi": "\uFEF4", - "/yeh.medi_alefmaksura.fina": "\uFC95", - "/yeh.medi_hah.medi_yeh.fina": "\uFDAE", - "/yeh.medi_hamzaabove.medi_ae.fina": "\uFBED", - "/yeh.medi_hamzaabove.medi_alef.fina": "\uFBEB", - "/yeh.medi_hamzaabove.medi_alefmaksura.fina": "\uFC68", - "/yeh.medi_hamzaabove.medi_e.fina": "\uFBF7", - "/yeh.medi_hamzaabove.medi_heh.medi": "\uFCE0", - "/yeh.medi_hamzaabove.medi_meem.fina": "\uFC66", - "/yeh.medi_hamzaabove.medi_meem.medi": "\uFCDF", - "/yeh.medi_hamzaabove.medi_noon.fina": "\uFC67", - "/yeh.medi_hamzaabove.medi_oe.fina": "\uFBF3", - "/yeh.medi_hamzaabove.medi_reh.fina": "\uFC64", - "/yeh.medi_hamzaabove.medi_u.fina": "\uFBF1", - "/yeh.medi_hamzaabove.medi_waw.fina": "\uFBEF", - "/yeh.medi_hamzaabove.medi_yeh.fina": "\uFC69", - "/yeh.medi_hamzaabove.medi_yu.fina": "\uFBF5", - "/yeh.medi_hamzaabove.medi_zain.fina": "\uFC65", - "/yeh.medi_heh.medi": "\uFCF1", - "/yeh.medi_jeem.medi_yeh.fina": "\uFDAF", - "/yeh.medi_meem.fina": "\uFC93", - "/yeh.medi_meem.medi": "\uFCF0", - "/yeh.medi_meem.medi_meem.fina": "\uFD9C", - "/yeh.medi_meem.medi_yeh.fina": "\uFDB0", - "/yeh.medi_noon.fina": "\uFC94", - 
"/yeh.medi_reh.fina": "\uFC91", - "/yeh.medi_yeh.fina": "\uFC96", - "/yeh.medi_zain.fina": "\uFC92", - "/yehBarreeDigitThreeAbove": "\u077B", - "/yehBarreeDigitTwoAbove": "\u077A", - "/yehVabove": "\u06CE", - "/yehabove": "\u06E7", - "/yeharabic": "\u064A", - "/yehbarree": "\u06D2", - "/yehbarree.fina": "\uFBAF", - "/yehbarree.isol": "\uFBAE", - "/yehbarreearabic": "\u06D2", - "/yehbarreefinalarabic": "\uFBAF", - "/yehbarreehamza": "\u06D3", - "/yehbarreehamza.fina": "\uFBB1", - "/yehbarreehamza.isol": "\uFBB0", - "/yehfarsi": "\u06CC", - "/yehfarsi.fina": "\uFBFD", - "/yehfarsi.init": "\uFBFE", - "/yehfarsi.isol": "\uFBFC", - "/yehfarsi.medi": "\uFBFF", - "/yehfarsiinvertedV": "\u063D", - "/yehfarsithreedotsabove": "\u063F", - "/yehfarsitwodotsabove": "\u063E", - "/yehfinalarabic": "\uFEF2", - "/yehhamza": "\u0626", - "/yehhamza.fina": "\uFE8A", - "/yehhamza.init": "\uFE8B", - "/yehhamza.isol": "\uFE89", - "/yehhamza.medi": "\uFE8C", - "/yehhamzaabovearabic": "\u0626", - "/yehhamzaabovefinalarabic": "\uFE8A", - "/yehhamzaaboveinitialarabic": "\uFE8B", - "/yehhamzaabovemedialarabic": "\uFE8C", - "/yehhighhamza": "\u0678", - "/yehinitialarabic": "\uFEF3", - "/yehmedialarabic": "\uFEF4", - "/yehmeeminitialarabic": "\uFCDD", - "/yehmeemisolatedarabic": "\uFC58", - "/yehnoonfinalarabic": "\uFC94", - "/yehsmall": "\u06E6", - "/yehtail": "\u06CD", - "/yehthreedotsbelow": "\u06D1", - "/yehthreedotsbelowarabic": "\u06D1", - "/yekorean": "\u3156", - "/yellowHeart": "\u1F49B", - "/yen": "\u00A5", - "/yenmonospace": "\uFFE5", - "/yeokorean": "\u3155", - "/yeorinhieuhkorean": "\u3186", - "/yerachBenYomo:hb": "\u05AA", - "/yerahbenyomohebrew": "\u05AA", - "/yerahbenyomolefthebrew": "\u05AA", - "/yericyrillic": "\u044B", - "/yerudieresiscyrillic": "\u04F9", - "/yesieungkorean": "\u3181", - "/yesieungpansioskorean": "\u3183", - "/yesieungsioskorean": "\u3182", - "/yetiv:hb": "\u059A", - "/yetivhebrew": "\u059A", - "/ygrave": "\u1EF3", - "/yhoi": "\u1EF7", - "/yhook": "\u01B4", - 
"/yhookabove": "\u1EF7", - "/yiarmenian": "\u0575", - "/yicyrillic": "\u0457", - "/yikorean": "\u3162", - "/yintonemod": "\u02EA", - "/yinyang": "\u262F", - "/yiwnarmenian": "\u0582", - "/ylongcyr": "\u044B", - "/ylongdieresiscyr": "\u04F9", - "/yloop": "\u1EFF", - "/ymacron": "\u0233", - "/ymonospace": "\uFF59", - "/yocirclekatakana": "\u32F5", - "/yod": "\u05D9", - "/yod:hb": "\u05D9", - "/yod_yod:hb": "\u05F2", - "/yod_yod_patah:hb": "\uFB1F", - "/yoddagesh": "\uFB39", - "/yoddageshhebrew": "\uFB39", - "/yodhebrew": "\u05D9", - "/yodwithdagesh:hb": "\uFB39", - "/yodwithhiriq:hb": "\uFB1D", - "/yodyodhebrew": "\u05F2", - "/yodyodpatahhebrew": "\uFB1F", - "/yogh": "\u021D", - "/yohiragana": "\u3088", - "/yoikorean": "\u3189", - "/yokatakana": "\u30E8", - "/yokatakanahalfwidth": "\uFF96", - "/yokorean": "\u315B", - "/yosmallhiragana": "\u3087", - "/yosmallkatakana": "\u30E7", - "/yosmallkatakanahalfwidth": "\uFF6E", - "/yot": "\u03F3", - "/yotgreek": "\u03F3", - "/yoyaekorean": "\u3188", - "/yoyakorean": "\u3187", - "/yoyakthai": "\u0E22", - "/yoyingthai": "\u0E0D", - "/yparen": "\u24B4", - "/yparenthesized": "\u24B4", - "/ypogegrammeni": "\u037A", - "/ypogegrammenigreekcmb": "\u0345", - "/yr": "\u01A6", - "/yring": "\u1E99", - "/ystroke": "\u024F", - "/ysuperior": "\u02B8", - "/ysupmod": "\u02B8", - "/ytilde": "\u1EF9", - "/yturned": "\u028E", - "/yu.fina": "\uFBDC", - "/yu.isol": "\uFBDB", - "/yuansquare": "\u3350", - "/yucirclekatakana": "\u32F4", - "/yucyr": "\u044E", - "/yuhiragana": "\u3086", - "/yuikorean": "\u318C", - "/yukatakana": "\u30E6", - "/yukatakanahalfwidth": "\uFF95", - "/yukirghiz": "\u06C9", - "/yukirghiz.fina": "\uFBE3", - "/yukirghiz.isol": "\uFBE2", - "/yukorean": "\u3160", - "/yukrcyr": "\u0457", - "/yusbigcyr": "\u046B", - "/yusbigcyrillic": "\u046B", - "/yusbigiotifiedcyr": "\u046D", - "/yusbigiotifiedcyrillic": "\u046D", - "/yuslittlecyr": "\u0467", - "/yuslittlecyrillic": "\u0467", - "/yuslittleiotifiedcyr": "\u0469", - 
"/yuslittleiotifiedcyrillic": "\u0469", - "/yusmallhiragana": "\u3085", - "/yusmallkatakana": "\u30E5", - "/yusmallkatakanahalfwidth": "\uFF6D", - "/yuyekorean": "\u318B", - "/yuyeokorean": "\u318A", - "/yyabengali": "\u09DF", - "/yyadeva": "\u095F", - "/z": "\u007A", - "/zaarmenian": "\u0566", - "/zacute": "\u017A", - "/zadeva": "\u095B", - "/zagurmukhi": "\u0A5B", - "/zah": "\u0638", - "/zah.fina": "\uFEC6", - "/zah.init": "\uFEC7", - "/zah.init_meem.fina": "\uFC28", - "/zah.init_meem.medi": "\uFCB9", - "/zah.isol": "\uFEC5", - "/zah.medi": "\uFEC8", - "/zah.medi_meem.medi": "\uFD3B", - "/zaharabic": "\u0638", - "/zahfinalarabic": "\uFEC6", - "/zahinitialarabic": "\uFEC7", - "/zahiragana": "\u3056", - "/zahmedialarabic": "\uFEC8", - "/zain": "\u0632", - "/zain.fina": "\uFEB0", - "/zain.isol": "\uFEAF", - "/zainabove": "\u0617", - "/zainarabic": "\u0632", - "/zainfinalarabic": "\uFEB0", - "/zakatakana": "\u30B6", - "/zaqefGadol:hb": "\u0595", - "/zaqefQatan:hb": "\u0594", - "/zaqefgadolhebrew": "\u0595", - "/zaqefqatanhebrew": "\u0594", - "/zarqa:hb": "\u0598", - "/zarqahebrew": "\u0598", - "/zayin": "\u05D6", - "/zayin:hb": "\u05D6", - "/zayindagesh": "\uFB36", - "/zayindageshhebrew": "\uFB36", - "/zayinhebrew": "\u05D6", - "/zayinwithdagesh:hb": "\uFB36", - "/zbopomofo": "\u3117", - "/zcaron": "\u017E", - "/zcircle": "\u24E9", - "/zcircumflex": "\u1E91", - "/zcurl": "\u0291", - "/zdescender": "\u2C6C", - "/zdot": "\u017C", - "/zdotaccent": "\u017C", - "/zdotbelow": "\u1E93", - "/zecyr": "\u0437", - "/zecyrillic": "\u0437", - "/zedescendercyrillic": "\u0499", - "/zedieresiscyr": "\u04DF", - "/zedieresiscyrillic": "\u04DF", - "/zehiragana": "\u305C", - "/zekatakana": "\u30BC", - "/zero": "\u0030", - "/zero.inferior": "\u2080", - "/zero.superior": "\u2070", - "/zeroarabic": "\u0660", - "/zerobengali": "\u09E6", - "/zerocircle": "\u24EA", - "/zerocircleblack": "\u24FF", - "/zerocomma": "\u1F101", - "/zerodeva": "\u0966", - "/zerofar": "\u06F0", - "/zerofullstop": 
"\u1F100", - "/zerogujarati": "\u0AE6", - "/zerogurmukhi": "\u0A66", - "/zerohackarabic": "\u0660", - "/zeroinferior": "\u2080", - "/zeromonospace": "\uFF10", - "/zerooldstyle": "\uF730", - "/zeropersian": "\u06F0", - "/zerosquareabove": "\u06E0", - "/zerosuperior": "\u2070", - "/zerothai": "\u0E50", - "/zerothirds": "\u2189", - "/zerowidthjoiner": "\uFEFF", - "/zerowidthnobreakspace": "\uFEFF", - "/zerowidthnonjoiner": "\u200C", - "/zerowidthspace": "\u200B", - "/zeta": "\u03B6", - "/zetailcyr": "\u0499", - "/zhbopomofo": "\u3113", - "/zhearmenian": "\u056A", - "/zhebrevecyr": "\u04C2", - "/zhebrevecyrillic": "\u04C2", - "/zhecyr": "\u0436", - "/zhecyrillic": "\u0436", - "/zhedescendercyrillic": "\u0497", - "/zhedieresiscyr": "\u04DD", - "/zhedieresiscyrillic": "\u04DD", - "/zhetailcyr": "\u0497", - "/zhook": "\u0225", - "/zihiragana": "\u3058", - "/zikatakana": "\u30B8", - "/zildefunc": "\u236C", - "/zinorhebrew": "\u05AE", - "/zjekomicyr": "\u0505", - "/zlinebelow": "\u1E95", - "/zmonospace": "\uFF5A", - "/znotationbagmembership": "\u22FF", - "/zohiragana": "\u305E", - "/zokatakana": "\u30BE", - "/zparen": "\u24B5", - "/zparenthesized": "\u24B5", - "/zretroflex": "\u0290", - "/zretroflexhook": "\u0290", - "/zstroke": "\u01B6", - "/zswashtail": "\u0240", - "/zuhiragana": "\u305A", - "/zukatakana": "\u30BA", - "/zwarakay": "\u0659", -} - - -def _complete() -> None: - global adobe_glyphs - for i in range(256): - adobe_glyphs[f"/a{i}"] = chr(i) - adobe_glyphs["/.notdef"] = "β–‘" - - -_complete() diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/pdfdoc.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/pdfdoc.py deleted file mode 100644 index 306357a5..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/pdfdoc.py +++ /dev/null @@ -1,264 +0,0 @@ -# PDFDocEncoding Character Set: Table D.2 of PDF Reference 1.7 -# C.1 Predefined encodings sorted by character name of another PDF reference -# Some indices have '\u0000' although they 
should have something else: -# 22: should be '\u0017' -_pdfdoc_encoding = [ - "\u0000", - "\u0001", - "\u0002", - "\u0003", - "\u0004", - "\u0005", - "\u0006", - "\u0007", # 0 - 7 - "\u0008", - "\u0009", - "\u000a", - "\u000b", - "\u000c", - "\u000d", - "\u000e", - "\u000f", # 8 - 15 - "\u0010", - "\u0011", - "\u0012", - "\u0013", - "\u0014", - "\u0015", - "\u0000", - "\u0017", # 16 - 23 - "\u02d8", - "\u02c7", - "\u02c6", - "\u02d9", - "\u02dd", - "\u02db", - "\u02da", - "\u02dc", # 24 - 31 - "\u0020", - "\u0021", - "\u0022", - "\u0023", - "\u0024", - "\u0025", - "\u0026", - "\u0027", # 32 - 39 - "\u0028", - "\u0029", - "\u002a", - "\u002b", - "\u002c", - "\u002d", - "\u002e", - "\u002f", # 40 - 47 - "\u0030", - "\u0031", - "\u0032", - "\u0033", - "\u0034", - "\u0035", - "\u0036", - "\u0037", # 48 - 55 - "\u0038", - "\u0039", - "\u003a", - "\u003b", - "\u003c", - "\u003d", - "\u003e", - "\u003f", # 56 - 63 - "\u0040", - "\u0041", - "\u0042", - "\u0043", - "\u0044", - "\u0045", - "\u0046", - "\u0047", # 64 - 71 - "\u0048", - "\u0049", - "\u004a", - "\u004b", - "\u004c", - "\u004d", - "\u004e", - "\u004f", # 72 - 79 - "\u0050", - "\u0051", - "\u0052", - "\u0053", - "\u0054", - "\u0055", - "\u0056", - "\u0057", # 80 - 87 - "\u0058", - "\u0059", - "\u005a", - "\u005b", - "\u005c", - "\u005d", - "\u005e", - "\u005f", # 88 - 95 - "\u0060", - "\u0061", - "\u0062", - "\u0063", - "\u0064", - "\u0065", - "\u0066", - "\u0067", # 96 - 103 - "\u0068", - "\u0069", - "\u006a", - "\u006b", - "\u006c", - "\u006d", - "\u006e", - "\u006f", # 104 - 111 - "\u0070", - "\u0071", - "\u0072", - "\u0073", - "\u0074", - "\u0075", - "\u0076", - "\u0077", # 112 - 119 - "\u0078", - "\u0079", - "\u007a", - "\u007b", - "\u007c", - "\u007d", - "\u007e", - "\u0000", # 120 - 127 - "\u2022", - "\u2020", - "\u2021", - "\u2026", - "\u2014", - "\u2013", - "\u0192", - "\u2044", # 128 - 135 - "\u2039", - "\u203a", - "\u2212", - "\u2030", - "\u201e", - "\u201c", - "\u201d", - "\u2018", # 136 - 143 - 
"\u2019", - "\u201a", - "\u2122", - "\ufb01", - "\ufb02", - "\u0141", - "\u0152", - "\u0160", # 144 - 151 - "\u0178", - "\u017d", - "\u0131", - "\u0142", - "\u0153", - "\u0161", - "\u017e", - "\u0000", # 152 - 159 - "\u20ac", - "\u00a1", - "\u00a2", - "\u00a3", - "\u00a4", - "\u00a5", - "\u00a6", - "\u00a7", # 160 - 167 - "\u00a8", - "\u00a9", - "\u00aa", - "\u00ab", - "\u00ac", - "\u0000", - "\u00ae", - "\u00af", # 168 - 175 - "\u00b0", - "\u00b1", - "\u00b2", - "\u00b3", - "\u00b4", - "\u00b5", - "\u00b6", - "\u00b7", # 176 - 183 - "\u00b8", - "\u00b9", - "\u00ba", - "\u00bb", - "\u00bc", - "\u00bd", - "\u00be", - "\u00bf", # 184 - 191 - "\u00c0", - "\u00c1", - "\u00c2", - "\u00c3", - "\u00c4", - "\u00c5", - "\u00c6", - "\u00c7", # 192 - 199 - "\u00c8", - "\u00c9", - "\u00ca", - "\u00cb", - "\u00cc", - "\u00cd", - "\u00ce", - "\u00cf", # 200 - 207 - "\u00d0", - "\u00d1", - "\u00d2", - "\u00d3", - "\u00d4", - "\u00d5", - "\u00d6", - "\u00d7", # 208 - 215 - "\u00d8", - "\u00d9", - "\u00da", - "\u00db", - "\u00dc", - "\u00dd", - "\u00de", - "\u00df", # 216 - 223 - "\u00e0", - "\u00e1", - "\u00e2", - "\u00e3", - "\u00e4", - "\u00e5", - "\u00e6", - "\u00e7", # 224 - 231 - "\u00e8", - "\u00e9", - "\u00ea", - "\u00eb", - "\u00ec", - "\u00ed", - "\u00ee", - "\u00ef", # 232 - 239 - "\u00f0", - "\u00f1", - "\u00f2", - "\u00f3", - "\u00f4", - "\u00f5", - "\u00f6", - "\u00f7", # 240 - 247 - "\u00f8", - "\u00f9", - "\u00fa", - "\u00fb", - "\u00fc", - "\u00fd", - "\u00fe", - "\u00ff", # 248 - 255 -] - -assert len(_pdfdoc_encoding) == 256 diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/std.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/std.py deleted file mode 100644 index a6057ff3..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/std.py +++ /dev/null @@ -1,258 +0,0 @@ -_std_encoding = [ - "\x00", - "\x01", - "\x02", - "\x03", - "\x04", - "\x05", - "\x06", - "\x07", - "\x08", - "\t", - "\n", - "\x0b", - "\x0c", - "\r", - "\x0e", - 
"\x0f", - "\x10", - "\x11", - "\x12", - "\x13", - "\x14", - "\x15", - "\x16", - "\x17", - "\x18", - "\x19", - "\x1a", - "\x1b", - "\x1c", - "\x1d", - "\x1e", - "\x1f", - " ", - "!", - '"', - "#", - "$", - "%", - "&", - "’", - "(", - ")", - "*", - "+", - ",", - "-", - ".", - "/", - "0", - "1", - "2", - "3", - "4", - "5", - "6", - "7", - "8", - "9", - ":", - ";", - "<", - "=", - ">", - "?", - "@", - "A", - "B", - "C", - "D", - "E", - "F", - "G", - "H", - "I", - "J", - "K", - "L", - "M", - "N", - "O", - "P", - "Q", - "R", - "S", - "T", - "U", - "V", - "W", - "X", - "Y", - "Z", - "[", - "\\", - "]", - "^", - "_", - "β€˜", - "a", - "b", - "c", - "d", - "e", - "f", - "g", - "h", - "i", - "j", - "k", - "l", - "m", - "n", - "o", - "p", - "q", - "r", - "s", - "t", - "u", - "v", - "w", - "x", - "y", - "z", - "{", - "|", - "}", - "~", - "\x7f", - "\x80", - "\x81", - "\x82", - "\x83", - "\x84", - "\x85", - "\x86", - "\x87", - "\x88", - "\x89", - "\x8a", - "\x8b", - "\x8c", - "\x8d", - "\x8e", - "\x8f", - "\x90", - "\x91", - "\x92", - "\x93", - "\x94", - "\x95", - "\x96", - "\x97", - "\x98", - "\x99", - "\x9a", - "\x9b", - "\x9c", - "\x9d", - "\x9e", - "\x9f", - "\xa0", - "Β‘", - "Β’", - "Β£", - "⁄", - "Β₯", - "Ζ’", - "Β§", - "Β€", - "'", - "β€œ", - "Β«", - "β€Ή", - "β€Ί", - "fi", - "fl", - "Β°", - "–", - "†", - "‑", - "Β·", - "Β΅", - "ΒΆ", - "β€’", - "β€š", - "β€ž", - "”", - "Β»", - "…", - "‰", - "ΒΎ", - "ΒΏ", - "Γ€", - "`", - "Β΄", - "Λ†", - "˜", - "Β―", - "˘", - "Λ™", - "Β¨", - "Γ‰", - "˚", - "ΒΈ", - "Ì", - "˝", - "Λ›", - "Λ‡", - "β€”", - "Γ‘", - "Γ’", - "Γ“", - "Γ”", - "Γ•", - "Γ–", - "Γ—", - "Ø", - "Γ™", - "Ú", - "Γ›", - "Ü", - "Ý", - "Þ", - "ß", - "Γ ", - "Γ†", - "Γ’", - "Βͺ", - "Γ€", - "Γ₯", - "Γ¦", - "Γ§", - "Ł", - "Ø", - "Ε’", - "ΒΊ", - "Γ¬", - "Γ­", - "Γ", - "Γ―", - "Γ°", - "Γ¦", - "Γ²", - "Γ³", - "Γ΄", - "Δ±", - "ΓΆ", - "Γ·", - "Ε‚", - "ΓΈ", - "Ε“", - "ß", - "ΓΌ", - "Γ½", - "ΓΎ", - "ΓΏ", -] diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/symbol.py 
b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/symbol.py deleted file mode 100644 index 4c0d680f..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/symbol.py +++ /dev/null @@ -1,260 +0,0 @@ -# manually generated from https://www.unicode.org/Public/MAPPINGS/VENDORS/ADOBE/symbol.txt -_symbol_encoding = [ - "\u0000", - "\u0001", - "\u0002", - "\u0003", - "\u0004", - "\u0005", - "\u0006", - "\u0007", - "\u0008", - "\u0009", - "\u000A", - "\u000B", - "\u000C", - "\u000D", - "\u000E", - "\u000F", - "\u0010", - "\u0011", - "\u0012", - "\u0013", - "\u0014", - "\u0015", - "\u0016", - "\u0017", - "\u0018", - "\u0019", - "\u001A", - "\u001B", - "\u001C", - "\u001D", - "\u001E", - "\u001F", - "\u0020", - "\u0021", - "\u2200", - "\u0023", - "\u2203", - "\u0025", - "\u0026", - "\u220B", - "\u0028", - "\u0029", - "\u2217", - "\u002B", - "\u002C", - "\u2212", - "\u002E", - "\u002F", - "\u0030", - "\u0031", - "\u0032", - "\u0033", - "\u0034", - "\u0035", - "\u0036", - "\u0037", - "\u0038", - "\u0039", - "\u003A", - "\u003B", - "\u003C", - "\u003D", - "\u003E", - "\u003F", - "\u2245", - "\u0391", - "\u0392", - "\u03A7", - "\u0394", - "\u0395", - "\u03A6", - "\u0393", - "\u0397", - "\u0399", - "\u03D1", - "\u039A", - "\u039B", - "\u039C", - "\u039D", - "\u039F", - "\u03A0", - "\u0398", - "\u03A1", - "\u03A3", - "\u03A4", - "\u03A5", - "\u03C2", - "\u03A9", - "\u039E", - "\u03A8", - "\u0396", - "\u005B", - "\u2234", - "\u005D", - "\u22A5", - "\u005F", - "\uF8E5", - "\u03B1", - "\u03B2", - "\u03C7", - "\u03B4", - "\u03B5", - "\u03C6", - "\u03B3", - "\u03B7", - "\u03B9", - "\u03D5", - "\u03BA", - "\u03BB", - "\u00B5", - "\u03BD", - "\u03BF", - "\u03C0", - "\u03B8", - "\u03C1", - "\u03C3", - "\u03C4", - "\u03C5", - "\u03D6", - "\u03C9", - "\u03BE", - "\u03C8", - "\u03B6", - "\u007B", - "\u007C", - "\u007D", - "\u223C", - "\u007F", - "\u0080", - "\u0081", - "\u0082", - "\u0083", - "\u0084", - "\u0085", - "\u0086", - "\u0087", - "\u0088", - "\u0089", - "\u008A", - 
"\u008B", - "\u008C", - "\u008D", - "\u008E", - "\u008F", - "\u0090", - "\u0091", - "\u0092", - "\u0093", - "\u0094", - "\u0095", - "\u0096", - "\u0097", - "\u0098", - "\u0099", - "\u009A", - "\u009B", - "\u009C", - "\u009D", - "\u009E", - "\u009F", - "\u20AC", - "\u03D2", - "\u2032", - "\u2264", - "\u2044", - "\u221E", - "\u0192", - "\u2663", - "\u2666", - "\u2665", - "\u2660", - "\u2194", - "\u2190", - "\u2191", - "\u2192", - "\u2193", - "\u00B0", - "\u00B1", - "\u2033", - "\u2265", - "\u00D7", - "\u221D", - "\u2202", - "\u2022", - "\u00F7", - "\u2260", - "\u2261", - "\u2248", - "\u2026", - "\uF8E6", - "\uF8E7", - "\u21B5", - "\u2135", - "\u2111", - "\u211C", - "\u2118", - "\u2297", - "\u2295", - "\u2205", - "\u2229", - "\u222A", - "\u2283", - "\u2287", - "\u2284", - "\u2282", - "\u2286", - "\u2208", - "\u2209", - "\u2220", - "\u2207", - "\uF6DA", - "\uF6D9", - "\uF6DB", - "\u220F", - "\u221A", - "\u22C5", - "\u00AC", - "\u2227", - "\u2228", - "\u21D4", - "\u21D0", - "\u21D1", - "\u21D2", - "\u21D3", - "\u25CA", - "\u2329", - "\uF8E8", - "\uF8E9", - "\uF8EA", - "\u2211", - "\uF8EB", - "\uF8EC", - "\uF8ED", - "\uF8EE", - "\uF8EF", - "\uF8F0", - "\uF8F1", - "\uF8F2", - "\uF8F3", - "\uF8F4", - "\u00F0", - "\u232A", - "\u222B", - "\u2320", - "\uF8F5", - "\u2321", - "\uF8F6", - "\uF8F7", - "\uF8F8", - "\uF8F9", - "\uF8FA", - "\uF8FB", - "\uF8FC", - "\uF8FD", - "\uF8FE", - "\u00FF", -] -assert len(_symbol_encoding) == 256 diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/zapfding.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/zapfding.py deleted file mode 100644 index 9b6cdbcc..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_codecs/zapfding.py +++ /dev/null @@ -1,261 +0,0 @@ -# manually generated from https://www.unicode.org/Public/MAPPINGS/VENDORS/ADOBE/zdingbat.txt - -_zapfding_encoding = [ - "\u0000", - "\u0001", - "\u0002", - "\u0003", - "\u0004", - "\u0005", - "\u0006", - "\u0007", - "\u0008", - "\u0009", - "\u000A", - 
"\u000B", - "\u000C", - "\u000D", - "\u000E", - "\u000F", - "\u0010", - "\u0011", - "\u0012", - "\u0013", - "\u0014", - "\u0015", - "\u0016", - "\u0017", - "\u0018", - "\u0019", - "\u001A", - "\u001B", - "\u001C", - "\u001D", - "\u001E", - "\u001F", - "\u0020", - "\u2701", - "\u2702", - "\u2703", - "\u2704", - "\u260E", - "\u2706", - "\u2707", - "\u2708", - "\u2709", - "\u261B", - "\u261E", - "\u270C", - "\u270D", - "\u270E", - "\u270F", - "\u2710", - "\u2711", - "\u2712", - "\u2713", - "\u2714", - "\u2715", - "\u2716", - "\u2717", - "\u2718", - "\u2719", - "\u271A", - "\u271B", - "\u271C", - "\u271D", - "\u271E", - "\u271F", - "\u2720", - "\u2721", - "\u2722", - "\u2723", - "\u2724", - "\u2725", - "\u2726", - "\u2727", - "\u2605", - "\u2729", - "\u272A", - "\u272B", - "\u272C", - "\u272D", - "\u272E", - "\u272F", - "\u2730", - "\u2731", - "\u2732", - "\u2733", - "\u2734", - "\u2735", - "\u2736", - "\u2737", - "\u2738", - "\u2739", - "\u273A", - "\u273B", - "\u273C", - "\u273D", - "\u273E", - "\u273F", - "\u2740", - "\u2741", - "\u2742", - "\u2743", - "\u2744", - "\u2745", - "\u2746", - "\u2747", - "\u2748", - "\u2749", - "\u274A", - "\u274B", - "\u25CF", - "\u274D", - "\u25A0", - "\u274F", - "\u2750", - "\u2751", - "\u2752", - "\u25B2", - "\u25BC", - "\u25C6", - "\u2756", - "\u25D7", - "\u2758", - "\u2759", - "\u275A", - "\u275B", - "\u275C", - "\u275D", - "\u275E", - "\u007F", - "\uF8D7", - "\uF8D8", - "\uF8D9", - "\uF8DA", - "\uF8DB", - "\uF8DC", - "\uF8DD", - "\uF8DE", - "\uF8DF", - "\uF8E0", - "\uF8E1", - "\uF8E2", - "\uF8E3", - "\uF8E4", - "\u008E", - "\u008F", - "\u0090", - "\u0091", - "\u0092", - "\u0093", - "\u0094", - "\u0095", - "\u0096", - "\u0097", - "\u0098", - "\u0099", - "\u009A", - "\u009B", - "\u009C", - "\u009D", - "\u009E", - "\u009F", - "\u00A0", - "\u2761", - "\u2762", - "\u2763", - "\u2764", - "\u2765", - "\u2766", - "\u2767", - "\u2663", - "\u2666", - "\u2665", - "\u2660", - "\u2460", - "\u2461", - "\u2462", - "\u2463", - "\u2464", - 
"\u2465", - "\u2466", - "\u2467", - "\u2468", - "\u2469", - "\u2776", - "\u2777", - "\u2778", - "\u2779", - "\u277A", - "\u277B", - "\u277C", - "\u277D", - "\u277E", - "\u277F", - "\u2780", - "\u2781", - "\u2782", - "\u2783", - "\u2784", - "\u2785", - "\u2786", - "\u2787", - "\u2788", - "\u2789", - "\u278A", - "\u278B", - "\u278C", - "\u278D", - "\u278E", - "\u278F", - "\u2790", - "\u2791", - "\u2792", - "\u2793", - "\u2794", - "\u2192", - "\u2194", - "\u2195", - "\u2798", - "\u2799", - "\u279A", - "\u279B", - "\u279C", - "\u279D", - "\u279E", - "\u279F", - "\u27A0", - "\u27A1", - "\u27A2", - "\u27A3", - "\u27A4", - "\u27A5", - "\u27A6", - "\u27A7", - "\u27A8", - "\u27A9", - "\u27AA", - "\u27AB", - "\u27AC", - "\u27AD", - "\u27AE", - "\u27AF", - "\u00F0", - "\u27B1", - "\u27B2", - "\u27B3", - "\u27B4", - "\u27B5", - "\u27B6", - "\u27B7", - "\u27B8", - "\u27B9", - "\u27BA", - "\u27BB", - "\u27BC", - "\u27BD", - "\u27BE", - "\u00FF", -] -assert len(_zapfding_encoding) == 256 diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_encryption.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_encryption.py deleted file mode 100644 index 5c80e600..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_encryption.py +++ /dev/null @@ -1,895 +0,0 @@ -# Copyright (c) 2022, exiledkingcc -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. 
-# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -import hashlib -import random -import struct -from enum import IntEnum -from typing import Any, Dict, Optional, Tuple, Union, cast - -from ._utils import logger_warning -from .errors import DependencyError -from .generic import ( - ArrayObject, - ByteStringObject, - DictionaryObject, - PdfObject, - StreamObject, - TextStringObject, - create_string_object, -) - - -class CryptBase: - def encrypt(self, data: bytes) -> bytes: # pragma: no cover - return data - - def decrypt(self, data: bytes) -> bytes: # pragma: no cover - return data - - -class CryptIdentity(CryptBase): - pass - - -try: - from Crypto.Cipher import AES, ARC4 # type: ignore[import] - from Crypto.Util.Padding import pad # type: ignore[import] - - class CryptRC4(CryptBase): - def __init__(self, key: bytes) -> None: - self.key = key - - def encrypt(self, data: bytes) -> bytes: - return ARC4.ARC4Cipher(self.key).encrypt(data) - - def decrypt(self, data: bytes) -> bytes: - return ARC4.ARC4Cipher(self.key).decrypt(data) - - class CryptAES(CryptBase): - def __init__(self, key: bytes) -> None: - self.key = key - - def encrypt(self, data: bytes) -> bytes: - iv = bytes(bytearray(random.randint(0, 255) for _ in range(16))) - p = 16 - 
len(data) % 16 - data += bytes(bytearray(p for _ in range(p))) - aes = AES.new(self.key, AES.MODE_CBC, iv) - return iv + aes.encrypt(data) - - def decrypt(self, data: bytes) -> bytes: - iv = data[:16] - data = data[16:] - aes = AES.new(self.key, AES.MODE_CBC, iv) - if len(data) % 16: - data = pad(data, 16) - d = aes.decrypt(data) - if len(d) == 0: - return d - else: - return d[: -d[-1]] - - def RC4_encrypt(key: bytes, data: bytes) -> bytes: - return ARC4.ARC4Cipher(key).encrypt(data) - - def RC4_decrypt(key: bytes, data: bytes) -> bytes: - return ARC4.ARC4Cipher(key).decrypt(data) - - def AES_ECB_encrypt(key: bytes, data: bytes) -> bytes: - return AES.new(key, AES.MODE_ECB).encrypt(data) - - def AES_ECB_decrypt(key: bytes, data: bytes) -> bytes: - return AES.new(key, AES.MODE_ECB).decrypt(data) - - def AES_CBC_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes: - return AES.new(key, AES.MODE_CBC, iv).encrypt(data) - - def AES_CBC_decrypt(key: bytes, iv: bytes, data: bytes) -> bytes: - return AES.new(key, AES.MODE_CBC, iv).decrypt(data) - -except ImportError: - - class CryptRC4(CryptBase): # type: ignore - def __init__(self, key: bytes) -> None: - self.S = list(range(256)) - j = 0 - for i in range(256): - j = (j + self.S[i] + key[i % len(key)]) % 256 - self.S[i], self.S[j] = self.S[j], self.S[i] - - def encrypt(self, data: bytes) -> bytes: - S = list(self.S) - out = list(0 for _ in range(len(data))) - i, j = 0, 0 - for k in range(len(data)): - i = (i + 1) % 256 - j = (j + S[i]) % 256 - S[i], S[j] = S[j], S[i] - x = S[(S[i] + S[j]) % 256] - out[k] = data[k] ^ x - return bytes(bytearray(out)) - - def decrypt(self, data: bytes) -> bytes: - return self.encrypt(data) - - class CryptAES(CryptBase): # type: ignore - def __init__(self, key: bytes) -> None: - pass - - def encrypt(self, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required for AES algorithm") - - def decrypt(self, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required 
for AES algorithm") - - def RC4_encrypt(key: bytes, data: bytes) -> bytes: - return CryptRC4(key).encrypt(data) - - def RC4_decrypt(key: bytes, data: bytes) -> bytes: - return CryptRC4(key).decrypt(data) - - def AES_ECB_encrypt(key: bytes, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required for AES algorithm") - - def AES_ECB_decrypt(key: bytes, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required for AES algorithm") - - def AES_CBC_encrypt(key: bytes, iv: bytes, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required for AES algorithm") - - def AES_CBC_decrypt(key: bytes, iv: bytes, data: bytes) -> bytes: - raise DependencyError("PyCryptodome is required for AES algorithm") - - -class CryptFilter: - def __init__( - self, stmCrypt: CryptBase, strCrypt: CryptBase, efCrypt: CryptBase - ) -> None: - self.stmCrypt = stmCrypt - self.strCrypt = strCrypt - self.efCrypt = efCrypt - - def encrypt_object(self, obj: PdfObject) -> PdfObject: - # TODO - return NotImplemented - - def decrypt_object(self, obj: PdfObject) -> PdfObject: - if isinstance(obj, (ByteStringObject, TextStringObject)): - data = self.strCrypt.decrypt(obj.original_bytes) - obj = create_string_object(data) - elif isinstance(obj, StreamObject): - obj._data = self.stmCrypt.decrypt(obj._data) - elif isinstance(obj, DictionaryObject): - for dictkey, value in list(obj.items()): - obj[dictkey] = self.decrypt_object(value) - elif isinstance(obj, ArrayObject): - for i in range(len(obj)): - obj[i] = self.decrypt_object(obj[i]) - return obj - - -_PADDING = bytes( - [ - 0x28, - 0xBF, - 0x4E, - 0x5E, - 0x4E, - 0x75, - 0x8A, - 0x41, - 0x64, - 0x00, - 0x4E, - 0x56, - 0xFF, - 0xFA, - 0x01, - 0x08, - 0x2E, - 0x2E, - 0x00, - 0xB6, - 0xD0, - 0x68, - 0x3E, - 0x80, - 0x2F, - 0x0C, - 0xA9, - 0xFE, - 0x64, - 0x53, - 0x69, - 0x7A, - ] -) - - -def _padding(data: bytes) -> bytes: - return (data + _PADDING)[:32] - - -class AlgV4: - @staticmethod - def compute_key( - password: 
bytes, - rev: int, - key_size: int, - o_entry: bytes, - P: int, - id1_entry: bytes, - metadata_encrypted: bool, - ) -> bytes: - """ - Algorithm 2: Computing an encryption key. - - a) Pad or truncate the password string to exactly 32 bytes. If the - password string is more than 32 bytes long, - use only its first 32 bytes; if it is less than 32 bytes long, pad it - by appending the required number of - additional bytes from the beginning of the following padding string: - < 28 BF 4E 5E 4E 75 8A 41 64 00 4E 56 FF FA 01 08 - 2E 2E 00 B6 D0 68 3E 80 2F 0C A9 FE 64 53 69 7A > - That is, if the password string is n bytes long, append - the first 32 - n bytes of the padding string to the end - of the password string. If the password string is empty (zero-length), - meaning there is no user password, - substitute the entire padding string in its place. - - b) Initialize the MD5 hash function and pass the result of step (a) - as input to this function. - c) Pass the value of the encryption dictionary’s O entry to the - MD5 hash function. ("Algorithm 3: Computing - the encryption dictionary’s O (owner password) value" shows how the - O value is computed.) - d) Convert the integer value of the P entry to a 32-bit unsigned binary - number and pass these bytes to the - MD5 hash function, low-order byte first. - e) Pass the first element of the file’s file identifier array (the value - of the ID entry in the document’s trailer - dictionary; see Table 15) to the MD5 hash function. - f) (Security handlers of revision 4 or greater) If document metadata is - not being encrypted, pass 4 bytes with - the value 0xFFFFFFFF to the MD5 hash function. - g) Finish the hash. - h) (Security handlers of revision 3 or greater) Do the following - 50 times: Take the output from the previous - MD5 hash and pass the first n bytes of the output as input into a new - MD5 hash, where n is the number of - bytes of the encryption key as defined by the value of the encryption - dictionary’s Length entry. 
- i) Set the encryption key to the first n bytes of the output from the - final MD5 hash, where n shall always be 5 - for security handlers of revision 2 but, for security handlers of - revision 3 or greater, shall depend on the - value of the encryption dictionary’s Length entry. - """ - a = _padding(password) - u_hash = hashlib.md5(a) - u_hash.update(o_entry) - u_hash.update(struct.pack("<i", P)) - u_hash.update(id1_entry) - if rev >= 4 and metadata_encrypted is False: - u_hash.update(b"\xff\xff\xff\xff") - u_hash_digest = u_hash.digest() - length = key_size // 8 - if rev >= 3: - for _ in range(50): - u_hash_digest = hashlib.md5(u_hash_digest[:length]).digest() - return u_hash_digest[:length] - - @staticmethod - def compute_O_value_key(owner_password: bytes, rev: int, key_size: int) -> bytes: - """ - Algorithm 3: Computing the encryption dictionary’s O (owner password) value. - - a) Pad or truncate the owner password string as described in step (a) - of "Algorithm 2: Computing an encryption key". - If there is no owner password, use the user password instead. - b) Initialize the MD5 hash function and pass the result of step (a) as - input to this function. - c) (Security handlers of revision 3 or greater) Do the following 50 times: - Take the output from the previous - MD5 hash and pass it as input into a new MD5 hash. - d) Create an RC4 encryption key using the first n bytes of the output - from the final MD5 hash, where n shall - always be 5 for security handlers of revision 2 but, for security - handlers of revision 3 or greater, shall - depend on the value of the encryption dictionary’s Length entry. - e) Pad or truncate the user password string as described in step (a) of - "Algorithm 2: Computing an encryption key". - f) Encrypt the result of step (e), using an RC4 encryption function with - the encryption key obtained in step (d).
- g) (Security handlers of revision 3 or greater) Do the following 19 times: - Take the output from the previous - invocation of the RC4 function and pass it as input to a new - invocation of the function; use an encryption - key generated by taking each byte of the encryption key obtained in - step (d) and performing an XOR - (exclusive or) operation between that byte and the single-byte value - of the iteration counter (from 1 to 19). - h) Store the output from the final invocation of the RC4 function as - the value of the O entry in the encryption dictionary. - """ - a = _padding(owner_password) - o_hash_digest = hashlib.md5(a).digest() - - if rev >= 3: - for _ in range(50): - o_hash_digest = hashlib.md5(o_hash_digest).digest() - - rc4_key = o_hash_digest[: key_size // 8] - return rc4_key - - @staticmethod - def compute_O_value(rc4_key: bytes, user_password: bytes, rev: int) -> bytes: - """See :func:`compute_O_value_key`.""" - a = _padding(user_password) - rc4_enc = RC4_encrypt(rc4_key, a) - if rev >= 3: - for i in range(1, 20): - key = bytes(bytearray(x ^ i for x in rc4_key)) - rc4_enc = RC4_encrypt(key, rc4_enc) - return rc4_enc - - @staticmethod - def compute_U_value(key: bytes, rev: int, id1_entry: bytes) -> bytes: - """ - Algorithm 4: Computing the encryption dictionary’s U (user password) value. - - (Security handlers of revision 2) - - a) Create an encryption key based on the user password string, as - described in "Algorithm 2: Computing an encryption key". - b) Encrypt the 32-byte padding string shown in step (a) of - "Algorithm 2: Computing an encryption key", using an RC4 encryption - function with the encryption key from the preceding step. - c) Store the result of step (b) as the value of the U entry in the - encryption dictionary. - """ - if rev <= 2: - value = RC4_encrypt(key, _PADDING) - return value - - """ - Algorithm 5: Computing the encryption dictionary’s U (user password) value. 
- - (Security handlers of revision 3 or greater) - - a) Create an encryption key based on the user password string, as - described in "Algorithm 2: Computing an encryption key". - b) Initialize the MD5 hash function and pass the 32-byte padding string - shown in step (a) of "Algorithm 2: - Computing an encryption key" as input to this function. - c) Pass the first element of the file’s file identifier array (the value - of the ID entry in the document’s trailer - dictionary; see Table 15) to the hash function and finish the hash. - d) Encrypt the 16-byte result of the hash, using an RC4 encryption - function with the encryption key from step (a). - e) Do the following 19 times: Take the output from the previous - invocation of the RC4 function and pass it as input to a new - invocation of the function; use an encryption key generated by - taking each byte of the original encryption key obtained in - step (a) and performing an XOR (exclusive or) operation between that - byte and the single-byte value of the iteration counter (from 1 to 19). - f) Append 16 bytes of arbitrary padding to the output from the final - invocation of the RC4 function and store the 32-byte result as the - value of the U entry in the encryption dictionary. - """ - u_hash = hashlib.md5(_PADDING) - u_hash.update(id1_entry) - rc4_enc = RC4_encrypt(key, u_hash.digest()) - for i in range(1, 20): - rc4_key = bytes(bytearray(x ^ i for x in key)) - rc4_enc = RC4_encrypt(rc4_key, rc4_enc) - return _padding(rc4_enc) - - @staticmethod - def verify_user_password( - user_password: bytes, - rev: int, - key_size: int, - o_entry: bytes, - u_entry: bytes, - P: int, - id1_entry: bytes, - metadata_encrypted: bool, - ) -> bytes: - """ - Algorithm 6: Authenticating the user password. 
- - a) Perform all but the last step of "Algorithm 4: Computing the encryption dictionary’s U (user password) - value (Security handlers of revision 2)" or "Algorithm 5: Computing the encryption dictionary’s U (user - password) value (Security handlers of revision 3 or greater)" using the supplied password string. - b) If the result of step (a) is equal to the value of the encryption dictionary’s U entry (comparing on the first 16 - bytes in the case of security handlers of revision 3 or greater), the password supplied is the correct user - password. The key obtained in step (a) (that is, in the first step of "Algorithm 4: Computing the encryption - dictionary’s U (user password) value (Security handlers of revision 2)" or "Algorithm 5: Computing the - encryption dictionary’s U (user password) value (Security handlers of revision 3 or greater)") shall be used - to decrypt the document. - """ - key = AlgV4.compute_key( - user_password, rev, key_size, o_entry, P, id1_entry, metadata_encrypted - ) - u_value = AlgV4.compute_U_value(key, rev, id1_entry) - if rev >= 3: - u_value = u_value[:16] - u_entry = u_entry[:16] - if u_value != u_entry: - key = b"" - return key - - @staticmethod - def verify_owner_password( - owner_password: bytes, - rev: int, - key_size: int, - o_entry: bytes, - u_entry: bytes, - P: int, - id1_entry: bytes, - metadata_encrypted: bool, - ) -> bytes: - """ - Algorithm 7: Authenticating the owner password. - - a) Compute an encryption key from the supplied password string, as described in steps (a) to (d) of - "Algorithm 3: Computing the encryption dictionary’s O (owner password) value". - b) (Security handlers of revision 2 only) Decrypt the value of the encryption dictionary’s O entry, using an RC4 - encryption function with the encryption key computed in step (a). 
- (Security handlers of revision 3 or greater) Do the following 20 times: Decrypt the value of the encryption - dictionary’s O entry (first iteration) or the output from the previous iteration (all subsequent iterations), - using an RC4 encryption function with a different encryption key at each iteration. The key shall be - generated by taking the original key (obtained in step (a)) and performing an XOR (exclusive or) operation - between each byte of the key and the single-byte value of the iteration counter (from 19 to 0). - c) The result of step (b) purports to be the user password. Authenticate this user password using "Algorithm 6: - Authenticating the user password". If it is correct, the password supplied is the correct owner password. - """ - rc4_key = AlgV4.compute_O_value_key(owner_password, rev, key_size) - - if rev <= 2: - user_password = RC4_decrypt(rc4_key, o_entry) - else: - user_password = o_entry - for i in range(19, -1, -1): - key = bytes(bytearray(x ^ i for x in rc4_key)) - user_password = RC4_decrypt(key, user_password) - return AlgV4.verify_user_password( - user_password, - rev, - key_size, - o_entry, - u_entry, - P, - id1_entry, - metadata_encrypted, - ) - - -class AlgV5: - @staticmethod - def verify_owner_password( - R: int, password: bytes, o_value: bytes, oe_value: bytes, u_value: bytes - ) -> bytes: - """ - Algorithm 3.2a Computing an encryption key. - - To understand the algorithm below, it is necessary to treat the O and U strings in the Encrypt dictionary - as made up of three sections. The first 32 bytes are a hash value (explained below). The next 8 bytes are - called the Validation Salt. The final 8 bytes are called the Key Salt. - - 1. The password string is generated from Unicode input by processing the input string with the SASLprep - (IETF RFC 4013) profile of stringprep (IETF RFC 3454), and then converting to a UTF-8 representation. - 2. Truncate the UTF-8 representation to 127 bytes if it is longer than 127 bytes. - 3. 
Test the password against the owner key by computing the SHA-256 hash of the UTF-8 password - concatenated with the 8 bytes of owner Validation Salt, concatenated with the 48-byte U string. If the - 32-byte result matches the first 32 bytes of the O string, this is the owner password. - Compute an intermediate owner key by computing the SHA-256 hash of the UTF-8 password - concatenated with the 8 bytes of owner Key Salt, concatenated with the 48-byte U string. The 32-byte - result is the key used to decrypt the 32-byte OE string using AES-256 in CBC mode with no padding and - an initialization vector of zero. The 32-byte result is the file encryption key. - 4. Test the password against the user key by computing the SHA-256 hash of the UTF-8 password - concatenated with the 8 bytes of user Validation Salt. If the 32 byte result matches the first 32 bytes of - the U string, this is the user password. - Compute an intermediate user key by computing the SHA-256 hash of the UTF-8 password - concatenated with the 8 bytes of user Key Salt. The 32-byte result is the key used to decrypt the 32-byte - UE string using AES-256 in CBC mode with no padding and an initialization vector of zero. The 32-byte - result is the file encryption key. - 5. Decrypt the 16-byte Perms string using AES-256 in ECB mode with an initialization vector of zero and - the file encryption key as the key. Verify that bytes 9-11 of the result are the characters β€˜a’, β€˜d’, β€˜b’. Bytes - 0-3 of the decrypted Perms entry, treated as a little-endian integer, are the user permissions. They - should match the value in the P key. 
- """ - password = password[:127] - if ( - AlgV5.calculate_hash(R, password, o_value[32:40], u_value[:48]) - != o_value[:32] - ): - return b"" - iv = bytes(0 for _ in range(16)) - tmp_key = AlgV5.calculate_hash(R, password, o_value[40:48], u_value[:48]) - key = AES_CBC_decrypt(tmp_key, iv, oe_value) - return key - - @staticmethod - def verify_user_password( - R: int, password: bytes, u_value: bytes, ue_value: bytes - ) -> bytes: - """See :func:`verify_owner_password`.""" - password = password[:127] - if AlgV5.calculate_hash(R, password, u_value[32:40], b"") != u_value[:32]: - return b"" - iv = bytes(0 for _ in range(16)) - tmp_key = AlgV5.calculate_hash(R, password, u_value[40:48], b"") - return AES_CBC_decrypt(tmp_key, iv, ue_value) - - @staticmethod - def calculate_hash(R: int, password: bytes, salt: bytes, udata: bytes) -> bytes: - # from https://github.com/qpdf/qpdf/blob/main/libqpdf/QPDF_encryption.cc - K = hashlib.sha256(password + salt + udata).digest() - if R < 6: - return K - count = 0 - while True: - count += 1 - K1 = password + K + udata - E = AES_CBC_encrypt(K[:16], K[16:32], K1 * 64) - hash_fn = ( - hashlib.sha256, - hashlib.sha384, - hashlib.sha512, - )[sum(E[:16]) % 3] - K = hash_fn(E).digest() - if count >= 64 and E[-1] <= count - 32: - break - return K[:32] - - @staticmethod - def verify_perms( - key: bytes, perms: bytes, p: int, metadata_encrypted: bool - ) -> bool: - """See :func:`verify_owner_password` and :func:`compute_Perms_value`.""" - b8 = b"T" if metadata_encrypted else b"F" - p1 = struct.pack("<I", p) + b"\xff\xff\xff\xff" + b8 + b"adb" - p2 = AES_ECB_decrypt(key, perms) - return p1 == p2[:12] - - @staticmethod - def generate_values( - user_password: bytes, - owner_password: bytes, - key: bytes, - p: int, - metadata_encrypted: bool, - ) -> Dict[Any, Any]: - u_value, ue_value = AlgV5.compute_U_value(user_password, key) - o_value, oe_value = AlgV5.compute_O_value(owner_password, key, u_value) - perms = AlgV5.compute_Perms_value(key, p, metadata_encrypted) - return { - "/U": u_value, - "/UE": ue_value, - "/O": o_value, - "/OE": oe_value, - "/Perms": perms, - } - - @staticmethod - def compute_U_value(password: bytes, key: bytes) -> Tuple[bytes, bytes]: - """ - Algorithm 3.8 Computing the 
encryption dictionary’s U (user password) and UE (user encryption key) values - - 1. Generate 16 random bytes of data using a strong random number generator. The first 8 bytes are the - User Validation Salt. The second 8 bytes are the User Key Salt. Compute the 32-byte SHA-256 hash of - the password concatenated with the User Validation Salt. The 48-byte string consisting of the 32-byte - hash followed by the User Validation Salt followed by the User Key Salt is stored as the U key. - 2. Compute the 32-byte SHA-256 hash of the password concatenated with the User Key Salt. Using this - hash as the key, encrypt the file encryption key using AES-256 in CBC mode with no padding and an - initialization vector of zero. The resulting 32-byte string is stored as the UE key. - """ - random_bytes = bytes(random.randrange(0, 256) for _ in range(16)) - val_salt = random_bytes[:8] - key_salt = random_bytes[8:] - u_value = hashlib.sha256(password + val_salt).digest() + val_salt + key_salt - - tmp_key = hashlib.sha256(password + key_salt).digest() - iv = bytes(0 for _ in range(16)) - ue_value = AES_CBC_encrypt(tmp_key, iv, key) - return u_value, ue_value - - @staticmethod - def compute_O_value( - password: bytes, key: bytes, u_value: bytes - ) -> Tuple[bytes, bytes]: - """ - Algorithm 3.9 Computing the encryption dictionary’s O (owner password) and OE (owner encryption key) values. - - 1. Generate 16 random bytes of data using a strong random number generator. The first 8 bytes are the - Owner Validation Salt. The second 8 bytes are the Owner Key Salt. Compute the 32-byte SHA-256 hash - of the password concatenated with the Owner Validation Salt and then concatenated with the 48-byte - U string as generated in Algorithm 3.8. The 48-byte string consisting of the 32-byte hash followed by - the Owner Validation Salt followed by the Owner Key Salt is stored as the O key. - 2. 
Compute the 32-byte SHA-256 hash of the password concatenated with the Owner Key Salt and then - concatenated with the 48-byte U string as generated in Algorithm 3.8. Using this hash as the key, - encrypt the file encryption key using AES-256 in CBC mode with no padding and an initialization vector - of zero. The resulting 32-byte string is stored as the OE key. - """ - random_bytes = bytes(random.randrange(0, 256) for _ in range(16)) - val_salt = random_bytes[:8] - key_salt = random_bytes[8:] - o_value = ( - hashlib.sha256(password + val_salt + u_value).digest() + val_salt + key_salt - ) - - tmp_key = hashlib.sha256(password + key_salt + u_value).digest() - iv = bytes(0 for _ in range(16)) - oe_value = AES_CBC_encrypt(tmp_key, iv, key) - return o_value, oe_value - - @staticmethod - def compute_Perms_value(key: bytes, p: int, metadata_encrypted: bool) -> bytes: - """ - Algorithm 3.10 Computing the encryption dictionary’s Perms (permissions) value - - 1. Extend the permissions (contents of the P integer) to 64 bits by setting the upper 32 bits to all 1’s. (This - allows for future extension without changing the format.) - 2. Record the 8 bytes of permission in the bytes 0-7 of the block, low order byte first. - 3. Set byte 8 to the ASCII value ' T ' or ' F ' according to the EncryptMetadata Boolean. - 4. Set bytes 9-11 to the ASCII characters ' a ', ' d ', ' b '. - 5. Set bytes 12-15 to 4 bytes of random data, which will be ignored. - 6. Encrypt the 16-byte block using AES-256 in ECB mode with an initialization vector of zero, using the file - encryption key as the key. The result (16 bytes) is stored as the Perms string, and checked for validity - when the file is opened. 
- """ - b8 = b"T" if metadata_encrypted else b"F" - rr = bytes(random.randrange(0, 256) for _ in range(4)) - data = struct.pack("<I", p) + b"\xff\xff\xff\xff" + b8 + b"adb" + rr - perms = AES_ECB_encrypt(key, data) - return perms - - -class Encryption: - def __init__( - self, - algV: int, - algR: int, - entry: DictionaryObject, - first_id_entry: bytes, - StmF: str, - StrF: str, - EFF: str, - ) -> None: - # See TABLE 3.18 Entries common to all encryption dictionaries - self.algV = algV - self.algR = algR - self.entry = entry - self.key_size = entry.get("/Length", 40) - self.id1_entry = first_id_entry - self.StmF = StmF - self.StrF = StrF - self.EFF = EFF - - # 1 => owner password - # 2 => user password - self._password_type = PasswordType.NOT_DECRYPTED - self._key: Optional[bytes] = None - - def is_decrypted(self) -> bool: - return self._password_type != PasswordType.NOT_DECRYPTED - - def decrypt_object(self, obj: PdfObject, idnum: int, generation: int) -> PdfObject: - """ - Algorithm 1: Encryption of data using the RC4 or AES algorithms. - - a) Obtain the object number and generation number from the object identifier of the string or stream to be - encrypted (see 7.3.10, "Indirect Objects"). If the string is a direct object, use the identifier of the indirect - object containing it. - b) For all strings and streams without crypt filter specifier; treating the object number and generation number - as binary integers, extend the original n-byte encryption key to n + 5 bytes by appending the low-order 3 - bytes of the object number and the low-order 2 bytes of the generation number in that order, low-order byte - first. (n is 5 unless the value of V in the encryption dictionary is greater than 1, in which case n is the value - of Length divided by 8.) - If using the AES algorithm, extend the encryption key an additional 4 bytes by adding the value β€œsAlT”, - which corresponds to the hexadecimal values 0x73, 0x41, 0x6C, 0x54. (This addition is done for backward - compatibility and is not intended to provide additional security.) - c) Initialize the MD5 hash function and pass the result of step (b) as input to this function. 
- d) Use the first (n + 5) bytes, up to a maximum of 16, of the output from the MD5 hash as the key for the RC4 - or AES symmetric key algorithms, along with the string or stream data to be encrypted. - If using the AES algorithm, the Cipher Block Chaining (CBC) mode, which requires an initialization vector, - is used. The block size parameter is set to 16 bytes, and the initialization vector is a 16-byte random - number that is stored as the first 16 bytes of the encrypted stream or string. - - Algorithm 3.1a Encryption of data using the AES algorithm - 1. Use the 32-byte file encryption key for the AES-256 symmetric key algorithm, along with the string or - stream data to be encrypted. - Use the AES algorithm in Cipher Block Chaining (CBC) mode, which requires an initialization vector. The - block size parameter is set to 16 bytes, and the initialization vector is a 16-byte random number that is - stored as the first 16 bytes of the encrypted stream or string. - The output is the encrypted data to be stored in the PDF file. 
- """ - pack1 = struct.pack("<i", idnum)[:3] - pack2 = struct.pack("<i", generation)[:2] - assert self._key - key = self._key - # Algorithm 1, a) and b): extend the key with the object and generation number - key_data = key + pack1 + pack2 - key_hash = hashlib.md5(key_data) - rc4_key = key_hash.digest()[: min(len(key_data), 16)] - # Algorithm 1, b): the AES-128 variant additionally appends b"sAlT" - key_hash.update(b"sAlT") - aes128_key = key_hash.digest()[: min(len(key_data), 16)] - # Algorithm 3.1a, 1): AES-256 uses the file encryption key directly - aes256_key = key - stm_crypt = self._get_crypt(self.StmF, rc4_key, aes128_key, aes256_key) - str_crypt = self._get_crypt(self.StrF, rc4_key, aes128_key, aes256_key) - ef_crypt = self._get_crypt(self.EFF, rc4_key, aes128_key, aes256_key) - cf = CryptFilter(stm_crypt, str_crypt, ef_crypt) - return cf.decrypt_object(obj) - - @staticmethod - def _get_crypt( - method: str, rc4_key: bytes, aes128_key: bytes, aes256_key: bytes - ) -> CryptBase: - if method == "/AESV3": - return CryptAES(aes256_key) - if method == "/AESV2": - return CryptAES(aes128_key) - elif method == "/Identity": - return CryptIdentity() - else: - return CryptRC4(rc4_key) - - def verify(self, password: Union[bytes, str]) -> PasswordType: - if isinstance(password, str): - try: - pwd = password.encode("latin-1") - except Exception: # noqa - pwd = password.encode("utf-8") - else: - pwd = password - - key, rc = self.verify_v4(pwd) if self.algV <= 4 else self.verify_v5(pwd) - if rc != PasswordType.NOT_DECRYPTED: - self._password_type = rc - self._key = key - return rc - - def verify_v4(self, password: bytes) -> Tuple[bytes, PasswordType]: - R = cast(int, self.entry["/R"]) - P = cast(int, self.entry["/P"]) - P = (P + 0x100000000) % 0x100000000 # maybe < 0 - # make type(metadata_encrypted) == bool - em = self.entry.get("/EncryptMetadata") - metadata_encrypted = em.value if em else True - o_entry = cast(ByteStringObject, self.entry["/O"].get_object()).original_bytes - u_entry = cast(ByteStringObject, self.entry["/U"].get_object()).original_bytes - - # verify owner password first - key = AlgV4.verify_owner_password( - password, - R, - self.key_size, - o_entry, - u_entry, - P, - self.id1_entry, - metadata_encrypted, - ) - if key: - return key, PasswordType.OWNER_PASSWORD - key = AlgV4.verify_user_password( - password, - R, - self.key_size, - o_entry, - u_entry, - P, - self.id1_entry, - metadata_encrypted, - ) - if key: - return key, PasswordType.USER_PASSWORD - return b"", PasswordType.NOT_DECRYPTED - - def verify_v5(self, password: bytes) -> Tuple[bytes, PasswordType]: - # TODO: use SASLprep process - o_entry = cast(ByteStringObject, self.entry["/O"].get_object()).original_bytes - u_entry = cast(ByteStringObject, self.entry["/U"].get_object()).original_bytes - oe_entry = cast(ByteStringObject, self.entry["/OE"].get_object()).original_bytes - ue_entry = cast(ByteStringObject, 
self.entry["/UE"].get_object()).original_bytes - - # verify owner password first - key = AlgV5.verify_owner_password( - self.algR, password, o_entry, oe_entry, u_entry - ) - rc = PasswordType.OWNER_PASSWORD - if not key: - key = AlgV5.verify_user_password(self.algR, password, u_entry, ue_entry) - rc = PasswordType.USER_PASSWORD - if not key: - return b"", PasswordType.NOT_DECRYPTED - - # verify Perms - perms = cast(ByteStringObject, self.entry["/Perms"].get_object()).original_bytes - P = cast(int, self.entry["/P"]) - P = (P + 0x100000000) % 0x100000000 # maybe < 0 - metadata_encrypted = self.entry.get("/EncryptMetadata", True) - if not AlgV5.verify_perms(key, perms, P, metadata_encrypted): - logger_warning("ignore '/Perms' verify failed", __name__) - return key, rc - - @staticmethod - def read(encryption_entry: DictionaryObject, first_id_entry: bytes) -> "Encryption": - filter = encryption_entry.get("/Filter") - if filter != "/Standard": - raise NotImplementedError( - "only Standard PDF encryption handler is available" - ) - if "/SubFilter" in encryption_entry: - raise NotImplementedError("/SubFilter NOT supported") - - StmF = "/V2" - StrF = "/V2" - EFF = "/V2" - - V = encryption_entry.get("/V", 0) - if V not in (1, 2, 3, 4, 5): - raise NotImplementedError(f"Encryption V={V} NOT supported") - if V >= 4: - filters = encryption_entry["/CF"] - - StmF = encryption_entry.get("/StmF", "/Identity") - StrF = encryption_entry.get("/StrF", "/Identity") - EFF = encryption_entry.get("/EFF", StmF) - - if StmF != "/Identity": - StmF = filters[StmF]["/CFM"] # type: ignore - if StrF != "/Identity": - StrF = filters[StrF]["/CFM"] # type: ignore - if EFF != "/Identity": - EFF = filters[EFF]["/CFM"] # type: ignore - - allowed_methods = ("/Identity", "/V2", "/AESV2", "/AESV3") - if StmF not in allowed_methods: - raise NotImplementedError(f"StmF Method {StmF} NOT supported!") - if StrF not in allowed_methods: - raise NotImplementedError(f"StrF Method {StrF} NOT supported!") - if EFF not 
in allowed_methods: - raise NotImplementedError(f"EFF Method {EFF} NOT supported!") - - R = cast(int, encryption_entry["/R"]) - return Encryption(V, R, encryption_entry, first_id_entry, StmF, StrF, EFF) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_merger.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_merger.py deleted file mode 100644 index 4d7a659d..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_merger.py +++ /dev/null @@ -1,821 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
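For reference, the rev >= 3 owner-password recovery described in the AlgV4 docstring earlier in this file (decrypt the `O` entry twenty times with RC4, XORing each key byte with the iteration counter from 19 down to 0) can be sketched in pure Python. `rc4` here is a from-scratch textbook implementation and `recover_user_password` is a hypothetical name, not PyPDF2's `RC4_decrypt` helper:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):  # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:  # pseudo-random generation algorithm
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


def recover_user_password(rc4_key: bytes, o_entry: bytes, rev: int) -> bytes:
    """Step (b) of the owner-password check: rev <= 2 is a single RC4 pass;
    rev >= 3 runs 20 passes with each key byte XORed by the counter 19..0."""
    if rev <= 2:
        return rc4(rc4_key, o_entry)
    data = o_entry
    for i in range(19, -1, -1):
        key_i = bytes(b ^ i for b in rc4_key)
        data = rc4(key_i, data)
    return data
```

Because each RC4 pass is its own inverse, running the keys in reverse order (19 down to 0) exactly undoes the 0-to-19 encryption loop used when the `O` entry was produced.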
- -import warnings -from io import BytesIO, FileIO, IOBase -from pathlib import Path -from types import TracebackType -from typing import ( - Any, - Dict, - Iterable, - List, - Optional, - Tuple, - Type, - Union, - cast, -) - -from ._encryption import Encryption -from ._page import PageObject -from ._reader import PdfReader -from ._utils import ( - StrByteType, - deprecation_bookmark, - deprecation_with_replacement, - str_, -) -from ._writer import PdfWriter -from .constants import GoToActionArguments -from .constants import PagesAttributes as PA -from .constants import TypArguments, TypFitArguments -from .generic import ( - PAGE_FIT, - ArrayObject, - Destination, - DictionaryObject, - Fit, - FloatObject, - IndirectObject, - NameObject, - NullObject, - NumberObject, - OutlineItem, - TextStringObject, - TreeObject, -) -from .pagerange import PageRange, PageRangeSpec -from .types import FitType, LayoutType, OutlineType, PagemodeType, ZoomArgType - -ERR_CLOSED_WRITER = "close() was called and thus the writer cannot be used anymore" - - -class _MergedPage: - """Collect necessary information on each page that is being merged.""" - - def __init__(self, pagedata: PageObject, src: PdfReader, id: int) -> None: - self.src = src - self.pagedata = pagedata - self.out_pagedata = None - self.id = id - - -class PdfMerger: - """ - Initialize a ``PdfMerger`` object. - - ``PdfMerger`` merges multiple PDFs into a single PDF. - It can concatenate, slice, insert, or any combination of the above. - - See the functions :meth:`merge()` (or :meth:`append()`) - and :meth:`write()` for usage information. - - :param bool strict: Determines whether user should be warned of all - problems and also causes some correctable problems to be fatal. - Defaults to ``False``. - :param fileobj: Output file. Can be a filename or any kind of - file-like object. 
- """ - - @deprecation_bookmark(bookmarks="outline") - def __init__( - self, strict: bool = False, fileobj: Union[Path, StrByteType] = "" - ) -> None: - self.inputs: List[Tuple[Any, PdfReader]] = [] - self.pages: List[Any] = [] - self.output: Optional[PdfWriter] = PdfWriter() - self.outline: OutlineType = [] - self.named_dests: List[Any] = [] - self.id_count = 0 - self.fileobj = fileobj - self.strict = strict - - def __enter__(self) -> "PdfMerger": - # There is nothing to do. - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - """Write to the fileobj and close the merger.""" - if self.fileobj: - self.write(self.fileobj) - self.close() - - @deprecation_bookmark(bookmark="outline_item", import_bookmarks="import_outline") - def merge( - self, - page_number: Optional[int] = None, - fileobj: Union[Path, StrByteType, PdfReader] = None, - outline_item: Optional[str] = None, - pages: Optional[PageRangeSpec] = None, - import_outline: bool = True, - position: Optional[int] = None, # deprecated - ) -> None: - """ - Merge the pages from the given file into the output file at the - specified page number. - - :param int page_number: The *page number* to insert this file. File will - be inserted after the given number. - - :param fileobj: A File Object or an object that supports the standard - read and seek methods similar to a File Object. Could also be a - string representing a path to a PDF file. - - :param str outline_item: Optionally, you may specify an outline item - (previously referred to as a 'bookmark') to be applied at the - beginning of the included file by supplying the text of the outline item. - - :param pages: can be a :class:`PageRange` - or a ``(start, stop[, step])`` tuple - to merge only the specified range of pages from the source - document into the output document. - Can also be a list of pages to merge. 
- - :param bool import_outline: You may prevent the source document's - outline (collection of outline items, previously referred to as - 'bookmarks') from being imported by specifying this as ``False``. - """ - if position is not None: # deprecated - if page_number is None: - page_number = position - old_term = "position" - new_term = "page_number" - warnings.warn( - ( - f"{old_term} is deprecated as an argument and will be " - f"removed in PyPDF2==4.0.0. Use {new_term} instead" - ), - DeprecationWarning, - ) - else: - raise ValueError( - "The argument position of merge is deprecated. Use page_number only." - ) - - if page_number is None: # deprecated - # The parameter is only marked as Optional as long as - # position is not fully deprecated - raise ValueError("page_number may not be None") - if fileobj is None: # deprecated - # The argument is only Optional due to the deprecated position - # argument - raise ValueError("fileobj may not be None") - - stream, encryption_obj = self._create_stream(fileobj) - - # Create a new PdfReader instance using the stream - # (either file or BytesIO or StringIO) created above - reader = PdfReader(stream, strict=self.strict) # type: ignore[arg-type] - self.inputs.append((stream, reader)) - if encryption_obj is not None: - reader._encryption = encryption_obj - - # Find the range of pages to merge. 
- if pages is None: - pages = (0, len(reader.pages)) - elif isinstance(pages, PageRange): - pages = pages.indices(len(reader.pages)) - elif isinstance(pages, list): - pass - elif not isinstance(pages, tuple): - raise TypeError('"pages" must be a tuple of (start, stop[, step])') - - srcpages = [] - - outline = [] - if import_outline: - outline = reader.outline - outline = self._trim_outline(reader, outline, pages) - - if outline_item: - outline_item_typ = OutlineItem( - TextStringObject(outline_item), - NumberObject(self.id_count), - Fit.fit(), - ) - self.outline += [outline_item_typ, outline] # type: ignore - else: - self.outline += outline - - dests = reader.named_destinations - trimmed_dests = self._trim_dests(reader, dests, pages) - self.named_dests += trimmed_dests - - # Gather all the pages that are going to be merged - for i in range(*pages): - page = reader.pages[i] - - id = self.id_count - self.id_count += 1 - - mp = _MergedPage(page, reader, id) - - srcpages.append(mp) - - self._associate_dests_to_pages(srcpages) - self._associate_outline_items_to_pages(srcpages) - - # Slice to insert the pages at the specified page_number - self.pages[page_number:page_number] = srcpages - - def _create_stream( - self, fileobj: Union[Path, StrByteType, PdfReader] - ) -> Tuple[IOBase, Optional[Encryption]]: - # If the fileobj parameter is a string, assume it is a path - # and create a file object at that location. If it is a file, - # copy the file's contents into a BytesIO stream object; if - # it is a PdfReader, copy that reader's stream into a - # BytesIO stream. 
- # If fileobj is none of the above types, it is not modified - encryption_obj = None - stream: IOBase - if isinstance(fileobj, (str, Path)): - stream = FileIO(fileobj, "rb") - elif isinstance(fileobj, PdfReader): - if fileobj._encryption: - encryption_obj = fileobj._encryption - orig_tell = fileobj.stream.tell() - fileobj.stream.seek(0) - stream = BytesIO(fileobj.stream.read()) - - # reset the stream to its original location - fileobj.stream.seek(orig_tell) - elif hasattr(fileobj, "seek") and hasattr(fileobj, "read"): - fileobj.seek(0) - filecontent = fileobj.read() - stream = BytesIO(filecontent) - else: - raise NotImplementedError( - "PdfMerger.merge requires an object that PdfReader can parse. " - "Typically, that is a Path or a string representing a Path, " - "a file object, or an object implementing .seek and .read. " - "Passing a PdfReader directly works as well." - ) - return stream, encryption_obj - - @deprecation_bookmark(bookmark="outline_item", import_bookmarks="import_outline") - def append( - self, - fileobj: Union[StrByteType, PdfReader, Path], - outline_item: Optional[str] = None, - pages: Union[ - None, PageRange, Tuple[int, int], Tuple[int, int, int], List[int] - ] = None, - import_outline: bool = True, - ) -> None: - """ - Identical to the :meth:`merge()` method, but assumes you want to - concatenate all pages onto the end of the file instead of specifying a - position. - - :param fileobj: A File Object or an object that supports the standard - read and seek methods similar to a File Object. Could also be a - string representing a path to a PDF file. - - :param str outline_item: Optionally, you may specify an outline item - (previously referred to as a 'bookmark') to be applied at the - beginning of the included file by supplying the text of the outline item. - - :param pages: can be a :class:`PageRange` - or a ``(start, stop[, step])`` tuple - to merge only the specified range of pages from the source - document into the output document. 
- Can also be a list of pages to append. - - :param bool import_outline: You may prevent the source document's - outline (collection of outline items, previously referred to as - 'bookmarks') from being imported by specifying this as ``False``. - """ - self.merge(len(self.pages), fileobj, outline_item, pages, import_outline) - - def write(self, fileobj: Union[Path, StrByteType]) -> None: - """ - Write all data that has been merged to the given output file. - - :param fileobj: Output file. Can be a filename or any kind of - file-like object. - """ - if self.output is None: - raise RuntimeError(ERR_CLOSED_WRITER) - - # Add pages to the PdfWriter - # The commented out line below was replaced with the two lines below it - # to allow PdfMerger to work with PyPdf 1.13 - for page in self.pages: - self.output.add_page(page.pagedata) - pages_obj = cast(Dict[str, Any], self.output._pages.get_object()) - page.out_pagedata = self.output.get_reference( - pages_obj[PA.KIDS][-1].get_object() - ) - # idnum = self.output._objects.index(self.output._pages.get_object()[PA.KIDS][-1].get_object()) + 1 - # page.out_pagedata = IndirectObject(idnum, 0, self.output) - - # Once all pages are added, create outline items to point at those pages - self._write_dests() - self._write_outline() - - # Write the output to the file - my_file, ret_fileobj = self.output.write(fileobj) - - if my_file: - ret_fileobj.close() - - def close(self) -> None: - """Shut all file descriptors (input and output) and clear all memory usage.""" - self.pages = [] - for fo, _reader in self.inputs: - fo.close() - - self.inputs = [] - self.output = None - - def add_metadata(self, infos: Dict[str, Any]) -> None: - """ - Add custom metadata to the output. - - :param dict infos: a Python dictionary where each key is a field - and each value is your new metadata. 
- Example: ``{u'/Title': u'My title'}`` - """ - if self.output is None: - raise RuntimeError(ERR_CLOSED_WRITER) - self.output.add_metadata(infos) - - def addMetadata(self, infos: Dict[str, Any]) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`add_metadata` instead. - """ - deprecation_with_replacement("addMetadata", "add_metadata") - self.add_metadata(infos) - - def setPageLayout(self, layout: LayoutType) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`set_page_layout` instead. - """ - deprecation_with_replacement("setPageLayout", "set_page_layout") - self.set_page_layout(layout) - - def set_page_layout(self, layout: LayoutType) -> None: - """ - Set the page layout. - - :param str layout: The page layout to be used - - .. list-table:: Valid ``layout`` arguments - :widths: 50 200 - - * - /NoLayout - - Layout explicitly not specified - * - /SinglePage - - Show one page at a time - * - /OneColumn - - Show one column at a time - * - /TwoColumnLeft - - Show pages in two columns, odd-numbered pages on the left - * - /TwoColumnRight - - Show pages in two columns, odd-numbered pages on the right - * - /TwoPageLeft - - Show two pages at a time, odd-numbered pages on the left - * - /TwoPageRight - - Show two pages at a time, odd-numbered pages on the right - """ - if self.output is None: - raise RuntimeError(ERR_CLOSED_WRITER) - self.output._set_page_layout(layout) - - def setPageMode(self, mode: PagemodeType) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`set_page_mode` instead. - """ - deprecation_with_replacement("setPageMode", "set_page_mode", "3.0.0") - self.set_page_mode(mode) - - def set_page_mode(self, mode: PagemodeType) -> None: - """ - Set the page mode. - - :param str mode: The page mode to use. - - .. 
list-table:: Valid ``mode`` arguments - :widths: 50 200 - - * - /UseNone - - Do not show outline or thumbnails panels - * - /UseOutlines - - Show outline (aka bookmarks) panel - * - /UseThumbs - - Show page thumbnails panel - * - /FullScreen - - Fullscreen view - * - /UseOC - - Show Optional Content Group (OCG) panel - * - /UseAttachments - - Show attachments panel - """ - if self.output is None: - raise RuntimeError(ERR_CLOSED_WRITER) - self.output.set_page_mode(mode) - - def _trim_dests( - self, - pdf: PdfReader, - dests: Dict[str, Dict[str, Any]], - pages: Union[Tuple[int, int], Tuple[int, int, int], List[int]], - ) -> List[Dict[str, Any]]: - """Remove named destinations that are not a part of the specified page set.""" - new_dests = [] - lst = pages if isinstance(pages, list) else list(range(*pages)) - for key, obj in dests.items(): - for j in lst: - if pdf.pages[j].get_object() == obj["/Page"].get_object(): - obj[NameObject("/Page")] = obj["/Page"].get_object() - assert str_(key) == str_(obj["/Title"]) - new_dests.append(obj) - break - return new_dests - - def _trim_outline( - self, - pdf: PdfReader, - outline: OutlineType, - pages: Union[Tuple[int, int], Tuple[int, int, int], List[int]], - ) -> OutlineType: - """Remove outline item entries that are not a part of the specified page set.""" - new_outline = [] - prev_header_added = True - lst = pages if isinstance(pages, list) else list(range(*pages)) - for i, outline_item in enumerate(outline): - if isinstance(outline_item, list): - sub = self._trim_outline(pdf, outline_item, lst) # type: ignore - if sub: - if not prev_header_added: - new_outline.append(outline[i - 1]) - new_outline.append(sub) # type: ignore - else: - prev_header_added = False - for j in lst: - if outline_item["/Page"] is None: - continue - if pdf.pages[j].get_object() == outline_item["/Page"].get_object(): - outline_item[NameObject("/Page")] = outline_item[ - "/Page" - ].get_object() - new_outline.append(outline_item) - prev_header_added = 
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_page.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_page.py
deleted file mode 100644
index ed385bb3..00000000
- """ - deprecation_with_replacement( - "page.mergeRotatedScaledPage(page2, rotation, scale, expand)", - "page2.add_transformation(Transformation().rotate(rotation).scale(scale)); page.merge_page(page2, expand)", - "3.0.0", - ) - op = Transformation().rotate(rotation).scale(scale, scale) - self.mergeTransformedPage(page2, op, expand) - - def mergeScaledTranslatedPage( - self, - page2: "PageObject", - scale: float, - tx: float, - ty: float, - expand: bool = False, - ) -> None: # pragma: no cover - """ - mergeScaledTranslatedPage is similar to merge_page, but the stream to be - merged is translated and scaled by applying a transformation matrix. - - :param PageObject page2: the page to be merged into this one. Should be - an instance of :class:`PageObject`. - :param float scale: The scaling factor - :param float tx: The translation on X axis - :param float ty: The translation on Y axis - :param bool expand: Whether the page should be expanded to fit the - dimensions of the page to be merged. - - .. deprecated:: 1.28.0 - - Use :meth:`add_transformation` and :meth:`merge_page` instead. - """ - deprecation_with_replacement( - "page.mergeScaledTranslatedPage(page2, scale, tx, ty, expand)", - "page2.add_transformation(Transformation().scale(scale).translate(tx, ty)); page.merge_page(page2, expand)", - "3.0.0", - ) - op = Transformation().scale(scale, scale).translate(tx, ty) - return self.mergeTransformedPage(page2, op, expand) - - def mergeRotatedScaledTranslatedPage( - self, - page2: "PageObject", - rotation: float, - scale: float, - tx: float, - ty: float, - expand: bool = False, - ) -> None: # pragma: no cover - """ - mergeRotatedScaledTranslatedPage is similar to merge_page, but the - stream to be merged is translated, rotated and scaled by applying a - transformation matrix. - - :param PageObject page2: the page to be merged into this one. Should be - an instance of :class:`PageObject`. 
- :param float tx: The translation on X axis - :param float ty: The translation on Y axis - :param float rotation: The angle of the rotation, in degrees - :param float scale: The scaling factor - :param bool expand: Whether the page should be expanded to fit the - dimensions of the page to be merged. - - .. deprecated:: 1.28.0 - - Use :meth:`add_transformation` and :meth:`merge_page` instead. - """ - deprecation_with_replacement( - "page.mergeRotatedScaledTranslatedPage(page2, rotation, tx, ty, expand)", - "page2.add_transformation(Transformation().rotate(rotation).scale(scale)); page.merge_page(page2, expand)", - "3.0.0", - ) - op = Transformation().rotate(rotation).scale(scale, scale).translate(tx, ty) - self.mergeTransformedPage(page2, op, expand) - - def add_transformation( - self, - ctm: Union[Transformation, CompressedTransformationMatrix], - expand: bool = False, - ) -> None: - """ - Apply a transformation matrix to the page. - - Args: - ctm: A 6-element tuple containing the operands of the - transformation matrix. Alternatively, a - :py:class:`Transformation` - object can be passed. - - See :doc:`/user/cropping-and-transforming`. 
- """ - if isinstance(ctm, Transformation): - ctm = ctm.ctm - content = self.get_contents() - if content is not None: - content = PageObject._add_transformation_matrix(content, self.pdf, ctm) - content = PageObject._push_pop_gs(content, self.pdf) - self[NameObject(PG.CONTENTS)] = content - # if expanding the page to fit a new page, calculate the new media box size - if expand: - corners = [ - self.mediabox.left.as_numeric(), - self.mediabox.bottom.as_numeric(), - self.mediabox.left.as_numeric(), - self.mediabox.top.as_numeric(), - self.mediabox.right.as_numeric(), - self.mediabox.top.as_numeric(), - self.mediabox.right.as_numeric(), - self.mediabox.bottom.as_numeric(), - ] - - ctm = tuple(float(x) for x in ctm) # type: ignore[assignment] - new_x = [ - ctm[0] * corners[i] + ctm[2] * corners[i + 1] + ctm[4] - for i in range(0, 8, 2) - ] - new_y = [ - ctm[1] * corners[i] + ctm[3] * corners[i + 1] + ctm[5] - for i in range(0, 8, 2) - ] - - lowerleft = (min(new_x), min(new_y)) - upperright = (max(new_x), max(new_y)) - lowerleft = (min(corners[0], lowerleft[0]), min(corners[1], lowerleft[1])) - upperright = ( - max(corners[2], upperright[0]), - max(corners[3], upperright[1]), - ) - - self.mediabox.lower_left = lowerleft - self.mediabox.upper_right = upperright - - def addTransformation( - self, ctm: CompressedTransformationMatrix - ) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`add_transformation` instead. - """ - deprecation_with_replacement("addTransformation", "add_transformation", "3.0.0") - self.add_transformation(ctm) - - def scale(self, sx: float, sy: float) -> None: - """ - Scale a page by the given factors by applying a transformation - matrix to its content and updating the page size. - - This updates the mediabox, the cropbox, and the contents - of the page. - - Args: - sx: The scaling factor on horizontal axis. - sy: The scaling factor on vertical axis. 
- """ - self.add_transformation((sx, 0, 0, sy, 0, 0)) - self.cropbox = self.cropbox.scale(sx, sy) - self.artbox = self.artbox.scale(sx, sy) - self.bleedbox = self.bleedbox.scale(sx, sy) - self.trimbox = self.trimbox.scale(sx, sy) - self.mediabox = self.mediabox.scale(sx, sy) - - if PG.ANNOTS in self: - annotations = self[PG.ANNOTS] - if isinstance(annotations, ArrayObject): - for annotation in annotations: - annotation_obj = annotation.get_object() - if ADA.Rect in annotation_obj: - rectangle = annotation_obj[ADA.Rect] - if isinstance(rectangle, ArrayObject): - rectangle[0] = FloatObject(float(rectangle[0]) * sx) - rectangle[1] = FloatObject(float(rectangle[1]) * sy) - rectangle[2] = FloatObject(float(rectangle[2]) * sx) - rectangle[3] = FloatObject(float(rectangle[3]) * sy) - - if PG.VP in self: - viewport = self[PG.VP] - if isinstance(viewport, ArrayObject): - bbox = viewport[0]["/BBox"] - else: - bbox = viewport["/BBox"] # type: ignore - scaled_bbox = RectangleObject( - ( - float(bbox[0]) * sx, - float(bbox[1]) * sy, - float(bbox[2]) * sx, - float(bbox[3]) * sy, - ) - ) - if isinstance(viewport, ArrayObject): - self[NameObject(PG.VP)][NumberObject(0)][ # type: ignore - NameObject("/BBox") - ] = scaled_bbox - else: - self[NameObject(PG.VP)][NameObject("/BBox")] = scaled_bbox # type: ignore - - def scale_by(self, factor: float) -> None: - """ - Scale a page by the given factor by applying a transformation - matrix to its content and updating the page size. - - Args: - factor: The scaling factor (for both X and Y axis). - """ - self.scale(factor, factor) - - def scaleBy(self, factor: float) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`scale_by` instead. - """ - deprecation_with_replacement("scaleBy", "scale_by", "3.0.0") - self.scale(factor, factor) - - def scale_to(self, width: float, height: float) -> None: - """ - Scale a page to the specified dimensions by applying a - transformation matrix to its content and updating the page size. 
- - Args: - width: The new width. - height: The new height. - """ - sx = width / float(self.mediabox.width) - sy = height / float(self.mediabox.height) - self.scale(sx, sy) - - def scaleTo(self, width: float, height: float) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`scale_to` instead. - """ - deprecation_with_replacement("scaleTo", "scale_to", "3.0.0") - self.scale_to(width, height) - - def compress_content_streams(self) -> None: - """ - Compress the size of this page by joining all content streams and - applying a FlateDecode filter. - - However, it is possible that this function will perform no action if - content stream compression becomes "automatic". - """ - content = self.get_contents() - if content is not None: - if not isinstance(content, ContentStream): - content = ContentStream(content, self.pdf) - self[NameObject(PG.CONTENTS)] = content.flate_encode() - - def compressContentStreams(self) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`compress_content_streams` instead. 
- """ - deprecation_with_replacement( - "compressContentStreams", "compress_content_streams", "3.0.0" - ) - self.compress_content_streams() - - def _debug_for_extract(self) -> str: # pragma: no cover - out = "" - for ope, op in ContentStream( - self["/Contents"].get_object(), self.pdf, "bytes" - ).operations: - if op == b"TJ": - s = [x for x in ope[0] if isinstance(x, str)] - else: - s = [] - out += op.decode("utf-8") + " " + "".join(s) + ope.__repr__() + "\n" - out += "\n=============================\n" - try: - for fo in self[PG.RESOURCES]["/Font"]: # type:ignore - out += fo + "\n" - out += self[PG.RESOURCES]["/Font"][fo].__repr__() + "\n" # type:ignore - try: - enc_repr = self[PG.RESOURCES]["/Font"][fo][ # type:ignore - "/Encoding" - ].__repr__() - out += enc_repr + "\n" - except Exception: - pass - try: - out += ( - self[PG.RESOURCES]["/Font"][fo][ # type:ignore - "/ToUnicode" - ] - .get_data() - .decode() - + "\n" - ) - except Exception: - pass - - except KeyError: - out += "No Font\n" - return out - - def _extract_text( - self, - obj: Any, - pdf: Any, - orientations: Tuple[int, ...] = (0, 90, 180, 270), - space_width: float = 200.0, - content_key: Optional[str] = PG.CONTENTS, - visitor_operand_before: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_operand_after: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_text: Optional[Callable[[Any, Any, Any, Any, Any], None]] = None, - ) -> str: - """ - See extract_text for most arguments. 
- - Args: - content_key: indicate the default key where to extract data - None = the object; this allow to reuse the function on XObject - default = "/Content" - """ - text: str = "" - output: str = "" - rtl_dir: bool = False # right-to-left - cmaps: Dict[ - str, - Tuple[ - str, float, Union[str, Dict[int, str]], Dict[str, str], DictionaryObject - ], - ] = {} - try: - objr = obj - while NameObject(PG.RESOURCES) not in objr: - # /Resources can be inherited sometimes so we look to parents - objr = objr["/Parent"].get_object() - # if no parents we will have no /Resources will be available => an exception wil be raised - resources_dict = cast(DictionaryObject, objr[PG.RESOURCES]) - except Exception: - return "" # no resources means no text is possible (no font) we consider the file as not damaged, no need to check for TJ or Tj - if "/Font" in resources_dict: - for f in cast(DictionaryObject, resources_dict["/Font"]): - cmaps[f] = build_char_map(f, space_width, obj) - cmap: Tuple[ - Union[str, Dict[int, str]], Dict[str, str], str, Optional[DictionaryObject] - ] = ( - "charmap", - {}, - "NotInitialized", - None, - ) # (encoding,CMAP,font resource name,dictionary-object of font) - try: - content = ( - obj[content_key].get_object() if isinstance(content_key, str) else obj - ) - if not isinstance(content, ContentStream): - content = ContentStream(content, pdf, "bytes") - except KeyError: # it means no content can be extracted(certainly empty page) - return "" - # Note: we check all strings are TextStringObjects. ByteStringObjects - # are strings where the byte->string encoding was unknown, so adding - # them to the text here would be gibberish. 
- - cm_matrix: List[float] = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0] - cm_stack = [] - tm_matrix: List[float] = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0] - tm_prev: List[float] = [ - 1.0, - 0.0, - 0.0, - 1.0, - 0.0, - 0.0, - ] # will store cm_matrix * tm_matrix - char_scale = 1.0 - space_scale = 1.0 - _space_width: float = 500.0 # will be set correctly at first Tf - TL = 0.0 - font_size = 12.0 # init just in case of - - def mult(m: List[float], n: List[float]) -> List[float]: - return [ - m[0] * n[0] + m[1] * n[2], - m[0] * n[1] + m[1] * n[3], - m[2] * n[0] + m[3] * n[2], - m[2] * n[1] + m[3] * n[3], - m[4] * n[0] + m[5] * n[2] + n[4], - m[4] * n[1] + m[5] * n[3] + n[5], - ] - - def orient(m: List[float]) -> int: - if m[3] > 1e-6: - return 0 - elif m[3] < -1e-6: - return 180 - elif m[1] > 0: - return 90 - else: - return 270 - - def current_spacewidth() -> float: - # return space_scale * _space_width * char_scale - return _space_width / 1000.0 - - def process_operation(operator: bytes, operands: List) -> None: - nonlocal cm_matrix, cm_stack, tm_matrix, tm_prev, output, text, char_scale, space_scale, _space_width, TL, font_size, cmap, orientations, rtl_dir, visitor_text - global CUSTOM_RTL_MIN, CUSTOM_RTL_MAX, CUSTOM_RTL_SPECIAL_CHARS - - check_crlf_space: bool = False - # Table 5.4 page 405 - if operator == b"BT": - tm_matrix = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0] - # tm_prev = tm_matrix - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - # based - # if output != "" and output[-1]!="\n": - # output += "\n" - text = "" - return None - elif operator == b"ET": - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - text = "" - # table 4.7 "Graphics state operators", page 219 - # cm_matrix calculation is a reserved for the moment - elif operator == b"q": - cm_stack.append( - ( - cm_matrix, - cmap, - font_size, - char_scale, - space_scale, - _space_width, - TL, - ) - ) - elif 
operator == b"Q": - try: - ( - cm_matrix, - cmap, - font_size, - char_scale, - space_scale, - _space_width, - TL, - ) = cm_stack.pop() - except Exception: - cm_matrix = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0] - # rtl_dir = False - elif operator == b"cm": - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - text = "" - cm_matrix = mult( - [ - float(operands[0]), - float(operands[1]), - float(operands[2]), - float(operands[3]), - float(operands[4]), - float(operands[5]), - ], - cm_matrix, - ) - # rtl_dir = False - # Table 5.2 page 398 - elif operator == b"Tz": - char_scale = float(operands[0]) / 100.0 - elif operator == b"Tw": - space_scale = 1.0 + float(operands[0]) - elif operator == b"TL": - TL = float(operands[0]) - elif operator == b"Tf": - if text != "": - output += text # .translate(cmap) - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - text = "" - # rtl_dir = False - try: - # charMapTuple: font_type, float(sp_width / 2), encoding, map_dict, font-dictionary - charMapTuple = cmaps[operands[0]] - _space_width = charMapTuple[1] - # current cmap: encoding, map_dict, font resource name (internal name, not the real font-name), - # font-dictionary. The font-dictionary describes the font. - cmap = ( - charMapTuple[2], - charMapTuple[3], - operands[0], - charMapTuple[4], - ) - except KeyError: # font not found - _space_width = unknown_char_map[1] - cmap = ( - unknown_char_map[2], - unknown_char_map[3], - "???" + operands[0], - None, - ) - try: - font_size = float(operands[1]) - except Exception: - pass # keep previous size - # Table 5.5 page 406 - elif operator == b"Td": - check_crlf_space = True - # A special case is a translating only tm: - # tm[0..5] = 1 0 0 1 e f, - # i.e. tm[4] += tx, tm[5] += ty. 
- tx = float(operands[0]) - ty = float(operands[1]) - tm_matrix[4] += tx * tm_matrix[0] + ty * tm_matrix[2] - tm_matrix[5] += tx * tm_matrix[1] + ty * tm_matrix[3] - elif operator == b"Tm": - check_crlf_space = True - tm_matrix = [ - float(operands[0]), - float(operands[1]), - float(operands[2]), - float(operands[3]), - float(operands[4]), - float(operands[5]), - ] - elif operator == b"T*": - check_crlf_space = True - tm_matrix[5] -= TL - - elif operator == b"Tj": - check_crlf_space = True - m = mult(tm_matrix, cm_matrix) - orientation = orient(m) - if orientation in orientations: - if isinstance(operands[0], str): - text += operands[0] - else: - t: str = "" - tt: bytes = ( - encode_pdfdocencoding(operands[0]) - if isinstance(operands[0], str) - else operands[0] - ) - if isinstance(cmap[0], str): - try: - t = tt.decode( - cmap[0], "surrogatepass" - ) # apply str encoding - except Exception: # the data does not match the expectation, we use the alternative ; text extraction may not be good - t = tt.decode( - "utf-16-be" if cmap[0] == "charmap" else "charmap", - "surrogatepass", - ) # apply str encoding - else: # apply dict encoding - t = "".join( - [ - cmap[0][x] if x in cmap[0] else bytes((x,)).decode() - for x in tt - ] - ) - # "\u0590 - \u08FF \uFB50 - \uFDFF" - for x in "".join( - [cmap[1][x] if x in cmap[1] else x for x in t] - ): - xx = ord(x) - # fmt: off - if ( # cases where the current inserting order is kept (punctuation,...) - (xx <= 0x2F) # punctuations but... - or (0x3A <= xx and xx <= 0x40) # numbers (x30-39) - or (0x2000 <= xx and xx <= 0x206F) # upper punctuations.. - or (0x20A0 <= xx and xx <= 0x21FF) # but (numbers) indices/exponents - or xx in CUSTOM_RTL_SPECIAL_CHARS # customized.... 
- ): - text = x + text if rtl_dir else text + x - elif ( # right-to-left characters set - (0x0590 <= xx and xx <= 0x08FF) - or (0xFB1D <= xx and xx <= 0xFDFF) - or (0xFE70 <= xx and xx <= 0xFEFF) - or (CUSTOM_RTL_MIN <= xx and xx <= CUSTOM_RTL_MAX) - ): - # print("<",xx,x) - if not rtl_dir: - rtl_dir = True - # print("RTL",text,"*") - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - text = "" - text = x + text - else: # left-to-right - # print(">",xx,x,end="") - if rtl_dir: - rtl_dir = False - # print("LTR",text,"*") - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - text = "" - text = text + x - # fmt: on - else: - return None - if check_crlf_space: - m = mult(tm_matrix, cm_matrix) - orientation = orient(m) - delta_x = m[4] - tm_prev[4] - delta_y = m[5] - tm_prev[5] - k = math.sqrt(abs(m[0] * m[3]) + abs(m[1] * m[2])) - f = font_size * k - tm_prev = m - if orientation not in orientations: - return None - try: - if orientation == 0: - if delta_y < -0.8 * f: - if (output + text)[-1] != "\n": - output += text + "\n" - if visitor_text is not None: - visitor_text( - text + "\n", - cm_matrix, - tm_matrix, - cmap[3], - font_size, - ) - text = "" - elif ( - abs(delta_y) < f * 0.3 - and abs(delta_x) > current_spacewidth() * f * 15 - ): - if (output + text)[-1] != " ": - text += " " - elif orientation == 180: - if delta_y > 0.8 * f: - if (output + text)[-1] != "\n": - output += text + "\n" - if visitor_text is not None: - visitor_text( - text + "\n", - cm_matrix, - tm_matrix, - cmap[3], - font_size, - ) - text = "" - elif ( - abs(delta_y) < f * 0.3 - and abs(delta_x) > current_spacewidth() * f * 15 - ): - if (output + text)[-1] != " ": - text += " " - elif orientation == 90: - if delta_x > 0.8 * f: - if (output + text)[-1] != "\n": - output += text + "\n" - if visitor_text is not None: - visitor_text( - text + "\n", - cm_matrix, - tm_matrix, - cmap[3], 
- font_size, - ) - text = "" - elif ( - abs(delta_x) < f * 0.3 - and abs(delta_y) > current_spacewidth() * f * 15 - ): - if (output + text)[-1] != " ": - text += " " - elif orientation == 270: - if delta_x < -0.8 * f: - if (output + text)[-1] != "\n": - output += text + "\n" - if visitor_text is not None: - visitor_text( - text + "\n", - cm_matrix, - tm_matrix, - cmap[3], - font_size, - ) - text = "" - elif ( - abs(delta_x) < f * 0.3 - and abs(delta_y) > current_spacewidth() * f * 15 - ): - if (output + text)[-1] != " ": - text += " " - except Exception: - pass - - for operands, operator in content.operations: - if visitor_operand_before is not None: - visitor_operand_before(operator, operands, cm_matrix, tm_matrix) - # multiple operators are defined in here #### - if operator == b"'": - process_operation(b"T*", []) - process_operation(b"Tj", operands) - elif operator == b'"': - process_operation(b"Tw", [operands[0]]) - process_operation(b"Tc", [operands[1]]) - process_operation(b"T*", []) - process_operation(b"Tj", operands[2:]) - elif operator == b"TD": - process_operation(b"TL", [-operands[1]]) - process_operation(b"Td", operands) - elif operator == b"TJ": - for op in operands[0]: - if isinstance(op, (str, bytes)): - process_operation(b"Tj", [op]) - if isinstance(op, (int, float, NumberObject, FloatObject)): - if ( - (abs(float(op)) >= _space_width) - and (len(text) > 0) - and (text[-1] != " ") - ): - process_operation(b"Tj", [" "]) - elif operator == b"Do": - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - try: - if output[-1] != "\n": - output += "\n" - if visitor_text is not None: - visitor_text("\n", cm_matrix, tm_matrix, cmap[3], font_size) - except IndexError: - pass - try: - xobj = resources_dict["/XObject"] - if xobj[operands[0]]["/Subtype"] != "/Image": # type: ignore - # output += text - text = self.extract_xform_text( - xobj[operands[0]], # type: ignore - orientations, - space_width, - 
visitor_operand_before, - visitor_operand_after, - visitor_text, - ) - output += text - if visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - except Exception: - logger_warning( - f" impossible to decode XFormObject {operands[0]}", - __name__, - ) - finally: - text = "" - else: - process_operation(operator, operands) - if visitor_operand_after is not None: - visitor_operand_after(operator, operands, cm_matrix, tm_matrix) - output += text # just in case of - if text != "" and visitor_text is not None: - visitor_text(text, cm_matrix, tm_matrix, cmap[3], font_size) - return output - - def extract_text( - self, - *args: Any, - Tj_sep: str = None, - TJ_sep: str = None, - orientations: Union[int, Tuple[int, ...]] = (0, 90, 180, 270), - space_width: float = 200.0, - visitor_operand_before: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_operand_after: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_text: Optional[Callable[[Any, Any, Any, Any, Any], None]] = None, - ) -> str: - """ - Locate all text drawing commands, in the order they are provided in the - content stream, and extract the text. - - This works well for some PDF files, but poorly for others, depending on - the generator used. This will be refined in the future. - - Do not rely on the order of text coming out of this function, as it - will change if this function is made more sophisticated. - - Arabic, Hebrew,... are extracted in the good order. - If required an custom RTL range of characters can be defined; see function set_custom_rtl - - Additionally you can provide visitor-methods to get informed on all operands and all text-objects. - For example in some PDF files this can be useful to parse tables. - - Args: - Tj_sep: Deprecated. Kept for compatibility until PyPDF2 4.0.0 - TJ_sep: Deprecated. 
Kept for compatibility until PyPDF2 4.0.0 - orientations: list of orientations text_extraction will look for - default = (0, 90, 180, 270) - note: currently only 0(Up),90(turned Left), 180(upside Down), - 270 (turned Right) - space_width: force default space width - if not extracted from font (default: 200) - visitor_operand_before: function to be called before processing an operand. - It has four arguments: operand, operand-arguments, - current transformation matrix and text matrix. - visitor_operand_after: function to be called after processing an operand. - It has four arguments: operand, operand-arguments, - current transformation matrix and text matrix. - visitor_text: function to be called when extracting some text at some position. - It has five arguments: text, current transformation matrix, - text matrix, font-dictionary and font-size. - The font-dictionary may be None in case of unknown fonts. - If not None it may e.g. contain key "/BaseFont" with value "/Arial,Bold". - - Returns: - The extracted text - """ - if len(args) >= 1: - if isinstance(args[0], str): - Tj_sep = args[0] - if len(args) >= 2: - if isinstance(args[1], str): - TJ_sep = args[1] - else: - raise TypeError(f"Invalid positional parameter {args[1]}") - if len(args) >= 3: - if isinstance(args[2], (tuple, int)): - orientations = args[2] - else: - raise TypeError(f"Invalid positional parameter {args[2]}") - if len(args) >= 4: - if isinstance(args[3], (float, int)): - space_width = args[3] - else: - raise TypeError(f"Invalid positional parameter {args[3]}") - elif isinstance(args[0], (tuple, int)): - orientations = args[0] - if len(args) >= 2: - if isinstance(args[1], (float, int)): - space_width = args[1] - else: - raise TypeError(f"Invalid positional parameter {args[1]}") - else: - raise TypeError(f"Invalid positional parameter {args[0]}") - if Tj_sep is not None or TJ_sep is not None: - warnings.warn( - "parameters Tj_Sep, TJ_sep depreciated, and will be removed in PyPDF2 4.0.0.", - 
DeprecationWarning, - ) - - if isinstance(orientations, int): - orientations = (orientations,) - - return self._extract_text( - self, - self.pdf, - orientations, - space_width, - PG.CONTENTS, - visitor_operand_before, - visitor_operand_after, - visitor_text, - ) - - def extract_xform_text( - self, - xform: EncodedStreamObject, - orientations: Tuple[int, ...] = (0, 90, 270, 360), - space_width: float = 200.0, - visitor_operand_before: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_operand_after: Optional[Callable[[Any, Any, Any, Any], None]] = None, - visitor_text: Optional[Callable[[Any, Any, Any, Any, Any], None]] = None, - ) -> str: - """ - Extract text from an XObject. - - Args: - space_width: force default space width (if not extracted from font (default 200) - - Returns: - The extracted text - """ - return self._extract_text( - xform, - self.pdf, - orientations, - space_width, - None, - visitor_operand_before, - visitor_operand_after, - visitor_text, - ) - - def extractText( - self, Tj_sep: str = "", TJ_sep: str = "" - ) -> str: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`extract_text` instead. - """ - deprecation_with_replacement("extractText", "extract_text", "3.0.0") - return self.extract_text() - - def _get_fonts(self) -> Tuple[Set[str], Set[str]]: - """ - Get the names of embedded fonts and unembedded fonts. - - :return: (Set of embedded fonts, set of unembedded fonts) - """ - obj = self.get_object() - assert isinstance(obj, DictionaryObject) - fonts, embedded = _get_fonts_walk(cast(DictionaryObject, obj[PG.RESOURCES])) - unembedded = fonts - embedded - return embedded, unembedded - - mediabox = _create_rectangle_accessor(PG.MEDIABOX, ()) - """ - A :class:`RectangleObject`, expressed in default user space units, - defining the boundaries of the physical medium on which the page is - intended to be displayed or printed. - """ - - @property - def mediaBox(self) -> RectangleObject: # pragma: no cover - """ - .. 
deprecated:: 1.28.0 - - Use :py:attr:`mediabox` instead. - """ - deprecation_with_replacement("mediaBox", "mediabox", "3.0.0") - return self.mediabox - - @mediaBox.setter - def mediaBox(self, value: RectangleObject) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`mediabox` instead. - """ - deprecation_with_replacement("mediaBox", "mediabox", "3.0.0") - self.mediabox = value - - cropbox = _create_rectangle_accessor("/CropBox", (PG.MEDIABOX,)) - """ - A :class:`RectangleObject`, expressed in default user space units, - defining the visible region of default user space. When the page is - displayed or printed, its contents are to be clipped (cropped) to this - rectangle and then imposed on the output medium in some - implementation-defined manner. Default value: same as :attr:`mediabox`. - """ - - @property - def cropBox(self) -> RectangleObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`cropbox` instead. - """ - deprecation_with_replacement("cropBox", "cropbox", "3.0.0") - return self.cropbox - - @cropBox.setter - def cropBox(self, value: RectangleObject) -> None: # pragma: no cover - deprecation_with_replacement("cropBox", "cropbox", "3.0.0") - self.cropbox = value - - bleedbox = _create_rectangle_accessor("/BleedBox", ("/CropBox", PG.MEDIABOX)) - """ - A :class:`RectangleObject`, expressed in default user space units, - defining the region to which the contents of the page should be clipped - when output in a production environment. - """ - - @property - def bleedBox(self) -> RectangleObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`bleedbox` instead. 
- """ - deprecation_with_replacement("bleedBox", "bleedbox", "3.0.0") - return self.bleedbox - - @bleedBox.setter - def bleedBox(self, value: RectangleObject) -> None: # pragma: no cover - deprecation_with_replacement("bleedBox", "bleedbox", "3.0.0") - self.bleedbox = value - - trimbox = _create_rectangle_accessor("/TrimBox", ("/CropBox", PG.MEDIABOX)) - """ - A :class:`RectangleObject`, expressed in default user space units, - defining the intended dimensions of the finished page after trimming. - """ - - @property - def trimBox(self) -> RectangleObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`trimbox` instead. - """ - deprecation_with_replacement("trimBox", "trimbox", "3.0.0") - return self.trimbox - - @trimBox.setter - def trimBox(self, value: RectangleObject) -> None: # pragma: no cover - deprecation_with_replacement("trimBox", "trimbox", "3.0.0") - self.trimbox = value - - artbox = _create_rectangle_accessor("/ArtBox", ("/CropBox", PG.MEDIABOX)) - """ - A :class:`RectangleObject`, expressed in default user space units, - defining the extent of the page's meaningful content as intended by the - page's creator. - """ - - @property - def artBox(self) -> RectangleObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`artbox` instead. - """ - deprecation_with_replacement("artBox", "artbox", "3.0.0") - return self.artbox - - @artBox.setter - def artBox(self, value: RectangleObject) -> None: # pragma: no cover - deprecation_with_replacement("artBox", "artbox", "3.0.0") - self.artbox = value - - @property - def annotations(self) -> Optional[ArrayObject]: - if "/Annots" not in self: - return None - else: - return cast(ArrayObject, self["/Annots"]) - - @annotations.setter - def annotations(self, value: Optional[ArrayObject]) -> None: - """ - Set the annotations array of the page. - - Typically you don't want to set this value, but append to it. 
- If you append to it, don't forget to add the object first to the writer - and only add the indirect object. - """ - if value is None: - del self[NameObject("/Annots")] - else: - self[NameObject("/Annots")] = value - - -class _VirtualList: - def __init__( - self, - length_function: Callable[[], int], - get_function: Callable[[int], PageObject], - ) -> None: - self.length_function = length_function - self.get_function = get_function - self.current = -1 - - def __len__(self) -> int: - return self.length_function() - - def __getitem__(self, index: int) -> PageObject: - if isinstance(index, slice): - indices = range(*index.indices(len(self))) - cls = type(self) - return cls(indices.__len__, lambda idx: self[indices[idx]]) # type: ignore - if not isinstance(index, int): - raise TypeError("sequence indices must be integers") - len_self = len(self) - if index < 0: - # support negative indexes - index = len_self + index - if index < 0 or index >= len_self: - raise IndexError("sequence index out of range") - return self.get_function(index) - - def __iter__(self) -> Iterator[PageObject]: - for i in range(len(self)): - yield self[i] - - -def _get_fonts_walk( - obj: DictionaryObject, - fnt: Optional[Set[str]] = None, - emb: Optional[Set[str]] = None, -) -> Tuple[Set[str], Set[str]]: - """ - If there is a key called 'BaseFont', that is a font that is used in the document. - If there is a key called 'FontName' and another key in the same dictionary object - that is called 'FontFilex' (where x is null, 2, or 3), then that fontname is - embedded. - - We create and add to two sets, fnt = fonts used and emb = fonts embedded. 
- """ - if fnt is None: - fnt = set() - if emb is None: - emb = set() - if not hasattr(obj, "keys"): - return set(), set() - fontkeys = ("/FontFile", "/FontFile2", "/FontFile3") - if "/BaseFont" in obj: - fnt.add(cast(str, obj["/BaseFont"])) - if "/FontName" in obj: - if [x for x in fontkeys if x in obj]: # test to see if there is FontFile - emb.add(cast(str, obj["/FontName"])) - - for key in obj.keys(): - _get_fonts_walk(cast(DictionaryObject, obj[key]), fnt, emb) - - return fnt, emb # return the sets for each page diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_protocols.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_protocols.py deleted file mode 100644 index 89c80f9a..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_protocols.py +++ /dev/null @@ -1,62 +0,0 @@ -"""Helpers for working with PDF types.""" - -from pathlib import Path -from typing import IO, Any, Dict, List, Optional, Tuple, Union - -try: - # Python 3.8+: https://peps.python.org/pep-0586 - from typing import Protocol # type: ignore[attr-defined] -except ImportError: - from typing_extensions import Protocol # type: ignore[misc] - -from ._utils import StrByteType - - -class PdfObjectProtocol(Protocol): - indirect_reference: Any - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> Any: - ... - - def _reference_clone(self, clone: Any, pdf_dest: Any) -> Any: - ... - - def get_object(self) -> Optional["PdfObjectProtocol"]: - ... - - -class PdfReaderProtocol(Protocol): # pragma: no cover - @property - def pdf_header(self) -> str: - ... - - @property - def strict(self) -> bool: - ... - - @property - def xref(self) -> Dict[int, Dict[int, Any]]: - ... - - @property - def pages(self) -> List[Any]: - ... - - def get_object(self, indirect_reference: Any) -> Optional[PdfObjectProtocol]: - ... 
- - -class PdfWriterProtocol(Protocol): # pragma: no cover - _objects: List[Any] - _id_translated: Dict[int, Dict[int, int]] - - def get_object(self, indirect_reference: Any) -> Optional[PdfObjectProtocol]: - ... - - def write(self, stream: Union[Path, StrByteType]) -> Tuple[bool, IO]: - ... diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_reader.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_reader.py deleted file mode 100644 index 0a914476..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_reader.py +++ /dev/null @@ -1,1977 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# Copyright (c) 2007, Ashish Kulkarni -# -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -import os -import re -import struct -import zlib -from datetime import datetime -from io import BytesIO -from pathlib import Path -from typing import ( - Any, - Callable, - Dict, - Iterable, - List, - Optional, - Tuple, - Union, - cast, -) - -from ._encryption import Encryption, PasswordType -from ._page import PageObject, _VirtualList -from ._utils import ( - StrByteType, - StreamType, - b_, - deprecate_no_replacement, - deprecation_no_replacement, - deprecation_with_replacement, - logger_warning, - read_non_whitespace, - read_previous_line, - read_until_whitespace, - skip_over_comment, - skip_over_whitespace, -) -from .constants import CatalogAttributes as CA -from .constants import CatalogDictionary as CD -from .constants import CheckboxRadioButtonAttributes -from .constants import Core as CO -from .constants import DocumentInformationAttributes as DI -from .constants import FieldDictionaryAttributes, GoToActionArguments -from .constants import PageAttributes as PG -from .constants import PagesAttributes as PA -from .constants import TrailerKeys as TK -from .errors import ( - EmptyFileError, - FileNotDecryptedError, - PdfReadError, - PdfStreamError, - WrongPasswordError, -) -from .generic import ( - ArrayObject, - ContentStream, - DecodedStreamObject, - Destination, - DictionaryObject, - EncodedStreamObject, - Field, - Fit, - FloatObject, - IndirectObject, - NameObject, - NullObject, - NumberObject, - PdfObject, - TextStringObject, - 
TreeObject, - read_object, -) -from .types import OutlineType, PagemodeType -from .xmp import XmpInformation - - -def convert_to_int(d: bytes, size: int) -> Union[int, Tuple[Any, ...]]: - if size > 8: - raise PdfReadError("invalid size in convert_to_int") - d = b"\x00\x00\x00\x00\x00\x00\x00\x00" + d - d = d[-8:] - return struct.unpack(">q", d)[0] - - -def convertToInt( - d: bytes, size: int -) -> Union[int, Tuple[Any, ...]]: # pragma: no cover - deprecation_with_replacement("convertToInt", "convert_to_int") - return convert_to_int(d, size) - - -class DocumentInformation(DictionaryObject): - """ - A class representing the basic document metadata provided in a PDF File. - This class is accessible through :py:class:`PdfReader.metadata`. - - All text properties of the document metadata have - *two* properties, eg. author and author_raw. The non-raw property will - always return a ``TextStringObject``, making it ideal for a case where - the metadata is being displayed. The raw property can sometimes return - a ``ByteStringObject``, if PyPDF2 was unable to decode the string's - text encoding; this requires additional safety in the caller and - therefore is not as commonly accessed. - """ - - def __init__(self) -> None: - DictionaryObject.__init__(self) - - def _get_text(self, key: str) -> Optional[str]: - retval = self.get(key, None) - if isinstance(retval, TextStringObject): - return retval - return None - - def getText(self, key: str) -> Optional[str]: # pragma: no cover - """ - The text value of the specified key or None. - - .. deprecated:: 1.28.0 - - Use the attributes (e.g. :py:attr:`title` / :py:attr:`author`). - """ - deprecation_no_replacement("getText", "3.0.0") - return self._get_text(key) - - @property - def title(self) -> Optional[str]: - """ - Read-only property accessing the document's **title**. - - Returns a unicode string (``TextStringObject``) or ``None`` - if the title is not specified. 
- """ - return ( - self._get_text(DI.TITLE) or self.get(DI.TITLE).get_object() # type: ignore - if self.get(DI.TITLE) - else None - ) - - @property - def title_raw(self) -> Optional[str]: - """The "raw" version of title; can return a ``ByteStringObject``.""" - return self.get(DI.TITLE) - - @property - def author(self) -> Optional[str]: - """ - Read-only property accessing the document's **author**. - - Returns a unicode string (``TextStringObject``) or ``None`` - if the author is not specified. - """ - return self._get_text(DI.AUTHOR) - - @property - def author_raw(self) -> Optional[str]: - """The "raw" version of author; can return a ``ByteStringObject``.""" - return self.get(DI.AUTHOR) - - @property - def subject(self) -> Optional[str]: - """ - Read-only property accessing the document's **subject**. - - Returns a unicode string (``TextStringObject``) or ``None`` - if the subject is not specified. - """ - return self._get_text(DI.SUBJECT) - - @property - def subject_raw(self) -> Optional[str]: - """The "raw" version of subject; can return a ``ByteStringObject``.""" - return self.get(DI.SUBJECT) - - @property - def creator(self) -> Optional[str]: - """ - Read-only property accessing the document's **creator**. - - If the document was converted to PDF from another format, this is the - name of the application (e.g. OpenOffice) that created the original - document from which it was converted. Returns a unicode string - (``TextStringObject``) or ``None`` if the creator is not specified. - """ - return self._get_text(DI.CREATOR) - - @property - def creator_raw(self) -> Optional[str]: - """The "raw" version of creator; can return a ``ByteStringObject``.""" - return self.get(DI.CREATOR) - - @property - def producer(self) -> Optional[str]: - """ - Read-only property accessing the document's **producer**. - - If the document was converted to PDF from another format, this is - the name of the application (for example, OSX Quartz) that converted - it to PDF. 
Returns a unicode string (``TextStringObject``) - or ``None`` if the producer is not specified. - """ - return self._get_text(DI.PRODUCER) - - @property - def producer_raw(self) -> Optional[str]: - """The "raw" version of producer; can return a ``ByteStringObject``.""" - return self.get(DI.PRODUCER) - - @property - def creation_date(self) -> Optional[datetime]: - """ - Read-only property accessing the document's **creation date**. - """ - text = self._get_text(DI.CREATION_DATE) - if text is None: - return None - return datetime.strptime(text.replace("'", ""), "D:%Y%m%d%H%M%S%z") - - @property - def creation_date_raw(self) -> Optional[str]: - """ - The "raw" version of creation date; can return a ``ByteStringObject``. - - Typically in the format D:YYYYMMDDhhmmss[+-]hh'mm where the suffix is the - offset from UTC. - """ - return self.get(DI.CREATION_DATE) - - @property - def modification_date(self) -> Optional[datetime]: - """ - Read-only property accessing the document's **modification date**. - - The date and time the document was most recently modified. - """ - text = self._get_text(DI.MOD_DATE) - if text is None: - return None - return datetime.strptime(text.replace("'", ""), "D:%Y%m%d%H%M%S%z") - - @property - def modification_date_raw(self) -> Optional[str]: - """ - The "raw" version of modification date; can return a ``ByteStringObject``. - - Typically in the format D:YYYYMMDDhhmmss[+-]hh'mm where the suffix is the - offset from UTC. - """ - return self.get(DI.MOD_DATE) - - -class PdfReader: - """ - Initialize a PdfReader object. - - This operation can take some time, as the PDF stream's cross-reference - tables are read into memory. - - :param stream: A File object or an object that supports the standard read - and seek methods similar to a File object. Could also be a - string representing a path to a PDF file. - :param bool strict: Determines whether user should be warned of all - problems and also causes some correctable problems to be fatal. 
- Defaults to ``False``. - :param None/str/bytes password: Decrypt PDF file at initialization. If the - password is None, the file will not be decrypted. - Defaults to ``None`` - """ - - def __init__( - self, - stream: Union[StrByteType, Path], - strict: bool = False, - password: Union[None, str, bytes] = None, - ) -> None: - self.strict = strict - self.flattened_pages: Optional[List[PageObject]] = None - self.resolved_objects: Dict[Tuple[Any, Any], Optional[PdfObject]] = {} - self.xref_index = 0 - self._page_id2num: Optional[ - Dict[Any, Any] - ] = None # map page indirect_reference number to Page Number - if hasattr(stream, "mode") and "b" not in stream.mode: # type: ignore - logger_warning( - "PdfReader stream/file object is not in binary mode. " - "It may not be read correctly.", - __name__, - ) - if isinstance(stream, (str, Path)): - with open(stream, "rb") as fh: - stream = BytesIO(fh.read()) - self.read(stream) - self.stream = stream - - self._override_encryption = False - self._encryption: Optional[Encryption] = None - if self.is_encrypted: - self._override_encryption = True - # Some documents may not have a /ID, use two empty - # byte strings instead. 
Solves - # https://github.com/mstamy2/PyPDF2/issues/608 - id_entry = self.trailer.get(TK.ID) - id1_entry = id_entry[0].get_object().original_bytes if id_entry else b"" - encrypt_entry = cast( - DictionaryObject, self.trailer[TK.ENCRYPT].get_object() - ) - self._encryption = Encryption.read(encrypt_entry, id1_entry) - - # try empty password if no password provided - pwd = password if password is not None else b"" - if ( - self._encryption.verify(pwd) == PasswordType.NOT_DECRYPTED - and password is not None - ): - # raise if password provided - raise WrongPasswordError("Wrong password") - self._override_encryption = False - else: - if password is not None: - raise PdfReadError("Not encrypted file") - - @property - def pdf_header(self) -> str: - # TODO: Make this return a bytes object for consistency - # but that needs a deprecation - loc = self.stream.tell() - self.stream.seek(0, 0) - pdf_file_version = self.stream.read(8).decode("utf-8") - self.stream.seek(loc, 0) # return to where it was - return pdf_file_version - - @property - def metadata(self) -> Optional[DocumentInformation]: - """ - Retrieve the PDF file's document information dictionary, if it exists. - Note that some PDF files use metadata streams instead of docinfo - dictionaries, and these metadata streams will not be accessed by this - function. - - :return: the document information of this PDF file - """ - if TK.INFO not in self.trailer: - return None - obj = self.trailer[TK.INFO] - retval = DocumentInformation() - if isinstance(obj, type(None)): - raise PdfReadError( - "trailer not found or does not point to document information directory" - ) - retval.update(obj) # type: ignore - return retval - - def getDocumentInfo(self) -> Optional[DocumentInformation]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use the attribute :py:attr:`metadata` instead. 
- """ - deprecation_with_replacement("getDocumentInfo", "metadata", "3.0.0") - return self.metadata - - @property - def documentInfo(self) -> Optional[DocumentInformation]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use the attribute :py:attr:`metadata` instead. - """ - deprecation_with_replacement("documentInfo", "metadata", "3.0.0") - return self.metadata - - @property - def xmp_metadata(self) -> Optional[XmpInformation]: - """ - XMP (Extensible Metadata Platform) data - - :return: a :class:`XmpInformation` - instance that can be used to access XMP metadata from the document. - or ``None`` if no metadata was found on the document root. - """ - try: - self._override_encryption = True - return self.trailer[TK.ROOT].xmp_metadata # type: ignore - finally: - self._override_encryption = False - - def getXmpMetadata(self) -> Optional[XmpInformation]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use the attribute :py:attr:`xmp_metadata` instead. - """ - deprecation_with_replacement("getXmpMetadata", "xmp_metadata", "3.0.0") - return self.xmp_metadata - - @property - def xmpMetadata(self) -> Optional[XmpInformation]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use the attribute :py:attr:`xmp_metadata` instead. - """ - deprecation_with_replacement("xmpMetadata", "xmp_metadata", "3.0.0") - return self.xmp_metadata - - def _get_num_pages(self) -> int: - """ - Calculate the number of pages in this PDF file. - - :return: number of pages - :raises PdfReadError: if file is encrypted and restrictions prevent - this action. - """ - # Flattened pages will not work on an Encrypted PDF; - # the PDF file's page count is used in this case. Otherwise, - # the original method (flattened page count) is used. 
- if self.is_encrypted: - return self.trailer[TK.ROOT]["/Pages"]["/Count"] # type: ignore - else: - if self.flattened_pages is None: - self._flatten() - return len(self.flattened_pages) # type: ignore - - def getNumPages(self) -> int: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :code:`len(reader.pages)` instead. - """ - deprecation_with_replacement("reader.getNumPages", "len(reader.pages)", "3.0.0") - return self._get_num_pages() - - @property - def numPages(self) -> int: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :code:`len(reader.pages)` instead. - """ - deprecation_with_replacement("reader.numPages", "len(reader.pages)", "3.0.0") - return self._get_num_pages() - - def getPage(self, pageNumber: int) -> PageObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :code:`reader.pages[page_number]` instead. - """ - deprecation_with_replacement( - "reader.getPage(pageNumber)", "reader.pages[page_number]", "3.0.0" - ) - return self._get_page(pageNumber) - - def _get_page(self, page_number: int) -> PageObject: - """ - Retrieve a page by number from this PDF file. - - :param int page_number: The page number to retrieve - (pages begin at zero) - :return: a :class:`PageObject` instance. - """ - # ensure that we're not trying to access an encrypted PDF - # assert not self.trailer.has_key(TK.ENCRYPT) - if self.flattened_pages is None: - self._flatten() - assert self.flattened_pages is not None, "hint for mypy" - return self.flattened_pages[page_number] - - @property - def namedDestinations(self) -> Dict[str, Any]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`named_destinations` instead. 
- """ - deprecation_with_replacement("namedDestinations", "named_destinations", "3.0.0") - return self.named_destinations - - @property - def named_destinations(self) -> Dict[str, Any]: - """ - A read-only dictionary which maps names to - :class:`Destinations` - """ - return self._get_named_destinations() - - # A select group of relevant field attributes. For the complete list, - # see section 8.6.2 of the PDF 1.7 reference. - - def get_fields( - self, - tree: Optional[TreeObject] = None, - retval: Optional[Dict[Any, Any]] = None, - fileobj: Optional[Any] = None, - ) -> Optional[Dict[str, Any]]: - """ - Extract field data if this PDF contains interactive form fields. - - The *tree* and *retval* parameters are for recursive use. - - :param fileobj: A file object (usually a text file) to write - a report to on all interactive form fields found. - :return: A dictionary where each key is a field name, and each - value is a :class:`Field` object. By - default, the mapping name is used for keys. - ``None`` if form data could not be located. 
- """ - field_attributes = FieldDictionaryAttributes.attributes_dict() - field_attributes.update(CheckboxRadioButtonAttributes.attributes_dict()) - if retval is None: - retval = {} - catalog = cast(DictionaryObject, self.trailer[TK.ROOT]) - # get the AcroForm tree - if CD.ACRO_FORM in catalog: - tree = cast(Optional[TreeObject], catalog[CD.ACRO_FORM]) - else: - return None - if tree is None: - return retval - self._check_kids(tree, retval, fileobj) - for attr in field_attributes: - if attr in tree: - # Tree is a field - self._build_field(tree, retval, fileobj, field_attributes) - break - - if "/Fields" in tree: - fields = cast(ArrayObject, tree["/Fields"]) - for f in fields: - field = f.get_object() - self._build_field(field, retval, fileobj, field_attributes) - - return retval - - def getFields( - self, - tree: Optional[TreeObject] = None, - retval: Optional[Dict[Any, Any]] = None, - fileobj: Optional[Any] = None, - ) -> Optional[Dict[str, Any]]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_fields` instead. 
- """ - deprecation_with_replacement("getFields", "get_fields", "3.0.0") - return self.get_fields(tree, retval, fileobj) - - def _build_field( - self, - field: Union[TreeObject, DictionaryObject], - retval: Dict[Any, Any], - fileobj: Any, - field_attributes: Any, - ) -> None: - self._check_kids(field, retval, fileobj) - try: - key = field["/TM"] - except KeyError: - try: - key = field["/T"] - except KeyError: - # Ignore no-name field for now - return - if fileobj: - self._write_field(fileobj, field, field_attributes) - fileobj.write("\n") - retval[key] = Field(field) - - def _check_kids( - self, tree: Union[TreeObject, DictionaryObject], retval: Any, fileobj: Any - ) -> None: - if PA.KIDS in tree: - # recurse down the tree - for kid in tree[PA.KIDS]: # type: ignore - self.get_fields(kid.get_object(), retval, fileobj) - - def _write_field(self, fileobj: Any, field: Any, field_attributes: Any) -> None: - field_attributes_tuple = FieldDictionaryAttributes.attributes() - field_attributes_tuple = ( - field_attributes_tuple + CheckboxRadioButtonAttributes.attributes() - ) - - for attr in field_attributes_tuple: - if attr in ( - FieldDictionaryAttributes.Kids, - FieldDictionaryAttributes.AA, - ): - continue - attr_name = field_attributes[attr] - try: - if attr == FieldDictionaryAttributes.FT: - # Make the field type value more clear - types = { - "/Btn": "Button", - "/Tx": "Text", - "/Ch": "Choice", - "/Sig": "Signature", - } - if field[attr] in types: - fileobj.write(attr_name + ": " + types[field[attr]] + "\n") - elif attr == FieldDictionaryAttributes.Parent: - # Let's just write the name of the parent - try: - name = field[attr][FieldDictionaryAttributes.TM] - except KeyError: - name = field[attr][FieldDictionaryAttributes.T] - fileobj.write(attr_name + ": " + name + "\n") - else: - fileobj.write(attr_name + ": " + str(field[attr]) + "\n") - except KeyError: - # Field attribute is N/A or unknown, so don't write anything - pass - - def get_form_text_fields(self) -> 
Dict[str, Any]: - """ - Retrieve form fields from the document with textual data. - - The key is the name of the form field, the value is the content of the - field. - - If the document contains multiple form fields with the same name, the - second and following will get the suffix _2, _3, ... - """ - # Retrieve document form fields - formfields = self.get_fields() - if formfields is None: - return {} - return { - formfields[field]["/T"]: formfields[field].get("/V") - for field in formfields - if formfields[field].get("/FT") == "/Tx" - } - - def getFormTextFields(self) -> Dict[str, Any]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_form_text_fields` instead. - """ - deprecation_with_replacement( - "getFormTextFields", "get_form_text_fields", "3.0.0" - ) - return self.get_form_text_fields() - - def _get_named_destinations( - self, - tree: Union[TreeObject, None] = None, - retval: Optional[Any] = None, - ) -> Dict[str, Any]: - """ - Retrieve the named destinations present in the document. - - :return: a dictionary which maps names to - :class:`Destinations`. 
- """ - if retval is None: - retval = {} - catalog = cast(DictionaryObject, self.trailer[TK.ROOT]) - - # get the name tree - if CA.DESTS in catalog: - tree = cast(TreeObject, catalog[CA.DESTS]) - elif CA.NAMES in catalog: - names = cast(DictionaryObject, catalog[CA.NAMES]) - if CA.DESTS in names: - tree = cast(TreeObject, names[CA.DESTS]) - - if tree is None: - return retval - - if PA.KIDS in tree: - # recurse down the tree - for kid in cast(ArrayObject, tree[PA.KIDS]): - self._get_named_destinations(kid.get_object(), retval) - # TABLE 3.33 Entries in a name tree node dictionary (PDF 1.7 specs) - elif CA.NAMES in tree: # KIDS and NAMES are exclusives (PDF 1.7 specs p 162) - names = cast(DictionaryObject, tree[CA.NAMES]) - for i in range(0, len(names), 2): - key = cast(str, names[i].get_object()) - value = names[i + 1].get_object() - if isinstance(value, DictionaryObject) and "/D" in value: - value = value["/D"] - dest = self._build_destination(key, value) # type: ignore - if dest is not None: - retval[key] = dest - else: # case where Dests is in root catalog (PDF 1.7 specs, Β§2 about PDF1.1 - for k__, v__ in tree.items(): - val = v__.get_object() - dest = self._build_destination(k__, val) - if dest is not None: - retval[k__] = dest - return retval - - def getNamedDestinations( - self, - tree: Union[TreeObject, None] = None, - retval: Optional[Any] = None, - ) -> Dict[str, Any]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`named_destinations` instead. - """ - deprecation_with_replacement( - "getNamedDestinations", "named_destinations", "3.0.0" - ) - return self._get_named_destinations(tree, retval) - - @property - def outline(self) -> OutlineType: - """ - Read-only property for the outline (i.e., a collection of 'outline items' - which are also known as 'bookmarks') present in the document. - - :return: a nested list of :class:`Destinations`. 
- """ - return self._get_outline() - - @property - def outlines(self) -> OutlineType: # pragma: no cover - """ - .. deprecated:: 2.9.0 - - Use :py:attr:`outline` instead. - """ - deprecation_with_replacement("outlines", "outline", "3.0.0") - return self.outline - - def _get_outline( - self, node: Optional[DictionaryObject] = None, outline: Optional[Any] = None - ) -> OutlineType: - if outline is None: - outline = [] - catalog = cast(DictionaryObject, self.trailer[TK.ROOT]) - - # get the outline dictionary and named destinations - if CO.OUTLINES in catalog: - lines = cast(DictionaryObject, catalog[CO.OUTLINES]) - - if isinstance(lines, NullObject): - return outline - - # TABLE 8.3 Entries in the outline dictionary - if lines is not None and "/First" in lines: - node = cast(DictionaryObject, lines["/First"]) - self._namedDests = self._get_named_destinations() - - if node is None: - return outline - - # see if there are any more outline items - while True: - outline_obj = self._build_outline_item(node) - if outline_obj: - outline.append(outline_obj) - - # check for sub-outline - if "/First" in node: - sub_outline: List[Any] = [] - self._get_outline(cast(DictionaryObject, node["/First"]), sub_outline) - if sub_outline: - outline.append(sub_outline) - - if "/Next" not in node: - break - node = cast(DictionaryObject, node["/Next"]) - - return outline - - def getOutlines( - self, node: Optional[DictionaryObject] = None, outline: Optional[Any] = None - ) -> OutlineType: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`outline` instead. - """ - deprecation_with_replacement("getOutlines", "outline", "3.0.0") - return self._get_outline(node, outline) - - @property - def threads(self) -> Optional[ArrayObject]: - """ - Read-only property for the list of threads see Β§8.3.2 from PDF 1.7 spec - - :return: an Array of Dictionnaries with "/F" and "/I" properties - or None if no articles. 
- """ - catalog = cast(DictionaryObject, self.trailer[TK.ROOT]) - if CO.THREADS in catalog: - return cast("ArrayObject", catalog[CO.THREADS]) - else: - return None - - def _get_page_number_by_indirect( - self, indirect_reference: Union[None, int, NullObject, IndirectObject] - ) -> int: - """Generate _page_id2num""" - if self._page_id2num is None: - self._page_id2num = { - x.indirect_reference.idnum: i for i, x in enumerate(self.pages) # type: ignore - } - - if indirect_reference is None or isinstance(indirect_reference, NullObject): - return -1 - if isinstance(indirect_reference, int): - idnum = indirect_reference - else: - idnum = indirect_reference.idnum - assert self._page_id2num is not None, "hint for mypy" - ret = self._page_id2num.get(idnum, -1) - return ret - - def get_page_number(self, page: PageObject) -> int: - """ - Retrieve page number of a given PageObject - - :param PageObject page: The page to get page number. Should be - an instance of :class:`PageObject` - :return: the page number or -1 if page not found - """ - return self._get_page_number_by_indirect(page.indirect_reference) - - def getPageNumber(self, page: PageObject) -> int: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_page_number` instead. - """ - deprecation_with_replacement("getPageNumber", "get_page_number", "3.0.0") - return self.get_page_number(page) - - def get_destination_page_number(self, destination: Destination) -> int: - """ - Retrieve page number of a given Destination object. - - :param Destination destination: The destination to get page number. - :return: the page number or -1 if page not found - """ - return self._get_page_number_by_indirect(destination.page) - - def getDestinationPageNumber( - self, destination: Destination - ) -> int: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_destination_page_number` instead. 
- """ - deprecation_with_replacement( - "getDestinationPageNumber", "get_destination_page_number", "3.0.0" - ) - return self.get_destination_page_number(destination) - - def _build_destination( - self, - title: str, - array: Optional[ - List[ - Union[NumberObject, IndirectObject, None, NullObject, DictionaryObject] - ] - ], - ) -> Destination: - page, typ = None, None - # handle outline items with missing or invalid destination - if ( - isinstance(array, (NullObject, str)) - or (isinstance(array, ArrayObject) and len(array) == 0) - or array is None - ): - - page = NullObject() - return Destination(title, page, Fit.fit()) - else: - page, typ = array[0:2] # type: ignore - array = array[2:] - try: - return Destination(title, page, Fit(fit_type=typ, fit_args=array)) # type: ignore - except PdfReadError: - logger_warning(f"Unknown destination: {title} {array}", __name__) - if self.strict: - raise - # create a link to first Page - tmp = self.pages[0].indirect_reference - indirect_reference = NullObject() if tmp is None else tmp - return Destination(title, indirect_reference, Fit.fit()) # type: ignore - - def _build_outline_item(self, node: DictionaryObject) -> Optional[Destination]: - dest, title, outline_item = None, None, None - - # title required for valid outline - # PDF Reference 1.7: TABLE 8.4 Entries in an outline item dictionary - try: - title = cast("str", node["/Title"]) - except KeyError: - if self.strict: - raise PdfReadError(f"Outline Entry Missing /Title attribute: {node!r}") - title = "" # type: ignore - - if "/A" in node: - # Action, PDFv1.7 Section 12.6 (only type GoTo supported) - action = cast(DictionaryObject, node["/A"]) - action_type = cast(NameObject, action[GoToActionArguments.S]) - if action_type == "/GoTo": - dest = action[GoToActionArguments.D] - elif "/Dest" in node: - # Destination, PDFv1.7 Section 12.3.2 - dest = node["/Dest"] - # if array was referenced in another object, will be a dict w/ key "/D" - if isinstance(dest, DictionaryObject) 
and "/D" in dest: - dest = dest["/D"] - - if isinstance(dest, ArrayObject): - outline_item = self._build_destination(title, dest) - elif isinstance(dest, str): - # named destination, addresses NameObject Issue #193 - # TODO : keep named destination instead of replacing it ? - try: - outline_item = self._build_destination( - title, self._namedDests[dest].dest_array - ) - except KeyError: - # named destination not found in Name Dict - outline_item = self._build_destination(title, None) - elif dest is None: - # outline item not required to have destination or action - # PDFv1.7 Table 153 - outline_item = self._build_destination(title, dest) - else: - if self.strict: - raise PdfReadError(f"Unexpected destination {dest!r}") - else: - logger_warning( - f"Removed unexpected destination {dest!r} from destination", - __name__, - ) - outline_item = self._build_destination(title, None) # type: ignore - - # if outline item created, add color, format, and child count if present - if outline_item: - if "/C" in node: - # Color of outline item font in (R, G, B) with values ranging 0.0-1.0 - outline_item[NameObject("/C")] = ArrayObject(FloatObject(c) for c in node["/C"]) # type: ignore - if "/F" in node: - # specifies style characteristics bold and/or italic - # 1=italic, 2=bold, 3=both - outline_item[NameObject("/F")] = node["/F"] - if "/Count" in node: - # absolute value = num. visible children - # positive = open/unfolded, negative = closed/folded - outline_item[NameObject("/Count")] = node["/Count"] - outline_item.node = node - return outline_item - - @property - def pages(self) -> List[PageObject]: - """Read-only property that emulates a list of :py:class:`Page` objects.""" - return _VirtualList(self._get_num_pages, self._get_page) # type: ignore - - @property - def page_layout(self) -> Optional[str]: - """ - Get the page layout. - - :return: Page layout currently being used. - - .. 
list-table:: Valid ``layout`` values - :widths: 50 200 - - * - /NoLayout - - Layout explicitly not specified - * - /SinglePage - - Show one page at a time - * - /OneColumn - - Show one column at a time - * - /TwoColumnLeft - - Show pages in two columns, odd-numbered pages on the left - * - /TwoColumnRight - - Show pages in two columns, odd-numbered pages on the right - * - /TwoPageLeft - - Show two pages at a time, odd-numbered pages on the left - * - /TwoPageRight - - Show two pages at a time, odd-numbered pages on the right - """ - trailer = cast(DictionaryObject, self.trailer[TK.ROOT]) - if CD.PAGE_LAYOUT in trailer: - return cast(NameObject, trailer[CD.PAGE_LAYOUT]) - return None - - def getPageLayout(self) -> Optional[str]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`page_layout` instead. - """ - deprecation_with_replacement("getPageLayout", "page_layout", "3.0.0") - return self.page_layout - - @property - def pageLayout(self) -> Optional[str]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`page_layout` instead. - """ - deprecation_with_replacement("pageLayout", "page_layout", "3.0.0") - return self.page_layout - - @property - def page_mode(self) -> Optional[PagemodeType]: - """ - Get the page mode. - - :return: Page mode currently being used. - - .. list-table:: Valid ``mode`` values - :widths: 50 200 - - * - /UseNone - - Do not show outline or thumbnails panels - * - /UseOutlines - - Show outline (aka bookmarks) panel - * - /UseThumbs - - Show page thumbnails panel - * - /FullScreen - - Fullscreen view - * - /UseOC - - Show Optional Content Group (OCG) panel - * - /UseAttachments - - Show attachments panel - """ - try: - return self.trailer[TK.ROOT]["/PageMode"] # type: ignore - except KeyError: - return None - - def getPageMode(self) -> Optional[PagemodeType]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`page_mode` instead. 
- """ - deprecation_with_replacement("getPageMode", "page_mode", "3.0.0") - return self.page_mode - - @property - def pageMode(self) -> Optional[PagemodeType]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`page_mode` instead. - """ - deprecation_with_replacement("pageMode", "page_mode", "3.0.0") - return self.page_mode - - def _flatten( - self, - pages: Union[None, DictionaryObject, PageObject] = None, - inherit: Optional[Dict[str, Any]] = None, - indirect_reference: Optional[IndirectObject] = None, - ) -> None: - inheritable_page_attributes = ( - NameObject(PG.RESOURCES), - NameObject(PG.MEDIABOX), - NameObject(PG.CROPBOX), - NameObject(PG.ROTATE), - ) - if inherit is None: - inherit = {} - if pages is None: - # Fix issue 327: set flattened_pages attribute only for - # decrypted file - catalog = self.trailer[TK.ROOT].get_object() - pages = catalog["/Pages"].get_object() # type: ignore - self.flattened_pages = [] - - t = "/Pages" - if PA.TYPE in pages: - t = pages[PA.TYPE] # type: ignore - - if t == "/Pages": - for attr in inheritable_page_attributes: - if attr in pages: - inherit[attr] = pages[attr] - for page in pages[PA.KIDS]: # type: ignore - addt = {} - if isinstance(page, IndirectObject): - addt["indirect_reference"] = page - self._flatten(page.get_object(), inherit, **addt) - elif t == "/Page": - for attr_in, value in list(inherit.items()): - # if the page has it's own value, it does not inherit the - # parent's value: - if attr_in not in pages: - pages[attr_in] = value - page_obj = PageObject(self, indirect_reference) - page_obj.update(pages) - - # TODO: Could flattened_pages be None at this point? 
-            self.flattened_pages.append(page_obj)  # type: ignore
-
-    def _get_object_from_stream(
-        self, indirect_reference: IndirectObject
-    ) -> Union[int, PdfObject, str]:
-        # indirect reference to object in object stream
-        # read the entire object stream into memory
-        stmnum, idx = self.xref_objStm[indirect_reference.idnum]
-        obj_stm: EncodedStreamObject = IndirectObject(stmnum, 0, self).get_object()  # type: ignore
-        # This is an xref to a stream, so its type better be a stream
-        assert cast(str, obj_stm["/Type"]) == "/ObjStm"
-        # /N is the number of indirect objects in the stream
-        assert idx < obj_stm["/N"]
-        stream_data = BytesIO(b_(obj_stm.get_data()))  # type: ignore
-        for i in range(obj_stm["/N"]):  # type: ignore
-            read_non_whitespace(stream_data)
-            stream_data.seek(-1, 1)
-            objnum = NumberObject.read_from_stream(stream_data)
-            read_non_whitespace(stream_data)
-            stream_data.seek(-1, 1)
-            offset = NumberObject.read_from_stream(stream_data)
-            read_non_whitespace(stream_data)
-            stream_data.seek(-1, 1)
-            if objnum != indirect_reference.idnum:
-                # We're only interested in one object
-                continue
-            if self.strict and idx != i:
-                raise PdfReadError("Object is in wrong index.")
-            stream_data.seek(int(obj_stm["/First"] + offset), 0)  # type: ignore
-
-            # to cope with some case where the 'pointer' is on a white space
-            read_non_whitespace(stream_data)
-            stream_data.seek(-1, 1)
-
-            try:
-                obj = read_object(stream_data, self)
-            except PdfStreamError as exc:
-                # Stream object cannot be read. Normally, a critical error, but
-                # Adobe Reader doesn't complain, so continue (in strict mode?)
-                logger_warning(
-                    f"Invalid stream (index {i}) within object "
-                    f"{indirect_reference.idnum} {indirect_reference.generation}: "
-                    f"{exc}",
-                    __name__,
-                )
-
-                if self.strict:
-                    raise PdfReadError(f"Can't read object stream: {exc}")
-                # Replace with null. Hopefully it's nothing important.
-                obj = NullObject()
-            return obj
-
-        if self.strict:
-            raise PdfReadError("This is a fatal error in strict mode.")
-        return NullObject()
-
-    def _get_indirect_object(self, num: int, gen: int) -> Optional[PdfObject]:
-        """
-        used to ease development
-        equivalent to generic.IndirectObject(num,gen,self).get_object()
-        """
-        return IndirectObject(num, gen, self).get_object()
-
-    def get_object(
-        self, indirect_reference: Union[int, IndirectObject]
-    ) -> Optional[PdfObject]:
-        if isinstance(indirect_reference, int):
-            indirect_reference = IndirectObject(indirect_reference, 0, self)
-        retval = self.cache_get_indirect_object(
-            indirect_reference.generation, indirect_reference.idnum
-        )
-        if retval is not None:
-            return retval
-        if (
-            indirect_reference.generation == 0
-            and indirect_reference.idnum in self.xref_objStm
-        ):
-            retval = self._get_object_from_stream(indirect_reference)  # type: ignore
-        elif (
-            indirect_reference.generation in self.xref
-            and indirect_reference.idnum in self.xref[indirect_reference.generation]
-        ):
-            if self.xref_free_entry.get(indirect_reference.generation, {}).get(
-                indirect_reference.idnum, False
-            ):
-                return NullObject()
-            start = self.xref[indirect_reference.generation][indirect_reference.idnum]
-            self.stream.seek(start, 0)
-            try:
-                idnum, generation = self.read_object_header(self.stream)
-            except Exception:
-                if hasattr(self.stream, "getbuffer"):
-                    buf = bytes(self.stream.getbuffer())  # type: ignore
-                else:
-                    p = self.stream.tell()
-                    self.stream.seek(0, 0)
-                    buf = self.stream.read(-1)
-                    self.stream.seek(p, 0)
-                m = re.search(
-                    rf"\s{indirect_reference.idnum}\s+{indirect_reference.generation}\s+obj".encode(),
-                    buf,
-                )
-                if m is not None:
-                    logger_warning(
-                        f"Object ID {indirect_reference.idnum},{indirect_reference.generation} ref repaired",
-                        __name__,
-                    )
-                    self.xref[indirect_reference.generation][
-                        indirect_reference.idnum
-                    ] = (m.start(0) + 1)
-                    self.stream.seek(m.start(0) + 1)
-                    idnum, generation = self.read_object_header(self.stream)
-                else:
-                    idnum = -1  # exception will be raised below
-            if idnum != indirect_reference.idnum and self.xref_index:
-                # Xref table probably had bad indexes due to not being zero-indexed
-                if self.strict:
-                    raise PdfReadError(
-                        f"Expected object ID ({indirect_reference.idnum} {indirect_reference.generation}) "
-                        f"does not match actual ({idnum} {generation}); "
-                        "xref table not zero-indexed."
-                    )
-                # xref table is corrected in non-strict mode
-            elif idnum != indirect_reference.idnum and self.strict:
-                # some other problem
-                raise PdfReadError(
-                    f"Expected object ID ({indirect_reference.idnum} "
-                    f"{indirect_reference.generation}) does not match actual "
-                    f"({idnum} {generation})."
-                )
-            if self.strict:
-                assert generation == indirect_reference.generation
-            retval = read_object(self.stream, self)  # type: ignore
-
-            # override encryption is used for the /Encrypt dictionary
-            if not self._override_encryption and self._encryption is not None:
-                # if we don't have the encryption key:
-                if not self._encryption.is_decrypted():
-                    raise FileNotDecryptedError("File has not been decrypted")
-                # otherwise, decrypt here...
-                retval = cast(PdfObject, retval)
-                retval = self._encryption.decrypt_object(
-                    retval, indirect_reference.idnum, indirect_reference.generation
-                )
-        else:
-            if hasattr(self.stream, "getbuffer"):
-                buf = bytes(self.stream.getbuffer())  # type: ignore
-            else:
-                p = self.stream.tell()
-                self.stream.seek(0, 0)
-                buf = self.stream.read(-1)
-                self.stream.seek(p, 0)
-            m = re.search(
-                rf"\s{indirect_reference.idnum}\s+{indirect_reference.generation}\s+obj".encode(),
-                buf,
-            )
-            if m is not None:
-                logger_warning(
-                    f"Object {indirect_reference.idnum} {indirect_reference.generation} found",
-                    __name__,
-                )
-                if indirect_reference.generation not in self.xref:
-                    self.xref[indirect_reference.generation] = {}
-                self.xref[indirect_reference.generation][indirect_reference.idnum] = (
-                    m.start(0) + 1
-                )
-                self.stream.seek(m.end(0) + 1)
-                skip_over_whitespace(self.stream)
-                self.stream.seek(-1, 1)
-                retval = read_object(self.stream, self)  # type: ignore
-
-                # override encryption is used for the /Encrypt dictionary
-                if not self._override_encryption and self._encryption is not None:
-                    # if we don't have the encryption key:
-                    if not self._encryption.is_decrypted():
-                        raise FileNotDecryptedError("File has not been decrypted")
-                    # otherwise, decrypt here...
-                    retval = cast(PdfObject, retval)
-                    retval = self._encryption.decrypt_object(
-                        retval, indirect_reference.idnum, indirect_reference.generation
-                    )
-            else:
-                logger_warning(
-                    f"Object {indirect_reference.idnum} {indirect_reference.generation} not defined.",
-                    __name__,
-                )
-                if self.strict:
-                    raise PdfReadError("Could not find object.")
-        self.cache_indirect_object(
-            indirect_reference.generation, indirect_reference.idnum, retval
-        )
-        return retval
-
-    def getObject(
-        self, indirectReference: IndirectObject
-    ) -> Optional[PdfObject]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`get_object` instead.
- """ - deprecation_with_replacement("getObject", "get_object", "3.0.0") - return self.get_object(indirectReference) - - def read_object_header(self, stream: StreamType) -> Tuple[int, int]: - # Should never be necessary to read out whitespace, since the - # cross-reference table should put us in the right spot to read the - # object header. In reality... some files have stupid cross reference - # tables that are off by whitespace bytes. - extra = False - skip_over_comment(stream) - extra |= skip_over_whitespace(stream) - stream.seek(-1, 1) - idnum = read_until_whitespace(stream) - extra |= skip_over_whitespace(stream) - stream.seek(-1, 1) - generation = read_until_whitespace(stream) - extra |= skip_over_whitespace(stream) - stream.seek(-1, 1) - - # although it's not used, it might still be necessary to read - _obj = stream.read(3) # noqa: F841 - - read_non_whitespace(stream) - stream.seek(-1, 1) - if extra and self.strict: - logger_warning( - f"Superfluous whitespace found in object header {idnum} {generation}", # type: ignore - __name__, - ) - return int(idnum), int(generation) - - def readObjectHeader( - self, stream: StreamType - ) -> Tuple[int, int]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`read_object_header` instead. - """ - deprecation_with_replacement("readObjectHeader", "read_object_header", "3.0.0") - return self.read_object_header(stream) - - def cache_get_indirect_object( - self, generation: int, idnum: int - ) -> Optional[PdfObject]: - return self.resolved_objects.get((generation, idnum)) - - def cacheGetIndirectObject( - self, generation: int, idnum: int - ) -> Optional[PdfObject]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`cache_get_indirect_object` instead. 
- """ - deprecation_with_replacement( - "cacheGetIndirectObject", "cache_get_indirect_object", "3.0.0" - ) - return self.cache_get_indirect_object(generation, idnum) - - def cache_indirect_object( - self, generation: int, idnum: int, obj: Optional[PdfObject] - ) -> Optional[PdfObject]: - if (generation, idnum) in self.resolved_objects: - msg = f"Overwriting cache for {generation} {idnum}" - if self.strict: - raise PdfReadError(msg) - logger_warning(msg, __name__) - self.resolved_objects[(generation, idnum)] = obj - if obj is not None: - obj.indirect_reference = IndirectObject(idnum, generation, self) - return obj - - def cacheIndirectObject( - self, generation: int, idnum: int, obj: Optional[PdfObject] - ) -> Optional[PdfObject]: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`cache_indirect_object` instead. - """ - deprecation_with_replacement("cacheIndirectObject", "cache_indirect_object") - return self.cache_indirect_object(generation, idnum, obj) - - def read(self, stream: StreamType) -> None: - self._basic_validation(stream) - self._find_eof_marker(stream) - startxref = self._find_startxref_pos(stream) - - # check and eventually correct the startxref only in not strict - xref_issue_nr = self._get_xref_issues(stream, startxref) - if xref_issue_nr != 0: - if self.strict and xref_issue_nr: - raise PdfReadError("Broken xref table") - logger_warning(f"incorrect startxref pointer({xref_issue_nr})", __name__) - - # read all cross reference tables and their trailers - self._read_xref_tables_and_trailers(stream, startxref, xref_issue_nr) - - # if not zero-indexed, verify that the table is correct; change it if necessary - if self.xref_index and not self.strict: - loc = stream.tell() - for gen, xref_entry in self.xref.items(): - if gen == 65535: - continue - xref_k = sorted( - xref_entry.keys() - ) # must ensure ascendant to prevent damange - for id in xref_k: - stream.seek(xref_entry[id], 0) - try: - pid, _pgen = self.read_object_header(stream) - 
except ValueError: - break - if pid == id - self.xref_index: - # fixing index item per item is required for revised PDF. - self.xref[gen][pid] = self.xref[gen][id] - del self.xref[gen][id] - # if not, then either it's just plain wrong, or the - # non-zero-index is actually correct - stream.seek(loc, 0) # return to where it was - - def _basic_validation(self, stream: StreamType) -> None: - # start at the end: - stream.seek(0, os.SEEK_END) - if not stream.tell(): - raise EmptyFileError("Cannot read an empty file") - if self.strict: - stream.seek(0, os.SEEK_SET) - header_byte = stream.read(5) - if header_byte != b"%PDF-": - raise PdfReadError( - f"PDF starts with '{header_byte.decode('utf8')}', " - "but '%PDF-' expected" - ) - stream.seek(0, os.SEEK_END) - - def _find_eof_marker(self, stream: StreamType) -> None: - last_mb = 8 # to parse whole file - line = b"" - while line[:5] != b"%%EOF": - if stream.tell() < last_mb: - raise PdfReadError("EOF marker not found") - line = read_previous_line(stream) - - def _find_startxref_pos(self, stream: StreamType) -> int: - """Find startxref entry - the location of the xref table""" - line = read_previous_line(stream) - try: - startxref = int(line) - except ValueError: - # 'startxref' may be on the same line as the location - if not line.startswith(b"startxref"): - raise PdfReadError("startxref not found") - startxref = int(line[9:].strip()) - logger_warning("startxref on same line as offset", __name__) - else: - line = read_previous_line(stream) - if line[:9] != b"startxref": - raise PdfReadError("startxref not found") - return startxref - - def _read_standard_xref_table(self, stream: StreamType) -> None: - # standard cross-reference table - ref = stream.read(4) - if ref[:3] != b"ref": - raise PdfReadError("xref table read error") - read_non_whitespace(stream) - stream.seek(-1, 1) - firsttime = True # check if the first time looking at the xref table - while True: - num = cast(int, read_object(stream, self)) - if firsttime and 
num != 0: - self.xref_index = num - if self.strict: - logger_warning( - "Xref table not zero-indexed. ID numbers for objects will be corrected.", - __name__, - ) - # if table not zero indexed, could be due to error from when PDF was created - # which will lead to mismatched indices later on, only warned and corrected if self.strict==True - firsttime = False - read_non_whitespace(stream) - stream.seek(-1, 1) - size = cast(int, read_object(stream, self)) - read_non_whitespace(stream) - stream.seek(-1, 1) - cnt = 0 - while cnt < size: - line = stream.read(20) - - # It's very clear in section 3.4.3 of the PDF spec - # that all cross-reference table lines are a fixed - # 20 bytes (as of PDF 1.7). However, some files have - # 21-byte entries (or more) due to the use of \r\n - # (CRLF) EOL's. Detect that case, and adjust the line - # until it does not begin with a \r (CR) or \n (LF). - while line[0] in b"\x0D\x0A": - stream.seek(-20 + 1, 1) - line = stream.read(20) - - # On the other hand, some malformed PDF files - # use a single character EOL without a preceding - # space. Detect that case, and seek the stream - # back one character. 
(0-9 means we've bled into - # the next xref entry, t means we've bled into the - # text "trailer"): - if line[-1] in b"0123456789t": - stream.seek(-1, 1) - - try: - offset_b, generation_b = line[:16].split(b" ") - entry_type_b = line[17:18] - - offset, generation = int(offset_b), int(generation_b) - except Exception: - # if something wrong occured - if hasattr(stream, "getbuffer"): - buf = bytes(stream.getbuffer()) # type: ignore - else: - p = stream.tell() - stream.seek(0, 0) - buf = stream.read(-1) - stream.seek(p) - - f = re.search(f"{num}\\s+(\\d+)\\s+obj".encode(), buf) - if f is None: - logger_warning( - f"entry {num} in Xref table invalid; object not found", - __name__, - ) - generation = 65535 - offset = -1 - else: - logger_warning( - f"entry {num} in Xref table invalid but object found", - __name__, - ) - generation = int(f.group(1)) - offset = f.start() - - if generation not in self.xref: - self.xref[generation] = {} - self.xref_free_entry[generation] = {} - if num in self.xref[generation]: - # It really seems like we should allow the last - # xref table in the file to override previous - # ones. Since we read the file backwards, assume - # any existing key is already set correctly. - pass - else: - self.xref[generation][num] = offset - try: - self.xref_free_entry[generation][num] = entry_type_b == b"f" - except Exception: - pass - try: - self.xref_free_entry[65535][num] = entry_type_b == b"f" - except Exception: - pass - cnt += 1 - num += 1 - read_non_whitespace(stream) - stream.seek(-1, 1) - trailertag = stream.read(7) - if trailertag != b"trailer": - # more xrefs! 
-                stream.seek(-7, 1)
-            else:
-                break
-
-    def _read_xref_tables_and_trailers(
-        self, stream: StreamType, startxref: Optional[int], xref_issue_nr: int
-    ) -> None:
-        self.xref: Dict[int, Dict[Any, Any]] = {}
-        self.xref_free_entry: Dict[int, Dict[Any, Any]] = {}
-        self.xref_objStm: Dict[int, Tuple[Any, Any]] = {}
-        self.trailer = DictionaryObject()
-        while startxref is not None:
-            # load the xref table
-            stream.seek(startxref, 0)
-            x = stream.read(1)
-            if x in b"\r\n":
-                x = stream.read(1)
-            if x == b"x":
-                startxref = self._read_xref(stream)
-            elif xref_issue_nr:
-                try:
-                    self._rebuild_xref_table(stream)
-                    break
-                except Exception:
-                    xref_issue_nr = 0
-            elif x.isdigit():
-                try:
-                    xrefstream = self._read_pdf15_xref_stream(stream)
-                except Exception as e:
-                    if TK.ROOT in self.trailer:
-                        logger_warning(
-                            f"Previous trailer can not be read {e.args}",
-                            __name__,
-                        )
-                        break
-                    else:
-                        raise PdfReadError(f"trailer can not be read {e.args}")
-                trailer_keys = TK.ROOT, TK.ENCRYPT, TK.INFO, TK.ID
-                for key in trailer_keys:
-                    if key in xrefstream and key not in self.trailer:
-                        self.trailer[NameObject(key)] = xrefstream.raw_get(key)
-                if "/XRefStm" in xrefstream:
-                    p = stream.tell()
-                    stream.seek(cast(int, xrefstream["/XRefStm"]) + 1, 0)
-                    self._read_pdf15_xref_stream(stream)
-                    stream.seek(p, 0)
-                if "/Prev" in xrefstream:
-                    startxref = cast(int, xrefstream["/Prev"])
-                else:
-                    break
-            else:
-                startxref = self._read_xref_other_error(stream, startxref)
-
-    def _read_xref(self, stream: StreamType) -> Optional[int]:
-        self._read_standard_xref_table(stream)
-        read_non_whitespace(stream)
-        stream.seek(-1, 1)
-        new_trailer = cast(Dict[str, Any], read_object(stream, self))
-        for key, value in new_trailer.items():
-            if key not in self.trailer:
-                self.trailer[key] = value
-        if "/XRefStm" in new_trailer:
-            p = stream.tell()
-            stream.seek(cast(int, new_trailer["/XRefStm"]) + 1, 0)
-            try:
-                self._read_pdf15_xref_stream(stream)
-            except Exception:
-                logger_warning(
-                    f"XRef object at {new_trailer['/XRefStm']} can not be read, some object may be missing",
-                    __name__,
-                )
-            stream.seek(p, 0)
-        if "/Prev" in new_trailer:
-            startxref = new_trailer["/Prev"]
-            return startxref
-        else:
-            return None
-
-    def _read_xref_other_error(
-        self, stream: StreamType, startxref: int
-    ) -> Optional[int]:
-        # some PDFs have /Prev=0 in the trailer, instead of no /Prev
-        if startxref == 0:
-            if self.strict:
-                raise PdfReadError(
-                    "/Prev=0 in the trailer (try opening with strict=False)"
-                )
-            logger_warning(
-                "/Prev=0 in the trailer - assuming there is no previous xref table",
-                __name__,
-            )
-            return None
-        # bad xref character at startxref. Let's see if we can find
-        # the xref table nearby, as we've observed this error with an
-        # off-by-one before.
-        stream.seek(-11, 1)
-        tmp = stream.read(20)
-        xref_loc = tmp.find(b"xref")
-        if xref_loc != -1:
-            startxref -= 10 - xref_loc
-            return startxref
-        # No explicit xref table, try finding a cross-reference stream.
-        stream.seek(startxref, 0)
-        for look in range(5):
-            if stream.read(1).isdigit():
-                # This is not a standard PDF, consider adding a warning
-                startxref += look
-                return startxref
-        # no xref table found at specified location
-        if "/Root" in self.trailer and not self.strict:
-            # if Root has been already found, just raise warning
-            logger_warning("Invalid parent xref., rebuild xref", __name__)
-            try:
-                self._rebuild_xref_table(stream)
-                return None
-            except Exception:
-                raise PdfReadError("can not rebuild xref")
-        raise PdfReadError("Could not find xref table at specified location")
-
-    def _read_pdf15_xref_stream(
-        self, stream: StreamType
-    ) -> Union[ContentStream, EncodedStreamObject, DecodedStreamObject]:
-        # PDF 1.5+ Cross-Reference Stream
-        stream.seek(-1, 1)
-        idnum, generation = self.read_object_header(stream)
-        xrefstream = cast(ContentStream, read_object(stream, self))
-        assert cast(str, xrefstream["/Type"]) == "/XRef"
-        self.cache_indirect_object(generation, idnum, xrefstream)
-        stream_data = BytesIO(b_(xrefstream.get_data()))
-        # Index pairs specify the subsections in the dictionary. If
-        # none create one subsection that spans everything.
-        idx_pairs = xrefstream.get("/Index", [0, xrefstream.get("/Size")])
-        entry_sizes = cast(Dict[Any, Any], xrefstream.get("/W"))
-        assert len(entry_sizes) >= 3
-        if self.strict and len(entry_sizes) > 3:
-            raise PdfReadError(f"Too many entry sizes: {entry_sizes}")
-
-        def get_entry(i: int) -> Union[int, Tuple[int, ...]]:
-            # Reads the correct number of bytes for each entry. See the
-            # discussion of the W parameter in PDF spec table 17.
-            if entry_sizes[i] > 0:
-                d = stream_data.read(entry_sizes[i])
-                return convert_to_int(d, entry_sizes[i])
-
-            # PDF Spec Table 17: A value of zero for an element in the
-            # W array indicates...the default value shall be used
-            if i == 0:
-                return 1  # First value defaults to 1
-            else:
-                return 0
-
-        def used_before(num: int, generation: Union[int, Tuple[int, ...]]) -> bool:
-            # We move backwards through the xrefs, don't replace any.
-            return num in self.xref.get(generation, []) or num in self.xref_objStm  # type: ignore
-
-        # Iterate through each subsection
-        self._read_xref_subsections(idx_pairs, get_entry, used_before)
-        return xrefstream
-
-    @staticmethod
-    def _get_xref_issues(stream: StreamType, startxref: int) -> int:
-        """Return an int which indicates an issue. 0 means there is no issue."""
-        stream.seek(startxref - 1, 0)  # -1 to check character before
-        line = stream.read(1)
-        if line not in b"\r\n \t":
-            return 1
-        line = stream.read(4)
-        if line != b"xref":
-            # not an xref so check if it is an XREF object
-            line = b""
-            while line in b"0123456789 \t":
-                line = stream.read(1)
-                if line == b"":
-                    return 2
-            line += stream.read(2)  # 1 char already read, +2 to check "obj"
-            if line.lower() != b"obj":
-                return 3
-            # while stream.read(1) in b" \t\r\n":
-            #     pass
-            # line = stream.read(256)  # check that it is xref obj
-            # if b"/xref" not in line.lower():
-            #     return 4
-        return 0
-
-    def _rebuild_xref_table(self, stream: StreamType) -> None:
-        self.xref = {}
-        stream.seek(0, 0)
-        f_ = stream.read(-1)
-
-        for m in re.finditer(rb"[\r\n \t][ \t]*(\d+)[ \t]+(\d+)[ \t]+obj", f_):
-            idnum = int(m.group(1))
-            generation = int(m.group(2))
-            if generation not in self.xref:
-                self.xref[generation] = {}
-            self.xref[generation][idnum] = m.start(1)
-        stream.seek(0, 0)
-        for m in re.finditer(rb"[\r\n \t][ \t]*trailer[\r\n \t]*(<<)", f_):
-            stream.seek(m.start(1), 0)
-            new_trailer = cast(Dict[Any, Any], read_object(stream, self))
-            # Here, we are parsing the file from start to end; newer data must override the existing.
-            for key, value in list(new_trailer.items()):
-                self.trailer[key] = value
-
-    def _read_xref_subsections(
-        self,
-        idx_pairs: List[int],
-        get_entry: Callable[[int], Union[int, Tuple[int, ...]]],
-        used_before: Callable[[int, Union[int, Tuple[int, ...]]], bool],
-    ) -> None:
-        last_end = 0
-        for start, size in self._pairs(idx_pairs):
-            # The subsections must increase
-            assert start >= last_end
-            last_end = start + size
-            for num in range(start, start + size):
-                # The first entry is the type
-                xref_type = get_entry(0)
-                # The rest of the elements depend on the xref_type
-                if xref_type == 0:
-                    # linked list of free objects
-                    next_free_object = get_entry(1)  # noqa: F841
-                    next_generation = get_entry(2)  # noqa: F841
-                elif xref_type == 1:
-                    # objects that are in use but are not compressed
-                    byte_offset = get_entry(1)
-                    generation = get_entry(2)
-                    if generation not in self.xref:
-                        self.xref[generation] = {}  # type: ignore
-                    if not used_before(num, generation):
-                        self.xref[generation][num] = byte_offset  # type: ignore
-                elif xref_type == 2:
-                    # compressed objects
-                    objstr_num = get_entry(1)
-                    obstr_idx = get_entry(2)
-                    generation = 0  # PDF spec table 18, generation is 0
-                    if not used_before(num, generation):
-                        self.xref_objStm[num] = (objstr_num, obstr_idx)
-                elif self.strict:
-                    raise PdfReadError(f"Unknown xref type: {xref_type}")
-
-    def _pairs(self, array: List[int]) -> Iterable[Tuple[int, int]]:
-        i = 0
-        while True:
-            yield array[i], array[i + 1]
-            i += 2
-            if (i + 1) >= len(array):
-                break
-
-    def read_next_end_line(
-        self, stream: StreamType, limit_offset: int = 0
-    ) -> bytes:  # pragma: no cover
-        """.. deprecated:: 2.1.0"""
-        deprecate_no_replacement("read_next_end_line", removed_in="4.0.0")
-        line_parts = []
-        while True:
-            # Prevent infinite loops in malformed PDFs
-            if stream.tell() == 0 or stream.tell() == limit_offset:
-                raise PdfReadError("Could not read malformed PDF file")
-            x = stream.read(1)
-            if stream.tell() < 2:
-                raise PdfReadError("EOL marker not found")
-            stream.seek(-2, 1)
-            if x in (b"\n", b"\r"):  # \n = LF; \r = CR
-                crlf = False
-                while x in (b"\n", b"\r"):
-                    x = stream.read(1)
-                    if x in (b"\n", b"\r"):  # account for CR+LF
-                        stream.seek(-1, 1)
-                        crlf = True
-                    if stream.tell() < 2:
-                        raise PdfReadError("EOL marker not found")
-                    stream.seek(-2, 1)
-                stream.seek(
-                    2 if crlf else 1, 1
-                )  # if using CR+LF, go back 2 bytes, else 1
-                break
-            else:
-                line_parts.append(x)
-        line_parts.reverse()
-        return b"".join(line_parts)
-
-    def readNextEndLine(
-        self, stream: StreamType, limit_offset: int = 0
-    ) -> bytes:  # pragma: no cover
-        """.. deprecated:: 1.28.0"""
-        deprecation_no_replacement("readNextEndLine", "3.0.0")
-        return self.read_next_end_line(stream, limit_offset)
-
-    def decrypt(self, password: Union[str, bytes]) -> PasswordType:
-        """
-        When using an encrypted / secured PDF file with the PDF Standard
-        encryption handler, this function will allow the file to be decrypted.
-        It checks the given password against the document's user password and
-        owner password, and then stores the resulting decryption key if either
-        password is correct.
-
-        It does not matter which password was matched. Both passwords provide
-        the correct decryption key that will allow the document to be used with
-        this library.
-
-        :param str password: The password to match.
-        :return: `PasswordType`.
- """ - if not self._encryption: - raise PdfReadError("Not encrypted file") - # TODO: raise Exception for wrong password - return self._encryption.verify(password) - - def decode_permissions(self, permissions_code: int) -> Dict[str, bool]: - # Takes the permissions as an integer, returns the allowed access - permissions = {} - permissions["print"] = permissions_code & (1 << 3 - 1) != 0 # bit 3 - permissions["modify"] = permissions_code & (1 << 4 - 1) != 0 # bit 4 - permissions["copy"] = permissions_code & (1 << 5 - 1) != 0 # bit 5 - permissions["annotations"] = permissions_code & (1 << 6 - 1) != 0 # bit 6 - permissions["forms"] = permissions_code & (1 << 9 - 1) != 0 # bit 9 - permissions["accessability"] = permissions_code & (1 << 10 - 1) != 0 # bit 10 - permissions["assemble"] = permissions_code & (1 << 11 - 1) != 0 # bit 11 - permissions["print_high_quality"] = ( - permissions_code & (1 << 12 - 1) != 0 - ) # bit 12 - return permissions - - @property - def is_encrypted(self) -> bool: - """ - Read-only boolean property showing whether this PDF file is encrypted. - Note that this property, if true, will remain true even after the - :meth:`decrypt()` method is called. - """ - return TK.ENCRYPT in self.trailer - - def getIsEncrypted(self) -> bool: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`is_encrypted` instead. - """ - deprecation_with_replacement("getIsEncrypted", "is_encrypted", "3.0.0") - return self.is_encrypted - - @property - def isEncrypted(self) -> bool: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :py:attr:`is_encrypted` instead. 
- """ - deprecation_with_replacement("isEncrypted", "is_encrypted", "3.0.0") - return self.is_encrypted - - @property - def xfa(self) -> Optional[Dict[str, Any]]: - tree: Optional[TreeObject] = None - retval: Dict[str, Any] = {} - catalog = cast(DictionaryObject, self.trailer[TK.ROOT]) - - if "/AcroForm" not in catalog or not catalog["/AcroForm"]: - return None - - tree = cast(TreeObject, catalog["/AcroForm"]) - - if "/XFA" in tree: - fields = cast(ArrayObject, tree["/XFA"]) - i = iter(fields) - for f in i: - tag = f - f = next(i) - if isinstance(f, IndirectObject): - field = cast(Optional[EncodedStreamObject], f.get_object()) - if field: - es = zlib.decompress(field._data) - retval[tag] = es - return retval - - -class PdfFileReader(PdfReader): # pragma: no cover - def __init__(self, *args: Any, **kwargs: Any) -> None: - deprecation_with_replacement("PdfFileReader", "PdfReader", "3.0.0") - if "strict" not in kwargs and len(args) < 2: - kwargs["strict"] = True # maintain the default - super().__init__(*args, **kwargs) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_security.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_security.py deleted file mode 100644 index 47e5c373..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_security.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# Copyright (c) 2007, Ashish Kulkarni -# -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. 
-# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -"""Anything related to encryption / decryption.""" - -import struct -from hashlib import md5 -from typing import Tuple, Union - -from ._utils import b_, ord_, str_ -from .generic import ByteStringObject - -try: - from typing import Literal # type: ignore[attr-defined] -except ImportError: - # PEP 586 introduced typing.Literal with Python 3.8 - # For older Python versions, the backport typing_extensions is necessary: - from typing_extensions import Literal # type: ignore[misc] - -# ref: pdf1.8 spec section 3.5.2 algorithm 3.2 -_encryption_padding = ( - b"\x28\xbf\x4e\x5e\x4e\x75\x8a\x41\x64\x00\x4e\x56" - b"\xff\xfa\x01\x08\x2e\x2e\x00\xb6\xd0\x68\x3e\x80\x2f\x0c" - b"\xa9\xfe\x64\x53\x69\x7a" -) - - -def _alg32( - password: str, - rev: Literal[2, 3, 4], - keylen: int, - owner_entry: ByteStringObject, - p_entry: int, - id1_entry: ByteStringObject, - metadata_encrypt: bool = True, -) -> bytes: - """ - Implementation of algorithm 3.2 of the PDF standard security handler. - - See section 3.5.2 of the PDF 1.6 reference. - """ - # 1. 
Pad or truncate the password string to exactly 32 bytes. If the - # password string is more than 32 bytes long, use only its first 32 bytes; - # if it is less than 32 bytes long, pad it by appending the required number - # of additional bytes from the beginning of the padding string - # (_encryption_padding). - password_bytes = b_((str_(password) + str_(_encryption_padding))[:32]) - # 2. Initialize the MD5 hash function and pass the result of step 1 as - # input to this function. - m = md5(password_bytes) - # 3. Pass the value of the encryption dictionary's /O entry to the MD5 hash - # function. - m.update(owner_entry.original_bytes) - # 4. Treat the value of the /P entry as an unsigned 4-byte integer and pass - # these bytes to the MD5 hash function, low-order byte first. - p_entry_bytes = struct.pack("<i", p_entry) - m.update(p_entry_bytes) - # 5. Pass the first element of the file's file identifier array (the value - # of the ID entry in the document's trailer dictionary) to the MD5 hash - # function. - m.update(id1_entry.original_bytes) - # 6. (Revision 3 or greater) If document metadata is not being encrypted, - # pass 4 bytes with the value 0xFFFFFFFF to the MD5 hash function, - # low-order byte first. - if rev >= 3 and not metadata_encrypt: - m.update(b"\xff\xff\xff\xff") - # 7. Finish the hash. - md5_hash = m.digest() - # 8. (Revision 3 or greater) Do the following 50 times: Take the output - # from the previous MD5 hash and pass the first n bytes of the output as - # input into a new MD5 hash, where n is the number of bytes of the - # encryption key as defined by the value of the encryption dictionary's - # /Length entry. - if rev >= 3: - for _ in range(50): - md5_hash = md5(md5_hash[:keylen]).digest() - # 9. Set the encryption key to the first n bytes of the output from the - # final MD5 hash, where n is always 5 for revision 2 but, for revision 3 or - # greater, depends on the value of the encryption dictionary's /Length - # entry. - return md5_hash[:keylen] - - def _alg33( - owner_password: str, user_password: str, rev: Literal[2, 3, 4], keylen: int - ) -> bytes: - """ - Implementation of algorithm 3.3 of the PDF standard security handler, - section 3.5.2 of the PDF 1.6 reference. - """ - # steps 1 - 4 - key = _alg33_1(owner_password, rev, keylen) - # 5. Pad or truncate the user password string as described in step 1 of - # algorithm 3.2. 
- user_password_bytes = b_((user_password + str_(_encryption_padding))[:32]) - # 6. Encrypt the result of step 5, using an RC4 encryption function with - # the encryption key obtained in step 4. - val = RC4_encrypt(key, user_password_bytes) - # 7. (Revision 3 or greater) Do the following 19 times: Take the output - # from the previous invocation of the RC4 function and pass it as input to - # a new invocation of the function; use an encryption key generated by - # taking each byte of the encryption key obtained in step 4 and performing - # an XOR operation between that byte and the single-byte value of the - # iteration counter (from 1 to 19). - if rev >= 3: - for i in range(1, 20): - new_key = "" - for key_char in key: - new_key += chr(ord_(key_char) ^ i) - val = RC4_encrypt(new_key, val) - # 8. Store the output from the final invocation of the RC4 as the value of - # the /O entry in the encryption dictionary. - return val - - -def _alg33_1(password: str, rev: Literal[2, 3, 4], keylen: int) -> bytes: - """Steps 1-4 of algorithm 3.3""" - # 1. Pad or truncate the owner password string as described in step 1 of - # algorithm 3.2. If there is no owner password, use the user password - # instead. - password_bytes = b_((password + str_(_encryption_padding))[:32]) - # 2. Initialize the MD5 hash function and pass the result of step 1 as - # input to this function. - m = md5(password_bytes) - # 3. (Revision 3 or greater) Do the following 50 times: Take the output - # from the previous MD5 hash and pass it as input into a new MD5 hash. - md5_hash = m.digest() - if rev >= 3: - for _ in range(50): - md5_hash = md5(md5_hash).digest() - # 4. Create an RC4 encryption key using the first n bytes of the output - # from the final MD5 hash, where n is always 5 for revision 2 but, for - # revision 3 or greater, depends on the value of the encryption - # dictionary's /Length entry. 
- key = md5_hash[:keylen] - return key - - -def _alg34( - password: str, - owner_entry: ByteStringObject, - p_entry: int, - id1_entry: ByteStringObject, -) -> Tuple[bytes, bytes]: - """ - Implementation of algorithm 3.4 of the PDF standard security handler. - - See section 3.5.2 of the PDF 1.6 reference. - """ - # 1. Create an encryption key based on the user password string, as - # described in algorithm 3.2. - rev: Literal[2] = 2 - keylen = 5 - key = _alg32(password, rev, keylen, owner_entry, p_entry, id1_entry) - # 2. Encrypt the 32-byte padding string shown in step 1 of algorithm 3.2, - # using an RC4 encryption function with the encryption key from the - # preceding step. - U = RC4_encrypt(key, _encryption_padding) - # 3. Store the result of step 2 as the value of the /U entry in the - # encryption dictionary. - return U, key - - -def _alg35( - password: str, - rev: Literal[2, 3, 4], - keylen: int, - owner_entry: ByteStringObject, - p_entry: int, - id1_entry: ByteStringObject, - metadata_encrypt: bool, -) -> Tuple[bytes, bytes]: - """ - Implementation of algorithm 3.4 of the PDF standard security handler. - - See section 3.5.2 of the PDF 1.6 reference. - """ - # 1. Create an encryption key based on the user password string, as - # described in Algorithm 3.2. - key = _alg32(password, rev, keylen, owner_entry, p_entry, id1_entry) - # 2. Initialize the MD5 hash function and pass the 32-byte padding string - # shown in step 1 of Algorithm 3.2 as input to this function. - m = md5() - m.update(_encryption_padding) - # 3. Pass the first element of the file's file identifier array (the value - # of the ID entry in the document's trailer dictionary; see Table 3.13 on - # page 73) to the hash function and finish the hash. (See implementation - # note 25 in Appendix H.) - m.update(id1_entry.original_bytes) - md5_hash = m.digest() - # 4. Encrypt the 16-byte result of the hash, using an RC4 encryption - # function with the encryption key from step 1. 
- val = RC4_encrypt(key, md5_hash) - # 5. Do the following 19 times: Take the output from the previous - # invocation of the RC4 function and pass it as input to a new invocation - # of the function; use an encryption key generated by taking each byte of - # the original encryption key (obtained in step 2) and performing an XOR - # operation between that byte and the single-byte value of the iteration - # counter (from 1 to 19). - for i in range(1, 20): - new_key = b"" - for k in key: - new_key += b_(chr(ord_(k) ^ i)) - val = RC4_encrypt(new_key, val) - # 6. Append 16 bytes of arbitrary padding to the output from the final - # invocation of the RC4 function and store the 32-byte result as the value - # of the U entry in the encryption dictionary. - # (implementer note: I don't know what "arbitrary padding" is supposed to - # mean, so I have used null bytes. This seems to match a few other - # people's implementations) - return val + (b"\x00" * 16), key - - -def RC4_encrypt(key: Union[str, bytes], plaintext: bytes) -> bytes: # TODO - S = list(range(256)) - j = 0 - for i in range(256): - j = (j + S[i] + ord_(key[i % len(key)])) % 256 - S[i], S[j] = S[j], S[i] - i, j = 0, 0 - retval = [] - for plaintext_char in plaintext: - i = (i + 1) % 256 - j = (j + S[i]) % 256 - S[i], S[j] = S[j], S[i] - t = S[(S[i] + S[j]) % 256] - retval.append(b_(chr(ord_(plaintext_char) ^ t))) - return b"".join(retval) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_utils.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_utils.py deleted file mode 100644 index b6f090b8..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_utils.py +++ /dev/null @@ -1,471 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. 
-# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
- -"""Utility functions for PDF library.""" -__author__ = "Mathieu Fenniak" -__author_email__ = "biziqe@mathieu.fenniak.net" - -import functools -import logging -import warnings -from codecs import getencoder -from dataclasses import dataclass -from io import DEFAULT_BUFFER_SIZE -from os import SEEK_CUR -from typing import ( - IO, - Any, - Callable, - Dict, - Optional, - Pattern, - Tuple, - Union, - overload, -) - -try: - # Python 3.10+: https://www.python.org/dev/peps/pep-0484/ - from typing import TypeAlias # type: ignore[attr-defined] -except ImportError: - from typing_extensions import TypeAlias - -from .errors import ( - STREAM_TRUNCATED_PREMATURELY, - DeprecationError, - PdfStreamError, -) - -TransformationMatrixType: TypeAlias = Tuple[ - Tuple[float, float, float], Tuple[float, float, float], Tuple[float, float, float] -] -CompressedTransformationMatrix: TypeAlias = Tuple[ - float, float, float, float, float, float -] - -StreamType = IO -StrByteType = Union[str, StreamType] - -DEPR_MSG_NO_REPLACEMENT = "{} is deprecated and will be removed in PyPDF2 {}." -DEPR_MSG_NO_REPLACEMENT_HAPPENED = "{} is deprecated and was removed in PyPDF2 {}." -DEPR_MSG = "{} is deprecated and will be removed in PyPDF2 3.0.0. Use {} instead." -DEPR_MSG_HAPPENED = "{} is deprecated and was removed in PyPDF2 {}. Use {} instead." 
- - -def _get_max_pdf_version_header(header1: bytes, header2: bytes) -> bytes: - versions = ( - b"%PDF-1.3", - b"%PDF-1.4", - b"%PDF-1.5", - b"%PDF-1.6", - b"%PDF-1.7", - b"%PDF-2.0", - ) - pdf_header_indices = [] - if header1 in versions: - pdf_header_indices.append(versions.index(header1)) - if header2 in versions: - pdf_header_indices.append(versions.index(header2)) - if len(pdf_header_indices) == 0: - raise ValueError(f"neither {header1!r} nor {header2!r} are proper headers") - return versions[max(pdf_header_indices)] - - -def read_until_whitespace(stream: StreamType, maxchars: Optional[int] = None) -> bytes: - """ - Read non-whitespace characters and return them. - - Stops upon encountering whitespace or when maxchars is reached. - """ - txt = b"" - while True: - tok = stream.read(1) - if tok.isspace() or not tok: - break - txt += tok - if len(txt) == maxchars: - break - return txt - - -def read_non_whitespace(stream: StreamType) -> bytes: - """Find and read the next non-whitespace character (ignores whitespace).""" - tok = stream.read(1) - while tok in WHITESPACES: - tok = stream.read(1) - return tok - - -def skip_over_whitespace(stream: StreamType) -> bool: - """ - Similar to read_non_whitespace, but return a Boolean if more than - one whitespace character was read. - """ - tok = WHITESPACES[0] - cnt = 0 - while tok in WHITESPACES: - tok = stream.read(1) - cnt += 1 - return cnt > 1 - - -def skip_over_comment(stream: StreamType) -> None: - tok = stream.read(1) - stream.seek(-1, 1) - if tok == b"%": - while tok not in (b"\n", b"\r"): - tok = stream.read(1) - - -def read_until_regex( - stream: StreamType, regex: Pattern[bytes], ignore_eof: bool = False -) -> bytes: - """ - Read until the regular expression pattern matched (ignore the match). 
- - :raises PdfStreamError: on premature end-of-file - :param bool ignore_eof: If true, ignore end-of-line and return immediately - :param regex: re.Pattern - """ - name = b"" - while True: - tok = stream.read(16) - if not tok: - if ignore_eof: - return name - raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY) - m = regex.search(tok) - if m is not None: - name += tok[: m.start()] - stream.seek(m.start() - len(tok), 1) - break - name += tok - return name - - -def read_block_backwards(stream: StreamType, to_read: int) -> bytes: - """ - Given a stream at position X, read a block of size to_read ending at position X. - - This changes the stream's position to the beginning of where the block was - read. - """ - if stream.tell() < to_read: - raise PdfStreamError("Could not read malformed PDF file") - # Seek to the start of the block we want to read. - stream.seek(-to_read, SEEK_CUR) - read = stream.read(to_read) - # Seek to the start of the block we read after reading it. - stream.seek(-to_read, SEEK_CUR) - return read - - -def read_previous_line(stream: StreamType) -> bytes: - """ - Given a byte stream with current position X, return the previous line. - - All characters between the first CR/LF byte found before X - (or, the start of the file, if no such byte is found) and position X - After this call, the stream will be positioned one byte after the - first non-CRLF character found beyond the first CR/LF byte before X, - or, if no such byte is found, at the beginning of the stream. - """ - line_content = [] - found_crlf = False - if stream.tell() == 0: - raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY) - while True: - to_read = min(DEFAULT_BUFFER_SIZE, stream.tell()) - if to_read == 0: - break - # Read the block. After this, our stream will be one - # beyond the initial position. - block = read_block_backwards(stream, to_read) - idx = len(block) - 1 - if not found_crlf: - # We haven't found our first CR/LF yet. - # Read off characters until we hit one. 
- while idx >= 0 and block[idx] not in b"\r\n": - idx -= 1 - if idx >= 0: - found_crlf = True - if found_crlf: - # We found our first CR/LF already (on this block or - # a previous one). - # Our combined line is the remainder of the block - # plus any previously read blocks. - line_content.append(block[idx + 1 :]) - # Continue to read off any more CRLF characters. - while idx >= 0 and block[idx] in b"\r\n": - idx -= 1 - else: - # Didn't find CR/LF yet - add this block to our - # previously read blocks and continue. - line_content.append(block) - if idx >= 0: - # We found the next non-CRLF character. - # Set the stream position correctly, then break - stream.seek(idx + 1, SEEK_CUR) - break - # Join all the blocks in the line (which are in reverse order) - return b"".join(line_content[::-1]) - - -def matrix_multiply( - a: TransformationMatrixType, b: TransformationMatrixType -) -> TransformationMatrixType: - return tuple( # type: ignore[return-value] - tuple(sum(float(i) * float(j) for i, j in zip(row, col)) for col in zip(*b)) - for row in a - ) - - -def mark_location(stream: StreamType) -> None: - """Create text file showing current location in context.""" - # Mainly for debugging - radius = 5000 - stream.seek(-radius, 1) - with open("PyPDF2_pdfLocation.txt", "wb") as output_fh: - output_fh.write(stream.read(radius)) - output_fh.write(b"HERE") - output_fh.write(stream.read(radius)) - stream.seek(-radius, 1) - - -B_CACHE: Dict[Union[str, bytes], bytes] = {} - - -def b_(s: Union[str, bytes]) -> bytes: - bc = B_CACHE - if s in bc: - return bc[s] - if isinstance(s, bytes): - return s - try: - r = s.encode("latin-1") - if len(s) < 2: - bc[s] = r - return r - except Exception: - r = s.encode("utf-8") - if len(s) < 2: - bc[s] = r - return r - - -@overload -def str_(b: str) -> str: - ... - - -@overload -def str_(b: bytes) -> str: - ... 
- - -def str_(b: Union[str, bytes]) -> str: - if isinstance(b, bytes): - return b.decode("latin-1") - else: - return b - - -@overload -def ord_(b: str) -> int: - ... - - -@overload -def ord_(b: bytes) -> bytes: - ... - - -@overload -def ord_(b: int) -> int: - ... - - -def ord_(b: Union[int, str, bytes]) -> Union[int, bytes]: - if isinstance(b, str): - return ord(b) - return b - - -def hexencode(b: bytes) -> bytes: - - coder = getencoder("hex_codec") - coded = coder(b) # type: ignore - return coded[0] - - -def hex_str(num: int) -> str: - return hex(num).replace("L", "") - - -WHITESPACES = (b" ", b"\n", b"\r", b"\t", b"\x00") - - -def paeth_predictor(left: int, up: int, up_left: int) -> int: - p = left + up - up_left - dist_left = abs(p - left) - dist_up = abs(p - up) - dist_up_left = abs(p - up_left) - - if dist_left <= dist_up and dist_left <= dist_up_left: - return left - elif dist_up <= dist_up_left: - return up - else: - return up_left - - -def deprecate(msg: str, stacklevel: int = 3) -> None: - warnings.warn(msg, DeprecationWarning, stacklevel=stacklevel) - - -def deprecation(msg: str) -> None: - raise DeprecationError(msg) - - -def deprecate_with_replacement( - old_name: str, new_name: str, removed_in: str = "3.0.0" -) -> None: - """ - Raise an exception that a feature will be removed, but has a replacement. - """ - deprecate(DEPR_MSG.format(old_name, new_name, removed_in), 4) - - -def deprecation_with_replacement( - old_name: str, new_name: str, removed_in: str = "3.0.0" -) -> None: - """ - Raise an exception that a feature was already removed, but has a replacement. - """ - deprecation(DEPR_MSG_HAPPENED.format(old_name, removed_in, new_name)) - - -def deprecate_no_replacement(name: str, removed_in: str = "3.0.0") -> None: - """ - Raise an exception that a feature will be removed without replacement. 
- """ - deprecate(DEPR_MSG_NO_REPLACEMENT.format(name, removed_in), 4) - - -def deprecation_no_replacement(name: str, removed_in: str = "3.0.0") -> None: - """ - Raise an exception that a feature was already removed without replacement. - """ - deprecation(DEPR_MSG_NO_REPLACEMENT_HAPPENED.format(name, removed_in)) - - -def logger_warning(msg: str, src: str) -> None: - """ - Use this instead of logger.warning directly. - - That allows people to overwrite it more easily. - - ## Exception, warnings.warn, logger_warning - - Exceptions should be used if the user should write code that deals with - an error case, e.g. the PDF being completely broken. - - warnings.warn should be used if the user needs to fix their code, e.g. - DeprecationWarnings - - logger_warning should be used if the user needs to know that an issue was - handled by PyPDF2, e.g. a non-compliant PDF being read in a way that - PyPDF2 could apply a robustness fix to still read it. This applies mainly - to strict=False mode. - """ - logging.getLogger(src).warning(msg) - - -def deprecation_bookmark(**aliases: str) -> Callable: - """ - Decorator for deprecated term "bookmark" - To be used for methods and function arguments - outline_item = a bookmark - outline = a collection of outline items - """ - - def decoration(func: Callable): # type: ignore - @functools.wraps(func) - def wrapper(*args, **kwargs): # type: ignore - rename_kwargs(func.__name__, kwargs, aliases, fail=True) - return func(*args, **kwargs) - - return wrapper - - return decoration - - -def rename_kwargs( # type: ignore - func_name: str, kwargs: Dict[str, Any], aliases: Dict[str, str], fail: bool = False -): - """ - Helper function to deprecate arguments. - """ - - for old_term, new_term in aliases.items(): - if old_term in kwargs: - if fail: - raise DeprecationError( - f"{old_term} is deprecated as an argument. 
Use {new_term} instead" - ) - if new_term in kwargs: - raise TypeError( - f"{func_name} received both {old_term} and {new_term} as an argument. " - f"{old_term} is deprecated. Use {new_term} instead." - ) - kwargs[new_term] = kwargs.pop(old_term) - warnings.warn( - message=( - f"{old_term} is deprecated as an argument. Use {new_term} instead" - ), - category=DeprecationWarning, - ) - - -def _human_readable_bytes(bytes: int) -> str: - if bytes < 10**3: - return f"{bytes} Byte" - elif bytes < 10**6: - return f"{bytes / 10**3:.1f} kB" - elif bytes < 10**9: - return f"{bytes / 10**6:.1f} MB" - else: - return f"{bytes / 10**9:.1f} GB" - - -@dataclass -class File: - name: str - data: bytes - - def __str__(self) -> str: - return f"File(name={self.name}, data: {_human_readable_bytes(len(self.data))})" - - def __repr__(self) -> str: - return f"File(name={self.name}, data: {_human_readable_bytes(len(self.data))}, hash: {hash(self.data)})" diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_version.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_version.py deleted file mode 100644 index 05527687..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "3.0.1" diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/_writer.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/_writer.py deleted file mode 100644 index b2e92cdb..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/_writer.py +++ /dev/null @@ -1,2822 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# Copyright (c) 2007, Ashish Kulkarni -# -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. 
-# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
- -import codecs -import collections -import decimal -import logging -import random -import re -import struct -import time -import uuid -import warnings -from hashlib import md5 -from io import BytesIO, FileIO, IOBase -from pathlib import Path -from types import TracebackType -from typing import ( - IO, - Any, - Callable, - Deque, - Dict, - Iterable, - List, - Optional, - Pattern, - Tuple, - Type, - Union, - cast, -) - -from ._encryption import Encryption -from ._page import PageObject, _VirtualList -from ._reader import PdfReader -from ._security import _alg33, _alg34, _alg35 -from ._utils import ( - StrByteType, - StreamType, - _get_max_pdf_version_header, - b_, - deprecate_with_replacement, - deprecation_bookmark, - deprecation_with_replacement, - logger_warning, -) -from .constants import AnnotationDictionaryAttributes -from .constants import CatalogAttributes as CA -from .constants import CatalogDictionary -from .constants import Core as CO -from .constants import EncryptionDictAttributes as ED -from .constants import ( - FieldDictionaryAttributes, - FieldFlag, - FileSpecificationDictionaryEntries, - GoToActionArguments, - InteractiveFormDictEntries, -) -from .constants import PageAttributes as PG -from .constants import PagesAttributes as PA -from .constants import StreamAttributes as SA -from .constants import TrailerKeys as TK -from .constants import TypFitArguments, UserAccessPermissions -from .generic import ( - PAGE_FIT, - AnnotationBuilder, - ArrayObject, - BooleanObject, - ByteStringObject, - ContentStream, - DecodedStreamObject, - Destination, - DictionaryObject, - Fit, - FloatObject, - IndirectObject, - NameObject, - NullObject, - NumberObject, - PdfObject, - RectangleObject, - StreamObject, - TextStringObject, - TreeObject, - create_string_object, - hex_to_rgb, -) -from .pagerange import PageRange, PageRangeSpec -from .types import ( - BorderArrayType, - FitType, - LayoutType, - OutlineItemType, - OutlineType, - PagemodeType, - ZoomArgType, -) - 
-logger = logging.getLogger(__name__) - - -OPTIONAL_READ_WRITE_FIELD = FieldFlag(0) -ALL_DOCUMENT_PERMISSIONS = UserAccessPermissions((2**31 - 1) - 3) - - -class PdfWriter: - """ - This class supports writing PDF files out, given pages produced by another - class (typically :class:`PdfReader`). - """ - - def __init__(self, fileobj: StrByteType = "") -> None: - self._header = b"%PDF-1.3" - self._objects: List[PdfObject] = [] # array of indirect objects - self._idnum_hash: Dict[bytes, IndirectObject] = {} - self._id_translated: Dict[int, Dict[int, int]] = {} - - # The root of our page tree node. - pages = DictionaryObject() - pages.update( - { - NameObject(PA.TYPE): NameObject("/Pages"), - NameObject(PA.COUNT): NumberObject(0), - NameObject(PA.KIDS): ArrayObject(), - } - ) - self._pages = self._add_object(pages) - - # info object - info = DictionaryObject() - info.update( - { - NameObject("/Producer"): create_string_object( - codecs.BOM_UTF16_BE + "PyPDF2".encode("utf-16be") - ) - } - ) - self._info = self._add_object(info) - - # root object - self._root_object = DictionaryObject() - self._root_object.update( - { - NameObject(PA.TYPE): NameObject(CO.CATALOG), - NameObject(CO.PAGES): self._pages, - } - ) - self._root = self._add_object(self._root_object) - self.fileobj = fileobj - self.with_as_usage = False - - def __enter__(self) -> "PdfWriter": - """Store that writer is initialized by 'with'.""" - self.with_as_usage = True - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> None: - """Write data to the fileobj.""" - if self.fileobj: - self.write(self.fileobj) - - @property - def pdf_header(self) -> bytes: - """ - Header of the PDF document that is written. - - This should be something like b'%PDF-1.5'. It is recommended to set the - lowest version that supports all features which are used within the - PDF file. 
- """ - return self._header - - @pdf_header.setter - def pdf_header(self, new_header: bytes) -> None: - self._header = new_header - - def _add_object(self, obj: PdfObject) -> IndirectObject: - if hasattr(obj, "indirect_reference") and obj.indirect_reference.pdf == self: # type: ignore - return obj.indirect_reference # type: ignore - self._objects.append(obj) - obj.indirect_reference = IndirectObject(len(self._objects), 0, self) - return obj.indirect_reference - - def get_object( - self, - indirect_reference: Union[None, int, IndirectObject] = None, - ido: Optional[IndirectObject] = None, - ) -> PdfObject: - if ido is not None: # deprecated - if indirect_reference is not None: - raise ValueError( - "Please only set 'indirect_reference'. The 'ido' argument is deprecated." - ) - else: - indirect_reference = ido - warnings.warn( - "The parameter 'ido' is depreciated and will be removed in PyPDF2 4.0.0.", - DeprecationWarning, - ) - assert ( - indirect_reference is not None - ) # the None value is only there to keep the deprecated name - if isinstance(indirect_reference, int): - return self._objects[indirect_reference - 1] - if indirect_reference.pdf != self: - raise ValueError("pdf must be self") - return self._objects[indirect_reference.idnum - 1] # type: ignore - - def getObject( - self, ido: Union[int, IndirectObject] - ) -> PdfObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_object` instead. 
- """ - deprecation_with_replacement("getObject", "get_object", "3.0.0") - return self.get_object(ido) - - def _add_page( - self, - page: PageObject, - action: Callable[[Any, IndirectObject], None], - excluded_keys: Iterable[str] = (), - ) -> PageObject: - assert cast(str, page[PA.TYPE]) == CO.PAGE - page_org = page - excluded_keys = list(excluded_keys) - excluded_keys += [PA.PARENT, "/StructParents"] - # acrobat does not accept to have two indirect ref pointing on the same page; - # therefore in order to add easily multiple copies of the same page, we need to create a new - # dictionary for the page, however the objects below (including content) is not duplicated - try: # delete an already existing page - del self._id_translated[id(page_org.indirect_reference.pdf)][ # type: ignore - page_org.indirect_reference.idnum # type: ignore - ] - except Exception: - pass - page = cast("PageObject", page_org.clone(self, False, excluded_keys)) - # page_ind = self._add_object(page) - if page_org.pdf is not None: - other = page_org.pdf.pdf_header - if isinstance(other, str): - other = other.encode() # type: ignore - self.pdf_header = _get_max_pdf_version_header(self.pdf_header, other) # type: ignore - page[NameObject(PA.PARENT)] = self._pages - pages = cast(DictionaryObject, self.get_object(self._pages)) - assert page.indirect_reference is not None - action(pages[PA.KIDS], page.indirect_reference) - page_count = cast(int, pages[PA.COUNT]) - pages[NameObject(PA.COUNT)] = NumberObject(page_count + 1) - return page - - def set_need_appearances_writer(self) -> None: - # See 12.7.2 and 7.7.2 for more information: - # http://www.adobe.com/content/dam/acom/en/devnet/acrobat/pdfs/PDF32000_2008.pdf - try: - catalog = self._root_object - # get the AcroForm tree - if CatalogDictionary.ACRO_FORM not in catalog: - self._root_object.update( - { - NameObject(CatalogDictionary.ACRO_FORM): IndirectObject( - len(self._objects), 0, self - ) - } - ) - - need_appearances = 
NameObject(InteractiveFormDictEntries.NeedAppearances)
-            self._root_object[CatalogDictionary.ACRO_FORM][need_appearances] = BooleanObject(True)  # type: ignore
-        except Exception as exc:
-            logger.error("set_need_appearances_writer() catch : %s", repr(exc))
-
-    def add_page(
-        self,
-        page: PageObject,
-        excluded_keys: Iterable[str] = (),
-    ) -> PageObject:
-        """
-        Add a page to this PDF file.
-        Recommended for advanced usage, including setting adequate excluded_keys.
-
-        The page is usually acquired from a :class:`PdfReader` instance.
-
-        :param PageObject page: The page to add to the document. Should be
-            an instance of :class:`PageObject`
-        """
-        return self._add_page(page, list.append, excluded_keys)
-
-    def addPage(
-        self,
-        page: PageObject,
-        excluded_keys: Iterable[str] = (),
-    ) -> PageObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_page` instead.
-        """
-        deprecation_with_replacement("addPage", "add_page", "3.0.0")
-        return self.add_page(page, excluded_keys)
-
-    def insert_page(
-        self,
-        page: PageObject,
-        index: int = 0,
-        excluded_keys: Iterable[str] = (),
-    ) -> PageObject:
-        """
-        Insert a page in this PDF file. The page is usually acquired from a
-        :class:`PdfReader` instance.
-
-        :param PageObject page: The page to add to the document.
-        :param int index: Position at which the page will be inserted.
-        """
-        return self._add_page(page, lambda l, p: l.insert(index, p), excluded_keys)
-
-    def insertPage(
-        self,
-        page: PageObject,
-        index: int = 0,
-        excluded_keys: Iterable[str] = (),
-    ) -> PageObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`insert_page` instead.
-        """
-        deprecation_with_replacement("insertPage", "insert_page", "3.0.0")
-        return self.insert_page(page, index, excluded_keys)
-
-    def get_page(
-        self, page_number: Optional[int] = None, pageNumber: Optional[int] = None
-    ) -> PageObject:
-        """
-        Retrieve a page by number from this PDF file.
- - :param int page_number: The page number to retrieve - (pages begin at zero) - :return: the page at the index given by *page_number* - """ - if pageNumber is not None: # pragma: no cover - if page_number is not None: - raise ValueError("Please only use the page_number parameter") - deprecate_with_replacement( - "get_page(pageNumber)", "get_page(page_number)", "4.0.0" - ) - page_number = pageNumber - if page_number is None and pageNumber is None: # pragma: no cover - raise ValueError("Please specify the page_number") - pages = cast(Dict[str, Any], self.get_object(self._pages)) - # TODO: crude hack - return cast(PageObject, pages[PA.KIDS][page_number].get_object()) - - def getPage(self, pageNumber: int) -> PageObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :code:`writer.pages[page_number]` instead. - """ - deprecation_with_replacement("getPage", "writer.pages[page_number]", "3.0.0") - return self.get_page(pageNumber) - - def _get_num_pages(self) -> int: - pages = cast(Dict[str, Any], self.get_object(self._pages)) - return int(pages[NameObject("/Count")]) - - def getNumPages(self) -> int: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :code:`len(writer.pages)` instead. - """ - deprecation_with_replacement("getNumPages", "len(writer.pages)", "3.0.0") - return self._get_num_pages() - - @property - def pages(self) -> List[PageObject]: - """Property that emulates a list of :class:`PageObject`.""" - return _VirtualList(self._get_num_pages, self.get_page) # type: ignore - - def add_blank_page( - self, width: Optional[float] = None, height: Optional[float] = None - ) -> PageObject: - """ - Append a blank page to this PDF file and returns it. If no page size - is specified, use the size of the last page. - - :param float width: The width of the new page expressed in default user - space units. - :param float height: The height of the new page expressed in default - user space units. 
-
-        :return: the newly appended page
-        :raises PageSizeNotDefinedError: if width and height are not defined
-            and previous page does not exist.
-        """
-        page = PageObject.create_blank_page(self, width, height)
-        self.add_page(page)
-        return page
-
-    def addBlankPage(
-        self, width: Optional[float] = None, height: Optional[float] = None
-    ) -> PageObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_blank_page` instead.
-        """
-        deprecation_with_replacement("addBlankPage", "add_blank_page", "3.0.0")
-        return self.add_blank_page(width, height)
-
-    def insert_blank_page(
-        self,
-        width: Optional[decimal.Decimal] = None,
-        height: Optional[decimal.Decimal] = None,
-        index: int = 0,
-    ) -> PageObject:
-        """
-        Insert a blank page into this PDF file and return it. If no page size
-        is specified, use the size of the last page.
-
-        :param float width: The width of the new page expressed in default user
-            space units.
-        :param float height: The height of the new page expressed in default
-            user space units.
-        :param int index: Position to add the page.
-        :return: the newly inserted page
-        :raises PageSizeNotDefinedError: if width and height are not defined
-            and previous page does not exist.
-        """
-        if (width is None or height is None) and (self._get_num_pages() - 1) >= index:
-            oldpage = self.pages[index]
-            width = oldpage.mediabox.width
-            height = oldpage.mediabox.height
-        page = PageObject.create_blank_page(self, width, height)
-        self.insert_page(page, index)
-        return page
-
-    def insertBlankPage(
-        self,
-        width: Optional[decimal.Decimal] = None,
-        height: Optional[decimal.Decimal] = None,
-        index: int = 0,
-    ) -> PageObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`insert_blank_page` instead.
- """ - deprecation_with_replacement("insertBlankPage", "insert_blank_page", "3.0.0") - return self.insert_blank_page(width, height, index) - - @property - def open_destination( - self, - ) -> Union[None, Destination, TextStringObject, ByteStringObject]: - """ - Property to access the opening destination ("/OpenAction" entry in the - PDF catalog). - it returns `None` if the entry does not exist is not set. - - :param destination:. - the property can be set to a Destination, a Page or an string(NamedDest) or - None (to remove "/OpenAction") - - (value stored in "/OpenAction" entry in the Pdf Catalog) - """ - if "/OpenAction" not in self._root_object: - return None - oa = self._root_object["/OpenAction"] - if isinstance(oa, (str, bytes)): - return create_string_object(str(oa)) - elif isinstance(oa, ArrayObject): - try: - page, typ = oa[0:2] # type: ignore - array = oa[2:] - fit = Fit(typ, tuple(array)) - return Destination("OpenAction", page, fit) - except Exception as exc: - raise Exception(f"Invalid Destination {oa}: {exc}") - else: - return None - - @open_destination.setter - def open_destination(self, dest: Union[None, str, Destination, PageObject]) -> None: - if dest is None: - try: - del self._root_object["/OpenAction"] - except KeyError: - pass - elif isinstance(dest, str): - self._root_object[NameObject("/OpenAction")] = TextStringObject(dest) - elif isinstance(dest, Destination): - self._root_object[NameObject("/OpenAction")] = dest.dest_array - elif isinstance(dest, PageObject): - self._root_object[NameObject("/OpenAction")] = Destination( - "Opening", - dest.indirect_reference - if dest.indirect_reference is not None - else NullObject(), - PAGE_FIT, - ).dest_array - - def add_js(self, javascript: str) -> None: - """ - Add Javascript which will launch upon opening this PDF. - - :param str javascript: Your Javascript. 
-
-        >>> output.add_js("this.print({bUI:true,bSilent:false,bShrinkToFit:true});")
-        # Example: This will launch the print window when the PDF is opened.
-        """
-        # Names / JavaScript preferred, to be able to add multiple scripts
-        if "/Names" not in self._root_object:
-            self._root_object[NameObject(CA.NAMES)] = DictionaryObject()
-        names = cast(DictionaryObject, self._root_object[CA.NAMES])
-        if "/JavaScript" not in names:
-            names[NameObject("/JavaScript")] = DictionaryObject(
-                {NameObject("/Names"): ArrayObject()}
-            )
-        # cast(DictionaryObject, names[NameObject("/JavaScript")])[NameObject("/Names")] = ArrayObject()
-        js_list = cast(
-            ArrayObject, cast(DictionaryObject, names["/JavaScript"])["/Names"]
-        )
-
-        js = DictionaryObject()
-        js.update(
-            {
-                NameObject(PA.TYPE): NameObject("/Action"),
-                NameObject("/S"): NameObject("/JavaScript"),
-                NameObject("/JS"): TextStringObject(f"{javascript}"),
-            }
-        )
-        # We need a name for parameterized JavaScript in the PDF file,
-        # but it can be anything.
-        js_list.append(create_string_object(str(uuid.uuid4())))
-        js_list.append(self._add_object(js))
-
-    def addJS(self, javascript: str) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_js` instead.
-        """
-        deprecation_with_replacement("addJS", "add_js", "3.0.0")
-        return self.add_js(javascript)
-
-    def add_attachment(self, filename: str, data: Union[str, bytes]) -> None:
-        """
-        Embed a file inside the PDF.
-
-        :param str filename: The filename to display.
-        :param str data: The data in the file.
-
-        Reference:
-        https://www.adobe.com/content/dam/Adobe/en/devnet/acrobat/pdfs/PDF32000_2008.pdf
-        Section 7.11.3
-        """
-        # We need three entries:
-        # * The file's data
-        # * The /Filespec entry
-        # * The file's name, which goes in the Catalog
-
-        # The entry for the file
-        # Sample:
-        # 8 0 obj
-        # <<
-        #  /Length 12
-        #  /Type /EmbeddedFile
-        # >>
-        # stream
-        # Hello world!
- # endstream - # endobj - - file_entry = DecodedStreamObject() - file_entry.set_data(data) - file_entry.update({NameObject(PA.TYPE): NameObject("/EmbeddedFile")}) - - # The Filespec entry - # Sample: - # 7 0 obj - # << - # /Type /Filespec - # /F (hello.txt) - # /EF << /F 8 0 R >> - # >> - - ef_entry = DictionaryObject() - ef_entry.update({NameObject("/F"): file_entry}) - - filespec = DictionaryObject() - filespec.update( - { - NameObject(PA.TYPE): NameObject("/Filespec"), - NameObject(FileSpecificationDictionaryEntries.F): create_string_object( - filename - ), # Perhaps also try TextStringObject - NameObject(FileSpecificationDictionaryEntries.EF): ef_entry, - } - ) - - # Then create the entry for the root, as it needs a reference to the Filespec - # Sample: - # 1 0 obj - # << - # /Type /Catalog - # /Outlines 2 0 R - # /Pages 3 0 R - # /Names << /EmbeddedFiles << /Names [(hello.txt) 7 0 R] >> >> - # >> - # endobj - - embedded_files_names_dictionary = DictionaryObject() - embedded_files_names_dictionary.update( - { - NameObject(CA.NAMES): ArrayObject( - [create_string_object(filename), filespec] - ) - } - ) - - embedded_files_dictionary = DictionaryObject() - embedded_files_dictionary.update( - {NameObject("/EmbeddedFiles"): embedded_files_names_dictionary} - ) - # Update the root - self._root_object.update({NameObject(CA.NAMES): embedded_files_dictionary}) - - def addAttachment( - self, fname: str, fdata: Union[str, bytes] - ) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`add_attachment` instead. - """ - deprecation_with_replacement("addAttachment", "add_attachment", "3.0.0") - return self.add_attachment(fname, fdata) - - def append_pages_from_reader( - self, - reader: PdfReader, - after_page_append: Optional[Callable[[PageObject], None]] = None, - ) -> None: - """ - Copy pages from reader to writer. Includes an optional callback parameter - which is invoked after pages are appended to the writer. 
- - :param PdfReader reader: a PdfReader object from which to copy page - annotations to this writer object. The writer's annots - will then be updated - :param Callable[[PageObject], None] after_page_append: - Callback function that is invoked after each page is appended to - the writer. Signature includes a reference to the appended page - (delegates to append_pages_from_reader). The single parameter of the - callback is a reference to the page just appended to the document. - """ - # Get page count from writer and reader - reader_num_pages = len(reader.pages) - # Copy pages from reader to writer - for reader_page_number in range(reader_num_pages): - reader_page = reader.pages[reader_page_number] - writer_page = self.add_page(reader_page) - # Trigger callback, pass writer page as parameter - if callable(after_page_append): - after_page_append(writer_page) - - def appendPagesFromReader( - self, - reader: PdfReader, - after_page_append: Optional[Callable[[PageObject], None]] = None, - ) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`append_pages_from_reader` instead. - """ - deprecation_with_replacement( - "appendPagesFromReader", "append_pages_from_reader", "3.0.0" - ) - self.append_pages_from_reader(reader, after_page_append) - - def update_page_form_field_values( - self, - page: PageObject, - fields: Dict[str, Any], - flags: FieldFlag = OPTIONAL_READ_WRITE_FIELD, - ) -> None: - """ - Update the form field values for a given page from a fields dictionary. - - Copy field texts and values from fields to page. - If the field links to a parent object, add the information to the parent. - - :param PageObject page: Page reference from PDF writer where the - annotations and field data will be updated. - :param dict fields: a Python dictionary of field names (/T) and text - values (/V) - :param int flags: An integer (0 to 7). The first bit sets ReadOnly, the - second bit sets Required, the third bit sets NoExport. 
See
-            PDF Reference Table 8.70 for details.
-        """
-        self.set_need_appearances_writer()
-        # Iterate through pages, update field values
-        if PG.ANNOTS not in page:
-            logger_warning("No fields to update on this page", __name__)
-            return
-        for j in range(len(page[PG.ANNOTS])):  # type: ignore
-            writer_annot = page[PG.ANNOTS][j].get_object()  # type: ignore
-            # retrieve parent field values, if present
-            writer_parent_annot = {}  # fallback if it's not there
-            if PG.PARENT in writer_annot:
-                writer_parent_annot = writer_annot[PG.PARENT]
-            for field in fields:
-                if writer_annot.get(FieldDictionaryAttributes.T) == field:
-                    if writer_annot.get(FieldDictionaryAttributes.FT) == "/Btn":
-                        writer_annot.update(
-                            {
-                                NameObject(
-                                    AnnotationDictionaryAttributes.AS
-                                ): NameObject(fields[field])
-                            }
-                        )
-                    writer_annot.update(
-                        {
-                            NameObject(FieldDictionaryAttributes.V): TextStringObject(
-                                fields[field]
-                            )
-                        }
-                    )
-                    if flags:
-                        writer_annot.update(
-                            {
-                                NameObject(FieldDictionaryAttributes.Ff): NumberObject(
-                                    flags
-                                )
-                            }
-                        )
-                elif writer_parent_annot.get(FieldDictionaryAttributes.T) == field:
-                    writer_parent_annot.update(
-                        {
-                            NameObject(FieldDictionaryAttributes.V): TextStringObject(
-                                fields[field]
-                            )
-                        }
-                    )
-
-    def updatePageFormFieldValues(
-        self,
-        page: PageObject,
-        fields: Dict[str, Any],
-        flags: FieldFlag = OPTIONAL_READ_WRITE_FIELD,
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`update_page_form_field_values` instead.
-        """
-        deprecation_with_replacement(
-            "updatePageFormFieldValues", "update_page_form_field_values", "3.0.0"
-        )
-        return self.update_page_form_field_values(page, fields, flags)
-
-    def clone_reader_document_root(self, reader: PdfReader) -> None:
-        """
-        Copy the reader document root to the writer.
-
-        :param reader: PdfReader from which the document root should be copied.
- """ - self._root_object = cast(DictionaryObject, reader.trailer[TK.ROOT]) - - def cloneReaderDocumentRoot(self, reader: PdfReader) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`clone_reader_document_root` instead. - """ - deprecation_with_replacement( - "cloneReaderDocumentRoot", "clone_reader_document_root", "3.0.0" - ) - self.clone_reader_document_root(reader) - - def clone_document_from_reader( - self, - reader: PdfReader, - after_page_append: Optional[Callable[[PageObject], None]] = None, - ) -> None: - """ - Create a copy (clone) of a document from a PDF file reader - - :param reader: PDF file reader instance from which the clone - should be created. - :param Callable[[PageObject], None] after_page_append: - Callback function that is invoked after each page is appended to - the writer. Signature includes a reference to the appended page - (delegates to append_pages_from_reader). The single parameter of the - callback is a reference to the page just appended to the document. - """ - # TODO : ppZZ may be limited because we do not copy all info... - self.clone_reader_document_root(reader) - self.append_pages_from_reader(reader, after_page_append) - - def cloneDocumentFromReader( - self, - reader: PdfReader, - after_page_append: Optional[Callable[[PageObject], None]] = None, - ) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`clone_document_from_reader` instead. - """ - deprecation_with_replacement( - "cloneDocumentFromReader", "clone_document_from_reader", "3.0.0" - ) - self.clone_document_from_reader(reader, after_page_append) - - def encrypt( - self, - user_password: Optional[str] = None, - owner_password: Optional[str] = None, - use_128bit: bool = True, - permissions_flag: UserAccessPermissions = ALL_DOCUMENT_PERMISSIONS, - user_pwd: Optional[str] = None, # deprecated - owner_pwd: Optional[str] = None, # deprecated - ) -> None: - """ - Encrypt this PDF file with the PDF Standard encryption handler. 
-
-        :param str user_password: The "user password", which allows for opening
-            and reading the PDF file with the restrictions provided.
-        :param str owner_password: The "owner password", which allows for
-            opening the PDF file without any restrictions. By default,
-            the owner password is the same as the user password.
-        :param bool use_128bit: flag as to whether to use 128bit
-            encryption. When false, 40bit encryption will be used. By default,
-            this flag is on.
-        :param unsigned int permissions_flag: permissions as described in
-            TABLE 3.20 of the PDF 1.7 specification. A bit value of 1 means the
-            permission is granted. Hence an integer value of -1 will set all
-            flags.
-            Bit position 3 is for printing, 4 is for modifying content, 5 and 6
-            control annotations, 9 for form fields, 10 for extraction of
-            text and graphics.
-        """
-        if user_pwd is not None:
-            if user_password is not None:
-                raise ValueError(
-                    "Please only set 'user_password'. "
-                    "The 'user_pwd' argument is deprecated."
-                )
-            else:
-                warnings.warn(
-                    "Please use 'user_password' instead of 'user_pwd'. "
-                    "The 'user_pwd' argument is deprecated and "
-                    "will be removed in PyPDF2 4.0.0."
-                )
-                user_password = user_pwd
-        if user_password is None:  # deprecated
-            # user_password is only Optional due to the deprecated user_pwd
-            raise ValueError("user_password may not be None")
-
-        if owner_pwd is not None:  # deprecated
-            if owner_password is not None:
-                raise ValueError(
-                    "The argument owner_pwd of encrypt is deprecated. Use owner_password only."
-                )
-            else:
-                old_term = "owner_pwd"
-                new_term = "owner_password"
-                warnings.warn(
-                    message=(
-                        f"{old_term} is deprecated as an argument and will be "
-                        f"removed in PyPDF2 4.0.0. 
Use {new_term} instead" - ), - category=DeprecationWarning, - ) - owner_password = owner_pwd - - if owner_password is None: - owner_password = user_password - if use_128bit: - V = 2 - rev = 3 - keylen = int(128 / 8) - else: - V = 1 - rev = 2 - keylen = int(40 / 8) - P = permissions_flag - O = ByteStringObject(_alg33(owner_password, user_password, rev, keylen)) # type: ignore[arg-type] - ID_1 = ByteStringObject(md5((repr(time.time())).encode("utf8")).digest()) - ID_2 = ByteStringObject(md5((repr(random.random())).encode("utf8")).digest()) - self._ID = ArrayObject((ID_1, ID_2)) - if rev == 2: - U, key = _alg34(user_password, O, P, ID_1) - else: - assert rev == 3 - U, key = _alg35(user_password, rev, keylen, O, P, ID_1, False) # type: ignore[arg-type] - encrypt = DictionaryObject() - encrypt[NameObject(SA.FILTER)] = NameObject("/Standard") - encrypt[NameObject("/V")] = NumberObject(V) - if V == 2: - encrypt[NameObject(SA.LENGTH)] = NumberObject(keylen * 8) - encrypt[NameObject(ED.R)] = NumberObject(rev) - encrypt[NameObject(ED.O)] = ByteStringObject(O) - encrypt[NameObject(ED.U)] = ByteStringObject(U) - encrypt[NameObject(ED.P)] = NumberObject(P) - self._encrypt = self._add_object(encrypt) - self._encrypt_key = key - - def write_stream(self, stream: StreamType) -> None: - if hasattr(stream, "mode") and "b" not in stream.mode: - logger_warning( - f"File <{stream.name}> to write to is not in binary mode. " # type: ignore - "It may not be written to correctly.", - __name__, - ) - - if not self._root: - self._root = self._add_object(self._root_object) - - # PDF objects sometimes have circular references to their /Page objects - # inside their object tree (for example, annotations). Those will be - # indirect references to objects that we've recreated in this PDF. To - # address this problem, PageObject's store their original object - # reference number, and we add it to the external reference map before - # we sweep for indirect references. 
This forces self-page-referencing
-        # trees to reference the correct new object location, rather than
-        # copying in a new copy of the page object.
-        self._sweep_indirect_references(self._root)
-
-        object_positions = self._write_header(stream)
-        xref_location = self._write_xref_table(stream, object_positions)
-        self._write_trailer(stream)
-        stream.write(b_(f"\nstartxref\n{xref_location}\n%%EOF\n"))  # eof
-
-    def write(self, stream: Union[Path, StrByteType]) -> Tuple[bool, IO]:
-        """
-        Write the collection of pages added to this object out as a PDF file.
-
-        :param stream: An object to write the file to. The object can support
-            the write method and the tell method, similar to a file object, or
-            it can be a file path; the parameter is named ``stream`` rather
-            than ``fileobj`` to keep the existing workflow.
-        """
-        my_file = False
-
-        if stream == "":
-            raise ValueError(f"Output(stream={stream}) is empty.")
-
-        if isinstance(stream, (str, Path)):
-            stream = FileIO(stream, "wb")
-            self.with_as_usage = True
-            my_file = True
-
-        self.write_stream(stream)
-
-        if self.with_as_usage:
-            stream.close()
-
-        return my_file, stream
-
-    def _write_header(self, stream: StreamType) -> List[int]:
-        object_positions = []
-        stream.write(self.pdf_header + b"\n")
-        stream.write(b"%\xE2\xE3\xCF\xD3\n")
-        for i, obj in enumerate(self._objects):
-            obj = self._objects[i]
-            # If the obj is None we can't write anything
-            if obj is not None:
-                idnum = i + 1
-                object_positions.append(stream.tell())
-                stream.write(b_(str(idnum)) + b" 0 obj\n")
-                key = None
-                if hasattr(self, "_encrypt") and idnum != self._encrypt.idnum:
-                    pack1 = struct.pack("<i", i + 1)[:3]
-                    pack2 = struct.pack("<i", 0)[:2]
-                    key = self._encrypt_key + pack1 + pack2
-                    assert len(key) == (len(self._encrypt_key) + 5)
-                    md5_hash = md5(key).digest()
-                    key = md5_hash[: min(16, len(self._encrypt_key) + 5)]
-                obj.write_to_stream(stream, key)
-                stream.write(b"\nendobj\n")
-        return object_positions
-
-    def _write_xref_table(self, stream: StreamType, object_positions: List[int]) -> int:
-        xref_location = stream.tell()
-        stream.write(b"xref\n")
-        stream.write(b_(f"0 {len(self._objects) + 1}\n"))
-        stream.write(b_(f"{0:0>10} {65535:0>5} f \n"))
-        for offset in object_positions:
-            stream.write(b_(f"{offset:0>10} {0:0>5} n \n"))
-        return xref_location
-
-    def _write_trailer(self, stream: StreamType) -> None:
-        stream.write(b"trailer\n")
-        trailer = 
DictionaryObject() - trailer.update( - { - NameObject(TK.SIZE): NumberObject(len(self._objects) + 1), - NameObject(TK.ROOT): self._root, - NameObject(TK.INFO): self._info, - } - ) - if hasattr(self, "_ID"): - trailer[NameObject(TK.ID)] = self._ID - if hasattr(self, "_encrypt"): - trailer[NameObject(TK.ENCRYPT)] = self._encrypt - trailer.write_to_stream(stream, None) - - def add_metadata(self, infos: Dict[str, Any]) -> None: - """ - Add custom metadata to the output. - - :param dict infos: a Python dictionary where each key is a field - and each value is your new metadata. - """ - args = {} - for key, value in list(infos.items()): - args[NameObject(key)] = create_string_object(value) - self.get_object(self._info).update(args) # type: ignore - - def addMetadata(self, infos: Dict[str, Any]) -> None: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`add_metadata` instead. - """ - deprecation_with_replacement("addMetadata", "add_metadata", "3.0.0") - self.add_metadata(infos) - - def _sweep_indirect_references( - self, - root: Union[ - ArrayObject, - BooleanObject, - DictionaryObject, - FloatObject, - IndirectObject, - NameObject, - PdfObject, - NumberObject, - TextStringObject, - NullObject, - ], - ) -> None: - stack: Deque[ - Tuple[ - Any, - Optional[Any], - Any, - List[PdfObject], - ] - ] = collections.deque() - discovered = [] - parent = None - grant_parents: List[PdfObject] = [] - key_or_id = None - - # Start from root - stack.append((root, parent, key_or_id, grant_parents)) - - while len(stack): - data, parent, key_or_id, grant_parents = stack.pop() - - # Build stack for a processing depth-first - if isinstance(data, (ArrayObject, DictionaryObject)): - for key, value in data.items(): - stack.append( - ( - value, - data, - key, - grant_parents + [parent] if parent is not None else [], - ) - ) - elif isinstance(data, IndirectObject): - if data.pdf != self: - data = self._resolve_indirect_object(data) - - if str(data) not in discovered: - 
discovered.append(str(data)) - stack.append((data.get_object(), None, None, [])) - - # Check if data has a parent and if it is a dict or an array update the value - if isinstance(parent, (DictionaryObject, ArrayObject)): - if isinstance(data, StreamObject): - # a dictionary value is a stream. streams must be indirect - # objects, so we need to change this value. - data = self._resolve_indirect_object(self._add_object(data)) - - update_hashes = [] - - # Data changed and thus the hash value changed - if parent[key_or_id] != data: - update_hashes = [parent.hash_value()] + [ - grant_parent.hash_value() for grant_parent in grant_parents - ] - parent[key_or_id] = data - - # Update old hash value to new hash value - for old_hash in update_hashes: - indirect_reference = self._idnum_hash.pop(old_hash, None) - - if indirect_reference is not None: - indirect_reference_obj = indirect_reference.get_object() - - if indirect_reference_obj is not None: - self._idnum_hash[ - indirect_reference_obj.hash_value() - ] = indirect_reference - - def _resolve_indirect_object(self, data: IndirectObject) -> IndirectObject: - """ - Resolves indirect object to this pdf indirect objects. - - If it is a new object then it is added to self._objects - and new idnum is given and generation is always 0. 
- """ - if hasattr(data.pdf, "stream") and data.pdf.stream.closed: - raise ValueError(f"I/O operation on closed file: {data.pdf.stream.name}") - - if data.pdf == self: - return data - - # Get real object indirect object - real_obj = data.pdf.get_object(data) - - if real_obj is None: - logger_warning( - f"Unable to resolve [{data.__class__.__name__}: {data}], " - "returning NullObject instead", - __name__, - ) - real_obj = NullObject() - - hash_value = real_obj.hash_value() - - # Check if object is handled - if hash_value in self._idnum_hash: - return self._idnum_hash[hash_value] - - if data.pdf == self: - self._idnum_hash[hash_value] = IndirectObject(data.idnum, 0, self) - # This is new object in this pdf - else: - self._idnum_hash[hash_value] = self._add_object(real_obj) - - return self._idnum_hash[hash_value] - - def get_reference(self, obj: PdfObject) -> IndirectObject: - idnum = self._objects.index(obj) + 1 - ref = IndirectObject(idnum, 0, self) - assert ref.get_object() == obj - return ref - - def getReference(self, obj: PdfObject) -> IndirectObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_reference` instead. 
- """ - deprecation_with_replacement("getReference", "get_reference", "3.0.0") - return self.get_reference(obj) - - def get_outline_root(self) -> TreeObject: - if CO.OUTLINES in self._root_object: - # TABLE 3.25 Entries in the catalog dictionary - outline = cast(TreeObject, self._root_object[CO.OUTLINES]) - idnum = self._objects.index(outline) + 1 - outline_ref = IndirectObject(idnum, 0, self) - assert outline_ref.get_object() == outline - else: - outline = TreeObject() - outline.update({}) - outline_ref = self._add_object(outline) - self._root_object[NameObject(CO.OUTLINES)] = outline_ref - - return outline - - def get_threads_root(self) -> ArrayObject: - """ - the list of threads see Β§8.3.2 from PDF 1.7 spec - - :return: an Array (possibly empty) of Dictionaries with "/F" and "/I" properties - """ - if CO.THREADS in self._root_object: - # TABLE 3.25 Entries in the catalog dictionary - threads = cast(ArrayObject, self._root_object[CO.THREADS]) - else: - threads = ArrayObject() - self._root_object[NameObject(CO.THREADS)] = threads - return threads - - @property - def threads(self) -> ArrayObject: - """ - Read-only property for the list of threads see Β§8.3.2 from PDF 1.7 spec - - :return: an Array (possibly empty) of Dictionaries with "/F" and "/I" properties - """ - return self.get_threads_root() - - def getOutlineRoot(self) -> TreeObject: # pragma: no cover - """ - .. deprecated:: 1.28.0 - - Use :meth:`get_outline_root` instead. 
-        """
-        deprecation_with_replacement("getOutlineRoot", "get_outline_root", "3.0.0")
-        return self.get_outline_root()
-
-    def get_named_dest_root(self) -> ArrayObject:
-        if CA.NAMES in self._root_object and isinstance(
-            self._root_object[CA.NAMES], DictionaryObject
-        ):
-            names = cast(DictionaryObject, self._root_object[CA.NAMES])
-            names_ref = names.indirect_reference
-            if CA.DESTS in names and isinstance(names[CA.DESTS], DictionaryObject):
-                # 3.6.3 Name Dictionary (PDF spec 1.7)
-                dests = cast(DictionaryObject, names[CA.DESTS])
-                dests_ref = dests.indirect_reference
-                if CA.NAMES in dests:
-                    # TABLE 3.33 Entries in a name tree node dictionary
-                    nd = cast(ArrayObject, dests[CA.NAMES])
-                else:
-                    nd = ArrayObject()
-                    dests[NameObject(CA.NAMES)] = nd
-            else:
-                dests = DictionaryObject()
-                dests_ref = self._add_object(dests)
-                names[NameObject(CA.DESTS)] = dests_ref
-                nd = ArrayObject()
-                dests[NameObject(CA.NAMES)] = nd
-
-        else:
-            names = DictionaryObject()
-            names_ref = self._add_object(names)
-            self._root_object[NameObject(CA.NAMES)] = names_ref
-            dests = DictionaryObject()
-            dests_ref = self._add_object(dests)
-            names[NameObject(CA.DESTS)] = dests_ref
-            nd = ArrayObject()
-            dests[NameObject(CA.NAMES)] = nd
-
-        return nd
-
-    def getNamedDestRoot(self) -> ArrayObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`get_named_dest_root` instead.
-        """
-        deprecation_with_replacement("getNamedDestRoot", "get_named_dest_root", "3.0.0")
-        return self.get_named_dest_root()
-
-    def add_outline_item_destination(
-        self,
-        page_destination: Union[None, PageObject, TreeObject] = None,
-        parent: Union[None, TreeObject, IndirectObject] = None,
-        before: Union[None, TreeObject, IndirectObject] = None,
-        dest: Union[None, PageObject, TreeObject] = None,  # deprecated
-    ) -> IndirectObject:
-        if page_destination is not None and dest is not None:  # deprecated
-            raise ValueError(
-                "The argument dest of add_outline_item_destination is deprecated. Use page_destination only."
-            )
-        if dest is not None:  # deprecated
-            old_term = "dest"
-            new_term = "page_destination"
-            warnings.warn(
-                message=(
-                    f"{old_term} is deprecated as an argument and will be "
-                    f"removed in PyPDF2 4.0.0. Use {new_term} instead"
-                ),
-                category=DeprecationWarning,
-            )
-            page_destination = dest
-        if page_destination is None:  # deprecated
-            # argument is only Optional due to deprecated argument.
-            raise ValueError("page_destination may not be None")
-
-        if parent is None:
-            parent = self.get_outline_root()
-
-        parent = cast(TreeObject, parent.get_object())
-        page_destination_ref = self._add_object(page_destination)
-        if before is not None:
-            before = before.indirect_reference
-        parent.insert_child(page_destination_ref, before, self)
-
-        return page_destination_ref
-
-    def add_bookmark_destination(
-        self,
-        dest: Union[PageObject, TreeObject],
-        parent: Union[None, TreeObject, IndirectObject] = None,
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 2.9.0
-
-            Use :meth:`add_outline_item_destination` instead.
-        """
-        deprecation_with_replacement(
-            "add_bookmark_destination", "add_outline_item_destination", "3.0.0"
-        )
-        return self.add_outline_item_destination(dest, parent)
-
-    def addBookmarkDestination(
-        self, dest: PageObject, parent: Optional[TreeObject] = None
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_outline_item_destination` instead.
-        """
-        deprecation_with_replacement(
-            "addBookmarkDestination", "add_outline_item_destination", "3.0.0"
-        )
-        return self.add_outline_item_destination(dest, parent)
-
-    @deprecation_bookmark(bookmark="outline_item")
-    def add_outline_item_dict(
-        self,
-        outline_item: OutlineItemType,
-        parent: Union[None, TreeObject, IndirectObject] = None,
-        before: Union[None, TreeObject, IndirectObject] = None,
-    ) -> IndirectObject:
-        outline_item_object = TreeObject()
-        for k, v in list(outline_item.items()):
-            outline_item_object[NameObject(str(k))] = v
-        outline_item_object.update(outline_item)
-
-        if "/A" in outline_item:
-            action = DictionaryObject()
-            a_dict = cast(DictionaryObject, outline_item["/A"])
-            for k, v in list(a_dict.items()):
-                action[NameObject(str(k))] = v
-            action_ref = self._add_object(action)
-            outline_item_object[NameObject("/A")] = action_ref
-
-        return self.add_outline_item_destination(outline_item_object, parent, before)
-
-    @deprecation_bookmark(bookmark="outline_item")
-    def add_bookmark_dict(
-        self, outline_item: OutlineItemType, parent: Optional[TreeObject] = None
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 2.9.0
-
-            Use :meth:`add_outline_item_dict` instead.
-        """
-        deprecation_with_replacement(
-            "add_bookmark_dict", "add_outline_item_dict", "3.0.0"
-        )
-        return self.add_outline_item_dict(outline_item, parent)
-
-    @deprecation_bookmark(bookmark="outline_item")
-    def addBookmarkDict(
-        self, outline_item: OutlineItemType, parent: Optional[TreeObject] = None
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_outline_item_dict` instead.
-        """
-        deprecation_with_replacement(
-            "addBookmarkDict", "add_outline_item_dict", "3.0.0"
-        )
-        return self.add_outline_item_dict(outline_item, parent)
-
-    def add_outline_item(
-        self,
-        title: str,
-        page_number: Union[None, PageObject, IndirectObject, int],
-        parent: Union[None, TreeObject, IndirectObject] = None,
-        before: Union[None, TreeObject, IndirectObject] = None,
-        color: Optional[Union[Tuple[float, float, float], str]] = None,
-        bold: bool = False,
-        italic: bool = False,
-        fit: Fit = PAGE_FIT,
-        pagenum: Optional[int] = None,  # deprecated
-    ) -> IndirectObject:
-        """
-        Add an outline item (commonly referred to as a "Bookmark") to this PDF file.
-
-        :param str title: Title to use for this outline item.
-        :param int page_number: Page number this outline item will point to.
-        :param parent: A reference to a parent outline item to create nested
-            outline items.
-        :param parent: A reference to a parent outline item to create nested
-            outline items.
-        :param tuple color: Color of the outline item's font as a red, green, blue tuple
-            from 0.0 to 1.0 or as a Hex String (#RRGGBB)
-        :param bool bold: Outline item font is bold
-        :param bool italic: Outline item font is italic
-        :param Fit fit: The fit of the destination page.
-        """
-        page_ref: Union[None, NullObject, IndirectObject, NumberObject]
-        if isinstance(italic, Fit):  # it means that we are on the old params
-            if fit is not None and page_number is None:
-                page_number = fit  # type: ignore
-            return self.add_outline_item(
-                title, page_number, parent, None, before, color, bold, italic  # type: ignore
-            )
-        if page_number is not None and pagenum is not None:
-            raise ValueError(
-                "The argument pagenum of add_outline_item is deprecated. Use page_number only."
-            )
-        if page_number is None:
-            action_ref = None
-        else:
-            if isinstance(page_number, IndirectObject):
-                page_ref = page_number
-            elif isinstance(page_number, PageObject):
-                page_ref = page_number.indirect_reference
-            elif isinstance(page_number, int):
-                try:
-                    page_ref = self.pages[page_number].indirect_reference
-                except IndexError:
-                    page_ref = NumberObject(page_number)
-            if page_ref is None:
-                logger_warning(
-                    f"can not find reference of page {page_number}",
-                    __name__,
-                )
-                page_ref = NullObject()
-            dest = Destination(
-                NameObject("/" + title + " outline item"),
-                page_ref,
-                fit,
-            )
-
-            action_ref = self._add_object(
-                DictionaryObject(
-                    {
-                        NameObject(GoToActionArguments.D): dest.dest_array,
-                        NameObject(GoToActionArguments.S): NameObject("/GoTo"),
-                    }
-                )
-            )
-        outline_item = _create_outline_item(action_ref, title, color, italic, bold)
-
-        if parent is None:
-            parent = self.get_outline_root()
-        return self.add_outline_item_destination(outline_item, parent, before)
-
-    def add_bookmark(
-        self,
-        title: str,
-        pagenum: int,  # deprecated, but the whole method is deprecated
-        parent: Union[None, TreeObject, IndirectObject] = None,
-        color: Optional[Tuple[float, float, float]] = None,
-        bold: bool = False,
-        italic: bool = False,
-        fit: FitType = "/Fit",
-        *args: ZoomArgType,
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 2.9.0
-
-            Use :meth:`add_outline_item` instead.
-        """
-        deprecation_with_replacement("add_bookmark", "add_outline_item", "3.0.0")
-        return self.add_outline_item(
-            title,
-            pagenum,
-            parent,
-            color,  # type: ignore
-            bold,  # type: ignore
-            italic,
-            Fit(fit_type=fit, fit_args=args),  # type: ignore
-        )
-
-    def addBookmark(
-        self,
-        title: str,
-        pagenum: int,
-        parent: Union[None, TreeObject, IndirectObject] = None,
-        color: Optional[Tuple[float, float, float]] = None,
-        bold: bool = False,
-        italic: bool = False,
-        fit: FitType = "/Fit",
-        *args: ZoomArgType,
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_outline_item` instead.
-        """
-        deprecation_with_replacement("addBookmark", "add_outline_item", "3.0.0")
-        return self.add_outline_item(
-            title,
-            pagenum,
-            parent,
-            None,
-            color,
-            bold,
-            italic,
-            Fit(fit_type=fit, fit_args=args),
-        )
-
-    def add_outline(self) -> None:
-        raise NotImplementedError(
-            "This method is not yet implemented. Use :meth:`add_outline_item` instead."
-        )
-
-    def add_named_destination_array(
-        self, title: TextStringObject, destination: Union[IndirectObject, ArrayObject]
-    ) -> None:
-        nd = self.get_named_dest_root()
-        i = 0
-        while i < len(nd):
-            if title < nd[i]:
-                nd.insert(i, destination)
-                nd.insert(i, TextStringObject(title))
-                return
-            else:
-                i += 2
-        nd.extend([TextStringObject(title), destination])
-        return
-
-    def add_named_destination_object(
-        self,
-        page_destination: Optional[PdfObject] = None,
-        dest: Optional[PdfObject] = None,
-    ) -> IndirectObject:
-        if page_destination is not None and dest is not None:
-            raise ValueError(
-                "The argument dest of add_named_destination_object is deprecated. Use page_destination only."
-            )
-        if dest is not None:  # deprecated
-            old_term = "dest"
-            new_term = "page_destination"
-            warnings.warn(
-                message=(
-                    f"{old_term} is deprecated as an argument and will be "
-                    f"removed in PyPDF2 4.0.0. Use {new_term} instead"
-                ),
-                category=DeprecationWarning,
-            )
-            page_destination = dest
-        if page_destination is None:  # deprecated
-            raise ValueError("page_destination may not be None")
-
-        page_destination_ref = self._add_object(page_destination.dest_array)  # type: ignore
-        self.add_named_destination_array(
-            cast("TextStringObject", page_destination["/Title"]), page_destination_ref  # type: ignore
-        )
-
-        return page_destination_ref
-
-    def addNamedDestinationObject(
-        self, dest: Destination
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_named_destination_object` instead.
-        """
-        deprecation_with_replacement(
-            "addNamedDestinationObject", "add_named_destination_object", "3.0.0"
-        )
-        return self.add_named_destination_object(dest)
-
-    def add_named_destination(
-        self,
-        title: str,
-        page_number: Optional[int] = None,
-        pagenum: Optional[int] = None,  # deprecated
-    ) -> IndirectObject:
-        if page_number is not None and pagenum is not None:
-            raise ValueError(
-                "The argument pagenum of add_outline_item is deprecated. Use page_number only."
-            )
-        if pagenum is not None:
-            old_term = "pagenum"
-            new_term = "page_number"
-            warnings.warn(
-                message=(
-                    f"{old_term} is deprecated as an argument and will be "
-                    f"removed in PyPDF2 4.0.0. Use {new_term} instead"
-                ),
-                category=DeprecationWarning,
-            )
-            page_number = pagenum
-        if page_number is None:
-            raise ValueError("page_number may not be None")
-        page_ref = self.get_object(self._pages)[PA.KIDS][page_number]  # type: ignore
-        dest = DictionaryObject()
-        dest.update(
-            {
-                NameObject(GoToActionArguments.D): ArrayObject(
-                    [page_ref, NameObject(TypFitArguments.FIT_H), NumberObject(826)]
-                ),
-                NameObject(GoToActionArguments.S): NameObject("/GoTo"),
-            }
-        )
-
-        dest_ref = self._add_object(dest)
-        nd = self.get_named_dest_root()
-        if not isinstance(title, TextStringObject):
-            title = TextStringObject(str(title))
-        nd.extend([title, dest_ref])
-        return dest_ref
-
-    def addNamedDestination(
-        self, title: str, pagenum: int
-    ) -> IndirectObject:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_named_destination` instead.
-        """
-        deprecation_with_replacement(
-            "addNamedDestination", "add_named_destination", "3.0.0"
-        )
-        return self.add_named_destination(title, pagenum)
-
-    def remove_links(self) -> None:
-        """Remove links and annotations from this output."""
-        pg_dict = cast(DictionaryObject, self.get_object(self._pages))
-        pages = cast(ArrayObject, pg_dict[PA.KIDS])
-        for page in pages:
-            page_ref = cast(DictionaryObject, self.get_object(page))
-            if PG.ANNOTS in page_ref:
-                del page_ref[PG.ANNOTS]
-
-    def removeLinks(self) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`remove_links` instead.
-        """
-        deprecation_with_replacement("removeLinks", "remove_links", "3.0.0")
-        return self.remove_links()
-
-    def remove_images(self, ignore_byte_string_object: bool = False) -> None:
-        """
-        Remove images from this output.
-
-        :param bool ignore_byte_string_object: optional parameter
-            to ignore ByteString Objects.
-        """
-        pg_dict = cast(DictionaryObject, self.get_object(self._pages))
-        pages = cast(ArrayObject, pg_dict[PA.KIDS])
-        jump_operators = (
-            b"cm",
-            b"w",
-            b"J",
-            b"j",
-            b"M",
-            b"d",
-            b"ri",
-            b"i",
-            b"gs",
-            b"W",
-            b"b",
-            b"s",
-            b"S",
-            b"f",
-            b"F",
-            b"n",
-            b"m",
-            b"l",
-            b"c",
-            b"v",
-            b"y",
-            b"h",
-            b"B",
-            b"Do",
-            b"sh",
-        )
-        for page in pages:
-            page_ref = cast(DictionaryObject, self.get_object(page))
-            content = page_ref["/Contents"].get_object()
-            if not isinstance(content, ContentStream):
-                content = ContentStream(content, page_ref)
-
-            _operations = []
-            seq_graphics = False
-            for operands, operator in content.operations:
-                if operator in [b"Tj", b"'"]:
-                    text = operands[0]
-                    if ignore_byte_string_object and not isinstance(
-                        text, TextStringObject
-                    ):
-                        operands[0] = TextStringObject()
-                elif operator == b'"':
-                    text = operands[2]
-                    if ignore_byte_string_object and not isinstance(
-                        text, TextStringObject
-                    ):
-                        operands[2] = TextStringObject()
-                elif operator == b"TJ":
-                    for i in range(len(operands[0])):
-                        if ignore_byte_string_object and not isinstance(
-                            operands[0][i], TextStringObject
-                        ):
-                            operands[0][i] = TextStringObject()
-
-                if operator == b"q":
-                    seq_graphics = True
-                if operator == b"Q":
-                    seq_graphics = False
-                if seq_graphics and operator in jump_operators:
-                    continue
-                if operator == b"re":
-                    continue
-                _operations.append((operands, operator))
-
-            content.operations = _operations
-            page_ref.__setitem__(NameObject("/Contents"), content)
-
-    def removeImages(
-        self, ignoreByteStringObject: bool = False
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`remove_images` instead.
-        """
-        deprecation_with_replacement("removeImages", "remove_images", "3.0.0")
-        return self.remove_images(ignoreByteStringObject)
-
-    def remove_text(self, ignore_byte_string_object: bool = False) -> None:
-        """
-        Remove text from this output.
-
-        :param bool ignore_byte_string_object: optional parameter
-            to ignore ByteString Objects.
-        """
-        pg_dict = cast(DictionaryObject, self.get_object(self._pages))
-        pages = cast(List[IndirectObject], pg_dict[PA.KIDS])
-        for page in pages:
-            page_ref = cast(PageObject, self.get_object(page))
-            content = page_ref["/Contents"].get_object()
-            if not isinstance(content, ContentStream):
-                content = ContentStream(content, page_ref)
-            for operands, operator in content.operations:
-                if operator in [b"Tj", b"'"]:
-                    text = operands[0]
-                    if not ignore_byte_string_object:
-                        if isinstance(text, TextStringObject):
-                            operands[0] = TextStringObject()
-                    else:
-                        if isinstance(text, (TextStringObject, ByteStringObject)):
-                            operands[0] = TextStringObject()
-                elif operator == b'"':
-                    text = operands[2]
-                    if not ignore_byte_string_object:
-                        if isinstance(text, TextStringObject):
-                            operands[2] = TextStringObject()
-                    else:
-                        if isinstance(text, (TextStringObject, ByteStringObject)):
-                            operands[2] = TextStringObject()
-                elif operator == b"TJ":
-                    for i in range(len(operands[0])):
-                        if not ignore_byte_string_object:
-                            if isinstance(operands[0][i], TextStringObject):
-                                operands[0][i] = TextStringObject()
-                        else:
-                            if isinstance(
-                                operands[0][i], (TextStringObject, ByteStringObject)
-                            ):
-                                operands[0][i] = TextStringObject()
-
-            page_ref.__setitem__(NameObject("/Contents"), content)
-
-    def removeText(
-        self, ignoreByteStringObject: bool = False
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`remove_text` instead.
-        """
-        deprecation_with_replacement("removeText", "remove_text", "3.0.0")
-        return self.remove_text(ignoreByteStringObject)
-
-    def add_uri(
-        self,
-        page_number: int,
-        uri: str,
-        rect: RectangleObject,
-        border: Optional[ArrayObject] = None,
-        pagenum: Optional[int] = None,
-    ) -> None:
-        """
-        Add an URI from a rectangular area to the specified page.
-        This uses the basic structure of :meth:`add_link`
-
-        :param int page_number: index of the page on which to place the URI action.
-        :param str uri: URI of resource to link to.
-        :param Tuple[int, int, int, int] rect: :class:`RectangleObject` or array of four
-            integers specifying the clickable rectangular area
-            ``[xLL, yLL, xUR, yUR]``, or string in the form ``"[ xLL yLL xUR yUR ]"``.
-        :param ArrayObject border: if provided, an array describing border-drawing
-            properties. See the PDF spec for details. No border will be
-            drawn if this argument is omitted.
-        """
-        if pagenum is not None:
-            warnings.warn(
-                "The 'pagenum' argument of add_uri is deprecated and will be "
-                "removed in PyPDF2 4.0.0. Use 'page_number' instead.",
-                category=DeprecationWarning,
-            )
-            page_number = pagenum
-        page_link = self.get_object(self._pages)[PA.KIDS][page_number]  # type: ignore
-        page_ref = cast(Dict[str, Any], self.get_object(page_link))
-
-        border_arr: BorderArrayType
-        if border is not None:
-            border_arr = [NameObject(n) for n in border[:3]]
-            if len(border) == 4:
-                dash_pattern = ArrayObject([NameObject(n) for n in border[3]])
-                border_arr.append(dash_pattern)
-        else:
-            border_arr = [NumberObject(2)] * 3
-
-        if isinstance(rect, str):
-            rect = NameObject(rect)
-        elif isinstance(rect, RectangleObject):
-            pass
-        else:
-            rect = RectangleObject(rect)
-
-        lnk2 = DictionaryObject()
-        lnk2.update(
-            {
-                NameObject("/S"): NameObject("/URI"),
-                NameObject("/URI"): TextStringObject(uri),
-            }
-        )
-        lnk = DictionaryObject()
-        lnk.update(
-            {
-                NameObject(AnnotationDictionaryAttributes.Type): NameObject(PG.ANNOTS),
-                NameObject(AnnotationDictionaryAttributes.Subtype): NameObject("/Link"),
-                NameObject(AnnotationDictionaryAttributes.P): page_link,
-                NameObject(AnnotationDictionaryAttributes.Rect): rect,
-                NameObject("/H"): NameObject("/I"),
-                NameObject(AnnotationDictionaryAttributes.Border): ArrayObject(
-                    border_arr
-                ),
-                NameObject("/A"): lnk2,
-            }
-        )
-        lnk_ref = self._add_object(lnk)
-
-        if PG.ANNOTS in page_ref:
-            page_ref[PG.ANNOTS].append(lnk_ref)
-        else:
-            page_ref[NameObject(PG.ANNOTS)] = ArrayObject([lnk_ref])
-
-    def addURI(
-        self,
-        pagenum: int,  # deprecated, but method is deprecated already
-        uri: str,
-        rect: RectangleObject,
-        border: Optional[ArrayObject] = None,
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_uri` instead.
-        """
-        deprecation_with_replacement("addURI", "add_uri", "3.0.0")
-        return self.add_uri(pagenum, uri, rect, border)
-
-    def add_link(
-        self,
-        pagenum: int,  # deprecated, but method is deprecated already
-        page_destination: int,
-        rect: RectangleObject,
-        border: Optional[ArrayObject] = None,
-        fit: FitType = "/Fit",
-        *args: ZoomArgType,
-    ) -> None:
-        deprecation_with_replacement(
-            "add_link", "add_annotation(AnnotationBuilder.link(...))"
-        )
-
-        if isinstance(rect, str):
-            rect = rect.strip()[1:-1]
-            rect = RectangleObject(
-                [float(num) for num in rect.split(" ") if len(num) > 0]
-            )
-        elif isinstance(rect, RectangleObject):
-            pass
-        else:
-            rect = RectangleObject(rect)
-
-        annotation = AnnotationBuilder.link(
-            rect=rect,
-            border=border,
-            target_page_index=page_destination,
-            fit=Fit(fit_type=fit, fit_args=args),
-        )
-        return self.add_annotation(page_number=pagenum, annotation=annotation)
-
-    def addLink(
-        self,
-        pagenum: int,  # deprecated, but method is deprecated already
-        page_destination: int,
-        rect: RectangleObject,
-        border: Optional[ArrayObject] = None,
-        fit: FitType = "/Fit",
-        *args: ZoomArgType,
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`add_link` instead.
-        """
-        deprecate_with_replacement(
-            "addLink", "add_annotation(AnnotationBuilder.link(...))", "4.0.0"
-        )
-        return self.add_link(pagenum, page_destination, rect, border, fit, *args)
-
-    _valid_layouts = (
-        "/NoLayout",
-        "/SinglePage",
-        "/OneColumn",
-        "/TwoColumnLeft",
-        "/TwoColumnRight",
-        "/TwoPageLeft",
-        "/TwoPageRight",
-    )
-
-    def _get_page_layout(self) -> Optional[LayoutType]:
-        try:
-            return cast(LayoutType, self._root_object["/PageLayout"])
-        except KeyError:
-            return None
-
-    def getPageLayout(self) -> Optional[LayoutType]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_layout` instead.
-        """
-        deprecation_with_replacement("getPageLayout", "page_layout", "3.0.0")
-        return self._get_page_layout()
-
-    def _set_page_layout(self, layout: Union[NameObject, LayoutType]) -> None:
-        """
-        Set the page layout.
-
-        :param str layout: The page layout to be used.
-
-        .. list-table:: Valid ``layout`` arguments
-           :widths: 50 200
-
-           * - /NoLayout
-             - Layout explicitly not specified
-           * - /SinglePage
-             - Show one page at a time
-           * - /OneColumn
-             - Show one column at a time
-           * - /TwoColumnLeft
-             - Show pages in two columns, odd-numbered pages on the left
-           * - /TwoColumnRight
-             - Show pages in two columns, odd-numbered pages on the right
-           * - /TwoPageLeft
-             - Show two pages at a time, odd-numbered pages on the left
-           * - /TwoPageRight
-             - Show two pages at a time, odd-numbered pages on the right
-        """
-        if not isinstance(layout, NameObject):
-            if layout not in self._valid_layouts:
-                logger_warning(
-                    f"Layout should be one of: {'', ''.join(self._valid_layouts)}",
-                    __name__,
-                )
-            layout = NameObject(layout)
-        self._root_object.update({NameObject("/PageLayout"): layout})
-
-    def set_page_layout(self, layout: LayoutType) -> None:
-        """
-        Set the page layout.
-
-        :param str layout: The page layout to be used
-
-        .. list-table:: Valid ``layout`` arguments
-           :widths: 50 200
-
-           * - /NoLayout
-             - Layout explicitly not specified
-           * - /SinglePage
-             - Show one page at a time
-           * - /OneColumn
-             - Show one column at a time
-           * - /TwoColumnLeft
-             - Show pages in two columns, odd-numbered pages on the left
-           * - /TwoColumnRight
-             - Show pages in two columns, odd-numbered pages on the right
-           * - /TwoPageLeft
-             - Show two pages at a time, odd-numbered pages on the left
-           * - /TwoPageRight
-             - Show two pages at a time, odd-numbered pages on the right
-        """
-        self._set_page_layout(layout)
-
-    def setPageLayout(self, layout: LayoutType) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_layout` instead.
-        """
-        deprecation_with_replacement(
-            "writer.setPageLayout(val)", "writer.page_layout = val", "3.0.0"
-        )
-        return self._set_page_layout(layout)
-
-    @property
-    def page_layout(self) -> Optional[LayoutType]:
-        """
-        Page layout property.
-
-        .. list-table:: Valid ``layout`` values
-           :widths: 50 200
-
-           * - /NoLayout
-             - Layout explicitly not specified
-           * - /SinglePage
-             - Show one page at a time
-           * - /OneColumn
-             - Show one column at a time
-           * - /TwoColumnLeft
-             - Show pages in two columns, odd-numbered pages on the left
-           * - /TwoColumnRight
-             - Show pages in two columns, odd-numbered pages on the right
-           * - /TwoPageLeft
-             - Show two pages at a time, odd-numbered pages on the left
-           * - /TwoPageRight
-             - Show two pages at a time, odd-numbered pages on the right
-        """
-        return self._get_page_layout()
-
-    @page_layout.setter
-    def page_layout(self, layout: LayoutType) -> None:
-        self._set_page_layout(layout)
-
-    @property
-    def pageLayout(self) -> Optional[LayoutType]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_layout` instead.
-        """
-        deprecation_with_replacement("pageLayout", "page_layout", "3.0.0")
-        return self.page_layout
-
-    @pageLayout.setter
-    def pageLayout(self, layout: LayoutType) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_layout` instead.
-        """
-        deprecation_with_replacement("pageLayout", "page_layout", "3.0.0")
-        self.page_layout = layout
-
-    _valid_modes = (
-        "/UseNone",
-        "/UseOutlines",
-        "/UseThumbs",
-        "/FullScreen",
-        "/UseOC",
-        "/UseAttachments",
-    )
-
-    def _get_page_mode(self) -> Optional[PagemodeType]:
-        try:
-            return cast(PagemodeType, self._root_object["/PageMode"])
-        except KeyError:
-            return None
-
-    def getPageMode(self) -> Optional[PagemodeType]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_mode` instead.
-        """
-        deprecation_with_replacement("getPageMode", "page_mode", "3.0.0")
-        return self._get_page_mode()
-
-    def set_page_mode(self, mode: PagemodeType) -> None:
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_mode` instead.
-        """
-        if isinstance(mode, NameObject):
-            mode_name: NameObject = mode
-        else:
-            if mode not in self._valid_modes:
-                logger_warning(
-                    f"Mode should be one of: {', '.join(self._valid_modes)}", __name__
-                )
-            mode_name = NameObject(mode)
-        self._root_object.update({NameObject("/PageMode"): mode_name})
-
-    def setPageMode(self, mode: PagemodeType) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_mode` instead.
-        """
-        deprecation_with_replacement(
-            "writer.setPageMode(val)", "writer.page_mode = val", "3.0.0"
-        )
-        self.set_page_mode(mode)
-
-    @property
-    def page_mode(self) -> Optional[PagemodeType]:
-        """
-        Page mode property.
-
-        .. list-table:: Valid ``mode`` values
-           :widths: 50 200
-
-           * - /UseNone
-             - Do not show outline or thumbnails panels
-           * - /UseOutlines
-             - Show outline (aka bookmarks) panel
-           * - /UseThumbs
-             - Show page thumbnails panel
-           * - /FullScreen
-             - Fullscreen view
-           * - /UseOC
-             - Show Optional Content Group (OCG) panel
-           * - /UseAttachments
-             - Show attachments panel
-        """
-        return self._get_page_mode()
-
-    @page_mode.setter
-    def page_mode(self, mode: PagemodeType) -> None:
-        self.set_page_mode(mode)
-
-    @property
-    def pageMode(self) -> Optional[PagemodeType]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_mode` instead.
-        """
-        deprecation_with_replacement("pageMode", "page_mode", "3.0.0")
-        return self.page_mode
-
-    @pageMode.setter
-    def pageMode(self, mode: PagemodeType) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :py:attr:`page_mode` instead.
-        """
-        deprecation_with_replacement("pageMode", "page_mode", "3.0.0")
-        self.page_mode = mode
-
-    def add_annotation(self, page_number: int, annotation: Dict[str, Any]) -> None:
-        to_add = cast(DictionaryObject, _pdf_objectify(annotation))
-        to_add[NameObject("/P")] = self.get_object(self._pages)["/Kids"][page_number]  # type: ignore
-        page = self.pages[page_number]
-        if page.annotations is None:
-            page[NameObject("/Annots")] = ArrayObject()
-        assert page.annotations is not None
-
-        # Internal link annotations need the correct object type for the
-        # destination
-        if to_add.get("/Subtype") == "/Link" and NameObject("/Dest") in to_add:
-            tmp = cast(dict, to_add[NameObject("/Dest")])
-            dest = Destination(
-                NameObject("/LinkName"),
-                tmp["target_page_index"],
-                Fit(
-                    fit_type=tmp["fit"], fit_args=dict(tmp)["fit_args"]
-                ),  # I have no clue why this dict-hack is necessary
-            )
-            to_add[NameObject("/Dest")] = dest.dest_array
-
-        ind_obj = self._add_object(to_add)
-
-        page.annotations.append(ind_obj)
-
-    def clean_page(self, page: Union[PageObject, IndirectObject]) -> PageObject:
-        """
-        Perform some clean up in the page.
-        Currently: convert NameObject nameddestination to TextStringObject (required for names/dests list)
-        """
-        page = cast("PageObject", page.get_object())
-        for a in page.get("/Annots", []):
-            a_obj = a.get_object()
-            d = a_obj.get("/Dest", None)
-            act = a_obj.get("/A", None)
-            if isinstance(d, NameObject):
-                a_obj[NameObject("/Dest")] = TextStringObject(d)
-            elif act is not None:
-                act = act.get_object()
-                d = act.get("/D", None)
-                if isinstance(d, NameObject):
-                    act[NameObject("/D")] = TextStringObject(d)
-        return page
-
-    def _create_stream(
-        self, fileobj: Union[Path, StrByteType, PdfReader]
-    ) -> Tuple[IOBase, Optional[Encryption]]:
-        # If the fileobj parameter is a string, assume it is a path
-        # and create a file object at that location. If it is a file,
-        # copy the file's contents into a BytesIO stream object; if
-        # it is a PdfReader, copy that reader's stream into a
-        # BytesIO stream.
-        # If fileobj is none of the above types, it is not modified
-        encryption_obj = None
-        stream: IOBase
-        if isinstance(fileobj, (str, Path)):
-            with FileIO(fileobj, "rb") as f:
-                stream = BytesIO(f.read())
-        elif isinstance(fileobj, PdfReader):
-            if fileobj._encryption:
-                encryption_obj = fileobj._encryption
-            orig_tell = fileobj.stream.tell()
-            fileobj.stream.seek(0)
-            stream = BytesIO(fileobj.stream.read())
-
-            # reset the stream to its original location
-            fileobj.stream.seek(orig_tell)
-        elif hasattr(fileobj, "seek") and hasattr(fileobj, "read"):
-            fileobj.seek(0)
-            filecontent = fileobj.read()
-            stream = BytesIO(filecontent)
-        else:
-            raise NotImplementedError(
-                "PdfMerger.merge requires an object that PdfReader can parse. "
-                "Typically, that is a Path or a string representing a Path, "
-                "a file object, or an object implementing .seek and .read. "
-                "Passing a PdfReader directly works as well."
-            )
-        return stream, encryption_obj
-
-    def append(
-        self,
-        fileobj: Union[StrByteType, PdfReader, Path],
-        outline_item: Union[
-            str, None, PageRange, Tuple[int, int], Tuple[int, int, int], List[int]
-        ] = None,
-        pages: Union[
-            None, PageRange, Tuple[int, int], Tuple[int, int, int], List[int]
-        ] = None,
-        import_outline: bool = True,
-        excluded_fields: Optional[Union[List[str], Tuple[str, ...]]] = None,
-    ) -> None:
-        """
-        Identical to the :meth:`merge()` method, but assumes you want to
-        concatenate all pages onto the end of the file instead of specifying a
-        position.
-
-        :param fileobj: A File Object or an object that supports the standard
-            read and seek methods similar to a File Object. Could also be a
-            string representing a path to a PDF file.
-
-        :param str outline_item: Optionally, you may specify a string to build an outline
-            (aka 'bookmark') to identify the
-            beginning of the included file.
-
-        :param pages: can be a :class:`PageRange`
-            or a ``(start, stop[, step])`` tuple
-            or a list of pages to be processed
-            to merge only the specified range of pages from the source
-            document into the output document.
-
-        :param bool import_outline: You may prevent the source document's
-            outline (collection of outline items, previously referred to as
-            'bookmarks') from being imported by specifying this as ``False``.
-
-        :param List excluded_fields: provide the list of fields/keys to be ignored
-            if "/Annots" is part of the list, the annotation will be ignored
-            if "/B" is part of the list, the articles will be ignored
-        """
-        if excluded_fields is None:
-            excluded_fields = ()
-        if isinstance(outline_item, (tuple, list, PageRange)):
-            if isinstance(pages, bool):
-                if not isinstance(import_outline, bool):
-                    excluded_fields = import_outline
-                import_outline = pages
-            pages = outline_item
-            self.merge(None, fileobj, None, pages, import_outline, excluded_fields)
-        else:  # if isinstance(outline_item,str):
-            self.merge(
-                None, fileobj, outline_item, pages, import_outline, excluded_fields
-            )
-
-    @deprecation_bookmark(bookmark="outline_item", import_bookmarks="import_outline")
-    def merge(
-        self,
-        position: Optional[int],
-        fileobj: Union[Path, StrByteType, PdfReader],
-        outline_item: Optional[str] = None,
-        pages: Optional[PageRangeSpec] = None,
-        import_outline: bool = True,
-        excluded_fields: Optional[Union[List[str], Tuple[str, ...]]] = (),
-    ) -> None:
-        """
-        Merge the pages from the given file into the output file at the
-        specified page number.
-
-        :param int position: The *page number* to insert this file. File will
-            be inserted after the given number.
-
-        :param fileobj: A File Object or an object that supports the standard
-            read and seek methods similar to a File Object. Could also be a
-            string representing a path to a PDF file.
-
-        :param str outline_item: Optionally, you may specify a string to build an outline
-            (aka 'bookmark') to identify the
-            beginning of the included file.
-
-        :param pages: can be a :class:`PageRange`
-            or a ``(start, stop[, step])`` tuple
-            or a list of pages to be processed
-            to merge only the specified range of pages from the source
-            document into the output document.
-
-        :param bool import_outline: You may prevent the source document's
-            outline (collection of outline items, previously referred to as
-            'bookmarks') from being imported by specifying this as ``False``.
-
-        :param List excluded_fields: provide the list of fields/keys to be ignored
-            if "/Annots" is part of the list, the annotation will be ignored
-            if "/B" is part of the list, the articles will be ignored
-        """
-        if isinstance(fileobj, PdfReader):
-            reader = fileobj
-        else:
-            stream, encryption_obj = self._create_stream(fileobj)
-            # Create a new PdfReader instance using the stream
-            # (either file or BytesIO or StringIO) created above
-            reader = PdfReader(stream, strict=False)  # type: ignore[arg-type]
-
-        if excluded_fields is None:
-            excluded_fields = ()
-        # Find the range of pages to merge.
-        if pages is None:
-            pages = list(range(0, len(reader.pages)))
-        elif isinstance(pages, PageRange):
-            pages = list(range(*pages.indices(len(reader.pages))))
-        elif isinstance(pages, list):
-            pass  # keep unchanged
-        elif isinstance(pages, tuple) and len(pages) <= 3:
-            pages = list(range(*pages))
-        elif not isinstance(pages, tuple):
-            raise TypeError(
-                '"pages" must be a tuple of (start, stop[, step]) or a list'
-            )
-
-        srcpages = {}
-        for i in pages:
-            pg = reader.pages[i]
-            assert pg.indirect_reference is not None
-            if position is None:
-                srcpages[pg.indirect_reference.idnum] = self.add_page(
-                    pg, list(excluded_fields) + ["/B", "/Annots"]  # type: ignore
-                )
-            else:
-                srcpages[pg.indirect_reference.idnum] = self.insert_page(
-                    pg, position, list(excluded_fields) + ["/B", "/Annots"]  # type: ignore
-                )
-                position += 1
-            srcpages[pg.indirect_reference.idnum].original_page = pg
-
-        reader._namedDests = (
-            reader.named_destinations
-        )  # need for the outline processing below
-        for dest in reader._namedDests.values():
-            arr = dest.dest_array
-            # try:
-            if isinstance(dest["/Page"], NullObject):
-                pass  # self.add_named_destination_array(dest["/Title"],arr)
-            elif dest["/Page"].indirect_reference.idnum in srcpages:
-                arr[NumberObject(0)] = srcpages[
-                    dest["/Page"].indirect_reference.idnum
-                ].indirect_reference
-                self.add_named_destination_array(dest["/Title"], arr)
-            # except Exception as e:
-            #     logger_warning(f"can not insert {dest} : {e.msg}",__name__)
-
-        outline_item_typ: TreeObject
-        if outline_item is not None:
-            outline_item_typ = cast(
-                "TreeObject",
-                self.add_outline_item(
-                    TextStringObject(outline_item),
-                    list(srcpages.values())[0].indirect_reference,
-                    fit=PAGE_FIT,
-                ).get_object(),
-            )
-        else:
-            outline_item_typ = self.get_outline_root()
-
-        _ro = cast("DictionaryObject", reader.trailer[TK.ROOT])
-        if import_outline and CO.OUTLINES in _ro:
-            outline = self._get_filtered_outline(
-                _ro.get(CO.OUTLINES, None), srcpages, reader
-            )
-            self._insert_filtered_outline(
-                outline, outline_item_typ, None
-            )  # TODO : use before parameter
-
-        if "/Annots" not in excluded_fields:
-            for pag in srcpages.values():
-                lst = self._insert_filtered_annotations(
-                    pag.original_page.get("/Annots", ()), pag, srcpages, reader
-                )
-                if len(lst) > 0:
-                    pag[NameObject("/Annots")] = lst
-                self.clean_page(pag)
-
-        if "/B" not in excluded_fields:
-            self.add_filtered_articles("", srcpages, reader)
-
-        return
-
-    def _add_articles_thread(
-        self,
-        thread: DictionaryObject,  # thread entry from the reader's array of threads
-        pages: Dict[int, PageObject],
-        reader: PdfReader,
-    ) -> IndirectObject:
-        """
-        clone the thread with only the applicable articles
-
-        """
-        nthread = thread.clone(
-            self, force_duplicate=True, ignore_fields=("/F",)
-        )  # use of clone to keep link between reader and writer
-        self.threads.append(nthread.indirect_reference)
-        first_article = cast("DictionaryObject", thread["/F"])
-        current_article: Optional[DictionaryObject] = first_article
-        new_article: Optional[DictionaryObject] = None
-        while current_article is not None:
-            pag = self._get_cloned_page(
-                cast("PageObject", current_article["/P"]), pages, reader
-            )
-            if pag is not None:
-                if new_article is None:
-                    new_article = cast(
-                        "DictionaryObject",
-                        self._add_object(DictionaryObject()).get_object(),
-                    )
-                    new_first = new_article
-                    nthread[NameObject("/F")] = new_article.indirect_reference
-                else:
-                    new_article2 = cast(
-                        "DictionaryObject",
-                        self._add_object(
-                            DictionaryObject(
-                                {NameObject("/V"): new_article.indirect_reference}
-                            )
-                        ).get_object(),
-                    )
-                    new_article[NameObject("/N")] = new_article2.indirect_reference
-                    new_article = new_article2
-                new_article[NameObject("/P")] = pag
-                new_article[NameObject("/T")] = nthread.indirect_reference
-                new_article[NameObject("/R")] = current_article["/R"]
-                pag_obj = cast("PageObject", pag.get_object())
-                if "/B" not in pag_obj:
-                    pag_obj[NameObject("/B")] = ArrayObject()
-                cast("ArrayObject",
pag_obj["/B"]).append( - new_article.indirect_reference - ) - current_article = cast("DictionaryObject", current_article["/N"]) - if current_article == first_article: - new_article[NameObject("/N")] = new_first.indirect_reference # type: ignore - new_first[NameObject("/V")] = new_article.indirect_reference # type: ignore - current_article = None - assert nthread.indirect_reference is not None - return nthread.indirect_reference - - def add_filtered_articles( - self, - fltr: Union[Pattern, str], # thread entry from the reader's array of threads - pages: Dict[int, PageObject], - reader: PdfReader, - ) -> None: - """ - Add articles matching the defined criteria - """ - if isinstance(fltr, str): - fltr = re.compile(fltr) - elif not isinstance(fltr, Pattern): - fltr = re.compile("") - for p in pages.values(): - pp = p.original_page - for a in pp.get("/B", ()): - thr = a.get_object()["/T"] - if thr.indirect_reference.idnum not in self._id_translated[ - id(reader) - ] and fltr.search(thr["/I"]["/Title"]): - self._add_articles_thread(thr, pages, reader) - - def _get_cloned_page( - self, - page: Union[None, int, IndirectObject, PageObject, NullObject], - pages: Dict[int, PageObject], - reader: PdfReader, - ) -> Optional[IndirectObject]: - if isinstance(page, NullObject): - return None - if isinstance(page, int): - _i = reader.pages[page].indirect_reference - # elif isinstance(page, PageObject): - # _i = page.indirect_reference - elif isinstance(page, DictionaryObject) and page.get("/Type", "") == "/Page": - _i = page.indirect_reference - elif isinstance(page, IndirectObject): - _i = page - try: - return pages[_i.idnum].indirect_reference # type: ignore - except Exception: - return None - - def _insert_filtered_annotations( - self, - annots: Union[IndirectObject, List[DictionaryObject]], - page: PageObject, - pages: Dict[int, PageObject], - reader: PdfReader, - ) -> List[Destination]: - outlist = ArrayObject() - if isinstance(annots, IndirectObject): - annots = cast("List", 
annots.get_object()) - for an in annots: - ano = cast("DictionaryObject", an.get_object()) - if ( - ano["/Subtype"] != "/Link" - or "/A" not in ano - or cast("DictionaryObject", ano["/A"])["/S"] != "/GoTo" - or "/Dest" in ano - ): - if "/Dest" not in ano: - outlist.append(ano.clone(self).indirect_reference) - else: - d = ano["/Dest"] - if isinstance(d, str): - # it is a named dest - if str(d) in self.get_named_dest_root(): - outlist.append(ano.clone(self).indirect_reference) - else: - d = cast("ArrayObject", d) - p = self._get_cloned_page(d[0], pages, reader) - if p is not None: - anc = ano.clone(self, ignore_fields=("/Dest",)) - anc[NameObject("/Dest")] = ArrayObject([p] + d[1:]) - outlist.append(anc.indirect_reference) - else: - d = cast("DictionaryObject", ano["/A"])["/D"] - if isinstance(d, str): - # it is a named dest - if str(d) in self.get_named_dest_root(): - outlist.append(ano.clone(self).indirect_reference) - else: - d = cast("ArrayObject", d) - p = self._get_cloned_page(d[0], pages, reader) - if p is not None: - anc = ano.clone(self, ignore_fields=("/D",)) - anc = cast("DictionaryObject", anc) - cast("DictionaryObject", anc["/A"])[ - NameObject("/D") - ] = ArrayObject([p] + d[1:]) - outlist.append(anc.indirect_reference) - return outlist - - def _get_filtered_outline( - self, - node: Any, - pages: Dict[int, PageObject], - reader: PdfReader, - ) -> List[Destination]: - """Extract outline item entries that are part of the specified page set.""" - new_outline = [] - node = node.get_object() - if node.get("/Type", "") == "/Outlines" or "/Title" not in node: - node = node.get("/First", None) - if node is not None: - node = node.get_object() - new_outline += self._get_filtered_outline(node, pages, reader) - else: - v: Union[None, IndirectObject, NullObject] - while node is not None: - node = node.get_object() - o = cast("Destination", reader._build_outline_item(node)) - v = self._get_cloned_page(cast("PageObject", o["/Page"]), pages, reader) - if v is None: - 
v = NullObject() - o[NameObject("/Page")] = v - if "/First" in node: - o.childs = self._get_filtered_outline(node["/First"], pages, reader) - else: - o.childs = [] - if not isinstance(o["/Page"], NullObject) or len(o.childs) > 0: - new_outline.append(o) - node = node.get("/Next", None) - return new_outline - - def _clone_outline(self, dest: Destination) -> TreeObject: - n_ol = TreeObject() - self._add_object(n_ol) - n_ol[NameObject("/Title")] = TextStringObject(dest["/Title"]) - if not isinstance(dest["/Page"], NullObject): - if dest.node is not None and "/A" in dest.node: - n_ol[NameObject("/A")] = dest.node["/A"].clone(self) - # elif "/D" in dest.node: - # n_ol[NameObject("/Dest")] = dest.node["/D"].clone(self) - # elif "/Dest" in dest.node: - # n_ol[NameObject("/Dest")] = dest.node["/Dest"].clone(self) - else: - n_ol[NameObject("/Dest")] = dest.dest_array - # TODO: /SE - if dest.node is not None: - n_ol[NameObject("/F")] = NumberObject(dest.node.get("/F", 0)) - n_ol[NameObject("/C")] = ArrayObject( - dest.node.get( - "/C", [FloatObject(0.0), FloatObject(0.0), FloatObject(0.0)] - ) - ) - return n_ol - - def _insert_filtered_outline( - self, - outlines: List[Destination], - parent: Union[TreeObject, IndirectObject], - before: Union[None, TreeObject, IndirectObject] = None, - ) -> None: - for dest in outlines: - # TODO : can be improved to keep A and SE entries (ignored for the moment) - # np=self.add_outline_item_destination(dest,parent,before) - if dest.get("/Type", "") == "/Outlines" or "/Title" not in dest: - np = parent - else: - np = self._clone_outline(dest) - cast(TreeObject, parent.get_object()).insert_child(np, before, self) - self._insert_filtered_outline(dest.childs, np, None) - - def close(self) -> None: - """To match the functions from Merger""" - return - - # @deprecation_bookmark(bookmark="outline_item") - def find_outline_item( - self, - outline_item: Dict[str, Any], - root: Optional[OutlineType] = None, - ) -> Optional[List[int]]: - if root is 
None: - o = self.get_outline_root() - else: - o = cast("TreeObject", root) - - i = 0 - while o is not None: - if ( - o.indirect_reference == outline_item - or o.get("/Title", None) == outline_item - ): - return [i] - else: - if "/First" in o: - res = self.find_outline_item( - outline_item, cast(OutlineType, o["/First"]) - ) - if res: - return ([i] if "/Title" in o else []) + res - if "/Next" in o: - i += 1 - o = cast(TreeObject, o["/Next"]) - else: - return None - - @deprecation_bookmark(bookmark="outline_item") - def find_bookmark( - self, - outline_item: Dict[str, Any], - root: Optional[OutlineType] = None, - ) -> Optional[List[int]]: # pragma: no cover - """ - .. deprecated:: 2.9.0 - Use :meth:`find_outline_item` instead. - """ - return self.find_outline_item(outline_item, root) - - def reset_translation( - self, reader: Union[None, PdfReader, IndirectObject] = None - ) -> None: - """ - reset the translation table between reader and the writer object. - late cloning will create new independent objects - - :param reader: PdfReader or IndirectObject refering a PdfReader object. - if set to None or omitted, all tables will be reset. 
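The `reset_translation` behavior described above rests on a per-reader translation table: clones are cached by source object id so repeated merges reuse them, and resetting drops one reader's table (or all of them) so later cloning produces independent objects. A minimal stand-in for that bookkeeping, assuming nothing beyond the docstring (the class and method names here are illustrative, not PyPDF2 API):

```python
class TranslationCache:
    """Toy model of the writer's id-translation tables."""

    def __init__(self):
        # one table per source reader: key -> {old idnum: new idnum}
        self._id_translated = {}

    def translate(self, reader_key, old_idnum, new_idnum):
        self._id_translated.setdefault(reader_key, {})[old_idnum] = new_idnum

    def reset(self, reader_key=None):
        """None resets every table; otherwise only the given reader's."""
        if reader_key is None:
            self._id_translated = {}
        else:
            self._id_translated.pop(reader_key, None)
```

After `reset`, a subsequent clone of the same source object would no longer find a cached translation and would be duplicated afresh.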
- """ - if reader is None: - self._id_translated = {} - elif isinstance(reader, PdfReader): - try: - del self._id_translated[id(reader)] - except Exception: - pass - elif isinstance(reader, IndirectObject): - try: - del self._id_translated[id(reader.pdf)] - except Exception: - pass - else: - raise Exception("invalid parameter {reader}") - - -def _pdf_objectify(obj: Union[Dict[str, Any], str, int, List[Any]]) -> PdfObject: - if isinstance(obj, PdfObject): - return obj - if isinstance(obj, dict): - to_add = DictionaryObject() - for key, value in obj.items(): - name_key = NameObject(key) - casted_value = _pdf_objectify(value) - to_add[name_key] = casted_value - return to_add - elif isinstance(obj, list): - arr = ArrayObject() - for el in obj: - arr.append(_pdf_objectify(el)) - return arr - elif isinstance(obj, str): - if obj.startswith("/"): - return NameObject(obj) - else: - return TextStringObject(obj) - elif isinstance(obj, (int, float)): - return FloatObject(obj) - else: - raise NotImplementedError( - f"type(obj)={type(obj)} could not be casted to PdfObject" - ) - - -def _create_outline_item( - action_ref: Union[None, IndirectObject], - title: str, - color: Union[Tuple[float, float, float], str, None], - italic: bool, - bold: bool, -) -> TreeObject: - outline_item = TreeObject() - if action_ref is not None: - outline_item[NameObject("/A")] = action_ref - outline_item.update( - { - NameObject("/Title"): create_string_object(title), - } - ) - if color: - if isinstance(color, str): - color = hex_to_rgb(color) - prec = decimal.Decimal("1.00000") - outline_item.update( - { - NameObject("/C"): ArrayObject( - [FloatObject(decimal.Decimal(c).quantize(prec)) for c in color] - ) - } - ) - if italic or bold: - format_flag = 0 - if italic: - format_flag += 1 - if bold: - format_flag += 2 - outline_item.update({NameObject("/F"): NumberObject(format_flag)}) - return outline_item - - -class PdfFileWriter(PdfWriter): # pragma: no cover - def __init__(self, *args: Any, **kwargs: 
Any) -> None: - deprecation_with_replacement("PdfFileWriter", "PdfWriter", "3.0.0") - super().__init__(*args, **kwargs) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/constants.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/constants.py deleted file mode 100644 index a2f8c49e..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/constants.py +++ /dev/null @@ -1,461 +0,0 @@ -""" -See Portable Document Format Reference Manual, 1993. ISBN 0-201-62628-4. - -See https://ia802202.us.archive.org/8/items/pdfy-0vt8s-egqFwDl7L2/PDF%20Reference%201.0.pdf - -PDF Reference, third edition, Version 1.4, 2001. ISBN 0-201-75839-3. - -PDF Reference, sixth edition, Version 1.7, 2006. -""" - -from enum import IntFlag -from typing import Dict, Tuple - - -class Core: - """Keywords that don't quite belong anywhere else.""" - - OUTLINES = "/Outlines" - THREADS = "/Threads" - PAGE = "/Page" - PAGES = "/Pages" - CATALOG = "/Catalog" - - -class TrailerKeys: - ROOT = "/Root" - ENCRYPT = "/Encrypt" - ID = "/ID" - INFO = "/Info" - SIZE = "/Size" - - -class CatalogAttributes: - NAMES = "/Names" - DESTS = "/Dests" - - -class EncryptionDictAttributes: - """ - Additional encryption dictionary entries for the standard security handler. 
- - TABLE 3.19, Page 122 - """ - - R = "/R" # number, required; revision of the standard security handler - O = "/O" # 32-byte string, required - U = "/U" # 32-byte string, required - P = "/P" # integer flag, required; permitted operations - ENCRYPT_METADATA = "/EncryptMetadata" # boolean flag, optional - - -class UserAccessPermissions(IntFlag): - """TABLE 3.20 User access permissions""" - - R1 = 1 - R2 = 2 - PRINT = 4 - MODIFY = 8 - EXTRACT = 16 - ADD_OR_MODIFY = 32 - R7 = 64 - R8 = 128 - FILL_FORM_FIELDS = 256 - EXTRACT_TEXT_AND_GRAPHICS = 512 - ASSEMBLE_DOC = 1024 - PRINT_TO_REPRESENTATION = 2048 - R13 = 2**12 - R14 = 2**13 - R15 = 2**14 - R16 = 2**15 - R17 = 2**16 - R18 = 2**17 - R19 = 2**18 - R20 = 2**19 - R21 = 2**20 - R22 = 2**21 - R23 = 2**22 - R24 = 2**23 - R25 = 2**24 - R26 = 2**25 - R27 = 2**26 - R28 = 2**27 - R29 = 2**28 - R30 = 2**29 - R31 = 2**30 - R32 = 2**31 - - -class Ressources: - """TABLE 3.30 Entries in a resource dictionary.""" - - EXT_G_STATE = "/ExtGState" # dictionary, optional - COLOR_SPACE = "/ColorSpace" # dictionary, optional - PATTERN = "/Pattern" # dictionary, optional - SHADING = "/Shading" # dictionary, optional - XOBJECT = "/XObject" # dictionary, optional - FONT = "/Font" # dictionary, optional - PROC_SET = "/ProcSet" # array, optional - PROPERTIES = "/Properties" # dictionary, optional - - -class PagesAttributes: - """Page Attributes, Table 6.2, Page 52.""" - - TYPE = "/Type" # name, required; must be /Pages - KIDS = "/Kids" # array, required; List of indirect references - COUNT = "/Count" # integer, required; the number of all nodes und this node - PARENT = "/Parent" # dictionary, required; indirect reference to pages object - - -class PageAttributes: - """TABLE 3.27 Entries in a page object.""" - - TYPE = "/Type" # name, required; must be /Page - PARENT = "/Parent" # dictionary, required; a pages object - LAST_MODIFIED = ( - "/LastModified" # date, optional; date and time of last modification - ) - RESOURCES = "/Resources" # 
dictionary, required if there are any - MEDIABOX = "/MediaBox" # rectangle, required; rectangle specifying page size - CROPBOX = "/CropBox" # rectangle, optional; rectangle - BLEEDBOX = "/BleedBox" # rectangle, optional; rectangle - TRIMBOX = "/TrimBox" # rectangle, optional; rectangle - ARTBOX = "/ArtBox" # rectangle, optional; rectangle - BOX_COLOR_INFO = "/BoxColorInfo" # dictionary, optional - CONTENTS = "/Contents" # stream or array, optional - ROTATE = "/Rotate" # integer, optional; page rotation in degrees - GROUP = "/Group" # dictionary, optional; page group - THUMB = "/Thumb" # stream, optional; indirect reference to image of the page - B = "/B" # array, optional - DUR = "/Dur" # number, optional - TRANS = "/Trans" # dictionary, optional - ANNOTS = "/Annots" # array, optional; an array of annotations - AA = "/AA" # dictionary, optional - METADATA = "/Metadata" # stream, optional - PIECE_INFO = "/PieceInfo" # dictionary, optional - STRUCT_PARENTS = "/StructParents" # integer, optional - ID = "/ID" # byte string, optional - PZ = "/PZ" # number, optional - TABS = "/Tabs" # name, optional - TEMPLATE_INSTANTIATED = "/TemplateInstantiated" # name, optional - PRES_STEPS = "/PresSteps" # dictionary, optional - USER_UNIT = "/UserUnit" # number, optional - VP = "/VP" # dictionary, optional - - -class FileSpecificationDictionaryEntries: - """TABLE 3.41 Entries in a file specification dictionary""" - - Type = "/Type" - FS = "/FS" # The name of the file system to be used to interpret this file specification - F = "/F" # A file specification string of the form described in Section 3.10.1 - EF = "/EF" # dictionary, containing a subset of the keys F , UF , DOS , Mac , and Unix - - -class StreamAttributes: - """Table 4.2.""" - - LENGTH = "/Length" # integer, required - FILTER = "/Filter" # name or array of names, optional - DECODE_PARMS = "/DecodeParms" # variable, optional -- 'decodeParams is wrong - - -class FilterTypes: - """ - Table 4.3 of the 1.4 Manual. 
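The `UserAccessPermissions` flags defined above are the bit positions of the encryption dictionary's `/P` entry, so granting and checking permissions is ordinary `IntFlag` arithmetic. A short sketch using a subset of the same bit values from TABLE 3.20:

```python
from enum import IntFlag

class UserPerms(IntFlag):
    # bit values as in TABLE 3.20 (subset for illustration)
    PRINT = 4
    MODIFY = 8
    EXTRACT = 16
    FILL_FORM_FIELDS = 256

allowed = UserPerms.PRINT | UserPerms.FILL_FORM_FIELDS
assert int(allowed) == 260                 # the integer stored in /P
assert UserPerms.PRINT in allowed          # membership test per flag
assert UserPerms.MODIFY not in allowed
```

The real `/P` value also carries reserved bits (R1, R2, R7, ...) that the full enum above enumerates explicitly.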
- - Page 354 of the 1.7 Manual - """ - - ASCII_HEX_DECODE = "/ASCIIHexDecode" # abbreviation: AHx - ASCII_85_DECODE = "/ASCII85Decode" # abbreviation: A85 - LZW_DECODE = "/LZWDecode" # abbreviation: LZW - FLATE_DECODE = "/FlateDecode" # abbreviation: Fl, PDF 1.2 - RUN_LENGTH_DECODE = "/RunLengthDecode" # abbreviation: RL - CCITT_FAX_DECODE = "/CCITTFaxDecode" # abbreviation: CCF - DCT_DECODE = "/DCTDecode" # abbreviation: DCT - - -class FilterTypeAbbreviations: - """Table 4.44 of the 1.7 Manual (page 353ff).""" - - AHx = "/AHx" - A85 = "/A85" - LZW = "/LZW" - FL = "/Fl" # FlateDecode - RL = "/RL" - CCF = "/CCF" - DCT = "/DCT" - - -class LzwFilterParameters: - """Table 4.4.""" - - PREDICTOR = "/Predictor" # integer - COLUMNS = "/Columns" # integer - COLORS = "/Colors" # integer - BITS_PER_COMPONENT = "/BitsPerComponent" # integer - EARLY_CHANGE = "/EarlyChange" # integer - - -class CcittFaxDecodeParameters: - """Table 4.5.""" - - K = "/K" # integer - END_OF_LINE = "/EndOfLine" # boolean - ENCODED_BYTE_ALIGN = "/EncodedByteAlign" # boolean - COLUMNS = "/Columns" # integer - ROWS = "/Rows" # integer - END_OF_BLOCK = "/EndOfBlock" # boolean - BLACK_IS_1 = "/BlackIs1" # boolean - DAMAGED_ROWS_BEFORE_ERROR = "/DamagedRowsBeforeError" # integer - - -class ImageAttributes: - """Table 6.20.""" - - TYPE = "/Type" # name, required; must be /XObject - SUBTYPE = "/Subtype" # name, required; must be /Image - NAME = "/Name" # name, required - WIDTH = "/Width" # integer, required - HEIGHT = "/Height" # integer, required - BITS_PER_COMPONENT = "/BitsPerComponent" # integer, required - COLOR_SPACE = "/ColorSpace" # name, required - DECODE = "/Decode" # array, optional - INTERPOLATE = "/Interpolate" # boolean, optional - IMAGE_MASK = "/ImageMask" # boolean, optional - - -class ColorSpaces: - DEVICE_RGB = "/DeviceRGB" - DEVICE_CMYK = "/DeviceCMYK" - DEVICE_GRAY = "/DeviceGray" - - -class TypArguments: - """Table 8.2 of the PDF 1.7 reference.""" - - LEFT = "/Left" - RIGHT = "/Right" - 
BOTTOM = "/Bottom" - TOP = "/Top" - - -class TypFitArguments: - """Table 8.2 of the PDF 1.7 reference.""" - - FIT = "/Fit" - FIT_V = "/FitV" - FIT_BV = "/FitBV" - FIT_B = "/FitB" - FIT_H = "/FitH" - FIT_BH = "/FitBH" - FIT_R = "/FitR" - XYZ = "/XYZ" - - -class GoToActionArguments: - S = "/S" # name, required: type of action - D = "/D" # name / byte string /array, required: Destination to jump to - - -class AnnotationDictionaryAttributes: - """TABLE 8.15 Entries common to all annotation dictionaries""" - - Type = "/Type" - Subtype = "/Subtype" - Rect = "/Rect" - Contents = "/Contents" - P = "/P" - NM = "/NM" - M = "/M" - F = "/F" - AP = "/AP" - AS = "/AS" - Border = "/Border" - C = "/C" - StructParent = "/StructParent" - OC = "/OC" - - -class InteractiveFormDictEntries: - Fields = "/Fields" - NeedAppearances = "/NeedAppearances" - SigFlags = "/SigFlags" - CO = "/CO" - DR = "/DR" - DA = "/DA" - Q = "/Q" - XFA = "/XFA" - - -class FieldDictionaryAttributes: - """TABLE 8.69 Entries common to all field dictionaries (PDF 1.7 reference).""" - - FT = "/FT" # name, required for terminal fields - Parent = "/Parent" # dictionary, required for children - Kids = "/Kids" # array, sometimes required - T = "/T" # text string, optional - TU = "/TU" # text string, optional - TM = "/TM" # text string, optional - Ff = "/Ff" # integer, optional - V = "/V" # text string, optional - DV = "/DV" # text string, optional - AA = "/AA" # dictionary, optional - - @classmethod - def attributes(cls) -> Tuple[str, ...]: - return ( - cls.TM, - cls.T, - cls.FT, - cls.Parent, - cls.TU, - cls.Ff, - cls.V, - cls.DV, - cls.Kids, - cls.AA, - ) - - @classmethod - def attributes_dict(cls) -> Dict[str, str]: - return { - cls.FT: "Field Type", - cls.Parent: "Parent", - cls.T: "Field Name", - cls.TU: "Alternate Field Name", - cls.TM: "Mapping Name", - cls.Ff: "Field Flags", - cls.V: "Value", - cls.DV: "Default Value", - } - - -class CheckboxRadioButtonAttributes: - """TABLE 8.76 Field flags common to all field 
types""" - - Opt = "/Opt" # Options, Optional - - @classmethod - def attributes(cls) -> Tuple[str, ...]: - return (cls.Opt,) - - @classmethod - def attributes_dict(cls) -> Dict[str, str]: - return { - cls.Opt: "Options", - } - - -class FieldFlag(IntFlag): - """TABLE 8.70 Field flags common to all field types""" - - READ_ONLY = 1 - REQUIRED = 2 - NO_EXPORT = 4 - - -class DocumentInformationAttributes: - """TABLE 10.2 Entries in the document information dictionary.""" - - TITLE = "/Title" # text string, optional - AUTHOR = "/Author" # text string, optional - SUBJECT = "/Subject" # text string, optional - KEYWORDS = "/Keywords" # text string, optional - CREATOR = "/Creator" # text string, optional - PRODUCER = "/Producer" # text string, optional - CREATION_DATE = "/CreationDate" # date, optional - MOD_DATE = "/ModDate" # date, optional - TRAPPED = "/Trapped" # name, optional - - -class PageLayouts: - """Page 84, PDF 1.4 reference.""" - - SINGLE_PAGE = "/SinglePage" - ONE_COLUMN = "/OneColumn" - TWO_COLUMN_LEFT = "/TwoColumnLeft" - TWO_COLUMN_RIGHT = "/TwoColumnRight" - - -class GraphicsStateParameters: - """Table 4.8 of the 1.7 reference.""" - - TYPE = "/Type" # name, optional - LW = "/LW" # number, optional - # TODO: Many more! 
- FONT = "/Font" # array, optional - S_MASK = "/SMask" # dictionary or name, optional - - -class CatalogDictionary: - """Table 3.25 in the 1.7 reference.""" - - TYPE = "/Type" # name, required; must be /Catalog - VERSION = "/Version" # name - PAGES = "/Pages" # dictionary, required - PAGE_LABELS = "/PageLabels" # number tree, optional - NAMES = "/Names" # dictionary, optional - DESTS = "/Dests" # dictionary, optional - VIEWER_PREFERENCES = "/ViewerPreferences" # dictionary, optional - PAGE_LAYOUT = "/PageLayout" # name, optional - PAGE_MODE = "/PageMode" # name, optional - OUTLINES = "/Outlines" # dictionary, optional - THREADS = "/Threads" # array, optional - OPEN_ACTION = "/OpenAction" # array or dictionary or name, optional - AA = "/AA" # dictionary, optional - URI = "/URI" # dictionary, optional - ACRO_FORM = "/AcroForm" # dictionary, optional - METADATA = "/Metadata" # stream, optional - STRUCT_TREE_ROOT = "/StructTreeRoot" # dictionary, optional - MARK_INFO = "/MarkInfo" # dictionary, optional - LANG = "/Lang" # text string, optional - SPIDER_INFO = "/SpiderInfo" # dictionary, optional - OUTPUT_INTENTS = "/OutputIntents" # array, optional - PIECE_INFO = "/PieceInfo" # dictionary, optional - OC_PROPERTIES = "/OCProperties" # dictionary, optional - PERMS = "/Perms" # dictionary, optional - LEGAL = "/Legal" # dictionary, optional - REQUIREMENTS = "/Requirements" # array, optional - COLLECTION = "/Collection" # dictionary, optional - NEEDS_RENDERING = "/NeedsRendering" # boolean, optional - - -class OutlineFontFlag(IntFlag): - """ - A class used as an enumerable flag for formatting an outline font - """ - - italic = 1 - bold = 2 - - -PDF_KEYS = ( - AnnotationDictionaryAttributes, - CatalogAttributes, - CatalogDictionary, - CcittFaxDecodeParameters, - CheckboxRadioButtonAttributes, - ColorSpaces, - Core, - DocumentInformationAttributes, - EncryptionDictAttributes, - FieldDictionaryAttributes, - FilterTypeAbbreviations, - FilterTypes, - GoToActionArguments, - 
GraphicsStateParameters, - ImageAttributes, - FileSpecificationDictionaryEntries, - LzwFilterParameters, - PageAttributes, - PageLayouts, - PagesAttributes, - Ressources, - StreamAttributes, - TrailerKeys, - TypArguments, - TypFitArguments, -) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/errors.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/errors.py deleted file mode 100644 index a84b0569..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/errors.py +++ /dev/null @@ -1,54 +0,0 @@ -""" -All errors/exceptions PyPDF2 raises and all of the warnings it uses. - -Please note that broken PDF files might cause other Exceptions. -""" - - -class DeprecationError(Exception): - """Raised when a deprecated feature is used.""" - - pass - - -class DependencyError(Exception): - pass - - -class PyPdfError(Exception): - pass - - -class PdfReadError(PyPdfError): - pass - - -class PageSizeNotDefinedError(PyPdfError): - pass - - -class PdfReadWarning(UserWarning): - pass - - -class PdfStreamError(PdfReadError): - pass - - -class ParseError(Exception): - pass - - -class FileNotDecryptedError(PdfReadError): - pass - - -class WrongPasswordError(FileNotDecryptedError): - pass - - -class EmptyFileError(PdfReadError): - pass - - -STREAM_TRUNCATED_PREMATURELY = "Stream has ended unexpectedly" diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/filters.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/filters.py deleted file mode 100644 index 11f6a21b..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/filters.py +++ /dev/null @@ -1,645 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. 
-# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - - -""" -Implementation of stream filters for PDF. 
- -See TABLE H.1 Abbreviations for standard filter names -""" -__author__ = "Mathieu Fenniak" -__author_email__ = "biziqe@mathieu.fenniak.net" - -import math -import struct -import zlib -from io import BytesIO -from typing import Any, Dict, Optional, Tuple, Union, cast - -from .generic import ArrayObject, DictionaryObject, IndirectObject, NameObject - -try: - from typing import Literal # type: ignore[attr-defined] -except ImportError: - # PEP 586 introduced typing.Literal with Python 3.8 - # For older Python versions, the backport typing_extensions is necessary: - from typing_extensions import Literal # type: ignore[misc] - -from ._utils import b_, deprecate_with_replacement, ord_, paeth_predictor -from .constants import CcittFaxDecodeParameters as CCITT -from .constants import ColorSpaces -from .constants import FilterTypeAbbreviations as FTA -from .constants import FilterTypes as FT -from .constants import GraphicsStateParameters as G -from .constants import ImageAttributes as IA -from .constants import LzwFilterParameters as LZW -from .constants import StreamAttributes as SA -from .errors import PdfReadError, PdfStreamError - - -def decompress(data: bytes) -> bytes: - try: - return zlib.decompress(data) - except zlib.error: - d = zlib.decompressobj(zlib.MAX_WBITS | 32) - result_str = b"" - for b in [data[i : i + 1] for i in range(len(data))]: - try: - result_str += d.decompress(b) - except zlib.error: - pass - return result_str - - -class FlateDecode: - @staticmethod - def decode( - data: bytes, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - **kwargs: Any, - ) -> bytes: - """ - Decode data which is flate-encoded. - - :param data: flate-encoded data. - :param decode_parms: a dictionary of values, understanding the - "/Predictor": key only - :return: the flate-decoded data. 
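The `decompress` helper above is plain zlib inflate with a byte-at-a-time fallback that salvages whatever still decodes from a corrupt stream. The same logic, condensed, with the ordinary round trip shown:

```python
import zlib

def tolerant_decompress(data: bytes) -> bytes:
    """Inflate data; on error, feed it byte by byte and keep what decodes."""
    try:
        return zlib.decompress(data)
    except zlib.error:
        # MAX_WBITS | 32 lets zlib autodetect zlib or gzip headers
        d = zlib.decompressobj(zlib.MAX_WBITS | 32)
        out = b""
        for i in range(len(data)):
            try:
                out += d.decompress(data[i : i + 1])
            except zlib.error:
                pass                      # skip undecodable bytes
        return out

payload = b"stream content " * 4
assert tolerant_decompress(zlib.compress(payload)) == payload
```

The fallback trades speed for robustness, which matters because broken PDFs routinely carry truncated or padded `/FlateDecode` streams.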
- - :raises PdfReadError: - """ - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] - str_data = decompress(data) - predictor = 1 - - if decode_parms: - try: - if isinstance(decode_parms, ArrayObject): - for decode_parm in decode_parms: - if "/Predictor" in decode_parm: - predictor = decode_parm["/Predictor"] - else: - predictor = decode_parms.get("/Predictor", 1) - except (AttributeError, TypeError): # Type Error is NullObject - pass # Usually an array with a null object was read - # predictor 1 == no predictor - if predictor != 1: - # The /Columns param. has 1 as the default value; see ISO 32000, - # Β§7.4.4.3 LZWDecode and FlateDecode Parameters, Table 8 - DEFAULT_BITS_PER_COMPONENT = 8 - if isinstance(decode_parms, ArrayObject): - columns = 1 - bits_per_component = DEFAULT_BITS_PER_COMPONENT - for decode_parm in decode_parms: - if "/Columns" in decode_parm: - columns = decode_parm["/Columns"] - if LZW.BITS_PER_COMPONENT in decode_parm: - bits_per_component = decode_parm[LZW.BITS_PER_COMPONENT] - else: - columns = ( - 1 if decode_parms is None else decode_parms.get(LZW.COLUMNS, 1) - ) - bits_per_component = ( - decode_parms.get(LZW.BITS_PER_COMPONENT, DEFAULT_BITS_PER_COMPONENT) - if decode_parms - else DEFAULT_BITS_PER_COMPONENT - ) - - # PNG predictor can vary by row and so is the lead byte on each row - rowlength = ( - math.ceil(columns * bits_per_component / 8) + 1 - ) # number of bytes - - # PNG prediction: - if 10 <= predictor <= 15: - str_data = FlateDecode._decode_png_prediction(str_data, columns, rowlength) # type: ignore - else: - # unsupported predictor - raise PdfReadError(f"Unsupported flatedecode predictor {predictor!r}") - return str_data - - @staticmethod - def _decode_png_prediction(data: str, columns: int, rowlength: int) -> bytes: - output = BytesIO() - # PNG prediction can vary from row to row - if len(data) % rowlength != 0: - raise 
PdfReadError("Image data is not rectangular") - prev_rowdata = (0,) * rowlength - for row in range(len(data) // rowlength): - rowdata = [ - ord_(x) for x in data[(row * rowlength) : ((row + 1) * rowlength)] - ] - filter_byte = rowdata[0] - - if filter_byte == 0: - pass - elif filter_byte == 1: - for i in range(2, rowlength): - rowdata[i] = (rowdata[i] + rowdata[i - 1]) % 256 - elif filter_byte == 2: - for i in range(1, rowlength): - rowdata[i] = (rowdata[i] + prev_rowdata[i]) % 256 - elif filter_byte == 3: - for i in range(1, rowlength): - left = rowdata[i - 1] if i > 1 else 0 - floor = math.floor(left + prev_rowdata[i]) / 2 - rowdata[i] = (rowdata[i] + int(floor)) % 256 - elif filter_byte == 4: - for i in range(1, rowlength): - left = rowdata[i - 1] if i > 1 else 0 - up = prev_rowdata[i] - up_left = prev_rowdata[i - 1] if i > 1 else 0 - paeth = paeth_predictor(left, up, up_left) - rowdata[i] = (rowdata[i] + paeth) % 256 - else: - # unsupported PNG filter - raise PdfReadError(f"Unsupported PNG filter {filter_byte!r}") - prev_rowdata = tuple(rowdata) - output.write(bytearray(rowdata[1:])) - return output.getvalue() - - @staticmethod - def encode(data: bytes) -> bytes: - return zlib.compress(data) - - -class ASCIIHexDecode: - """ - The ASCIIHexDecode filter decodes data that has been encoded in ASCII - hexadecimal form into a base-7 ASCII format. - """ - - @staticmethod - def decode( - data: str, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, # noqa: F841 - **kwargs: Any, - ) -> str: - """ - :param data: a str sequence of hexadecimal-encoded values to be - converted into a base-7 ASCII string - :param decode_parms: - :return: a string conversion in base-7 ASCII, where each of its values - v is such that 0 <= ord(v) <= 127. 
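The row filters handled above are the five PNG filter types (None, Sub, Up, Average, Paeth); `paeth_predictor` is imported from `._utils`. A self-contained sketch of the Paeth predictor and the Up filter, following the PNG specification:

```python
def paeth_predictor(left: int, up: int, up_left: int) -> int:
    """Paeth: predict with whichever neighbor is closest to left + up - up_left."""
    p = left + up - up_left
    pa, pb, pc = abs(p - left), abs(p - up), abs(p - up_left)
    if pa <= pb and pa <= pc:
        return left
    if pb <= pc:
        return up
    return up_left

def unfilter_up(row: list, prev_row: list) -> list:
    """PNG filter type 2: each byte is stored as a delta from the byte above."""
    return [(cur + up) % 256 for cur, up in zip(row, prev_row)]

assert paeth_predictor(10, 20, 15) == 15
assert unfilter_up([1, 2, 3], [10, 20, 30]) == [11, 22, 33]
```

As in the decoder above, each decoded row becomes `prev_row` for the next one, which is why the loop keeps `prev_rowdata` around.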
- - :raises PdfStreamError: - """ - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] # noqa: F841 - retval = "" - hex_pair = "" - index = 0 - while True: - if index >= len(data): - raise PdfStreamError("Unexpected EOD in ASCIIHexDecode") - char = data[index] - if char == ">": - break - elif char.isspace(): - index += 1 - continue - hex_pair += char - if len(hex_pair) == 2: - retval += chr(int(hex_pair, base=16)) - hex_pair = "" - index += 1 - assert hex_pair == "" - return retval - - -class LZWDecode: - """Taken from: - http://www.java2s.com/Open-Source/Java-Document/PDF/PDF-Renderer/com/sun/pdfview/decode/LZWDecode.java.htm - """ - - class Decoder: - def __init__(self, data: bytes) -> None: - self.STOP = 257 - self.CLEARDICT = 256 - self.data = data - self.bytepos = 0 - self.bitpos = 0 - self.dict = [""] * 4096 - for i in range(256): - self.dict[i] = chr(i) - self.reset_dict() - - def reset_dict(self) -> None: - self.dictlen = 258 - self.bitspercode = 9 - - def next_code(self) -> int: - fillbits = self.bitspercode - value = 0 - while fillbits > 0: - if self.bytepos >= len(self.data): - return -1 - nextbits = ord_(self.data[self.bytepos]) - bitsfromhere = 8 - self.bitpos - bitsfromhere = min(bitsfromhere, fillbits) - value |= ( - (nextbits >> (8 - self.bitpos - bitsfromhere)) - & (0xFF >> (8 - bitsfromhere)) - ) << (fillbits - bitsfromhere) - fillbits -= bitsfromhere - self.bitpos += bitsfromhere - if self.bitpos >= 8: - self.bitpos = 0 - self.bytepos = self.bytepos + 1 - return value - - def decode(self) -> str: - """ - TIFF 6.0 specification explains in sufficient details the steps to - implement the LZW encode() and decode() algorithms. 
- - algorithm derived from: - http://www.rasip.fer.hr/research/compress/algorithms/fund/lz/lzw.html - and the PDFReference - - :raises PdfReadError: If the stop code is missing - """ - cW = self.CLEARDICT - baos = "" - while True: - pW = cW - cW = self.next_code() - if cW == -1: - raise PdfReadError("Missed the stop code in LZWDecode!") - if cW == self.STOP: - break - elif cW == self.CLEARDICT: - self.reset_dict() - elif pW == self.CLEARDICT: - baos += self.dict[cW] - else: - if cW < self.dictlen: - baos += self.dict[cW] - p = self.dict[pW] + self.dict[cW][0] - self.dict[self.dictlen] = p - self.dictlen += 1 - else: - p = self.dict[pW] + self.dict[pW][0] - baos += p - self.dict[self.dictlen] = p - self.dictlen += 1 - if ( - self.dictlen >= (1 << self.bitspercode) - 1 - and self.bitspercode < 12 - ): - self.bitspercode += 1 - return baos - - @staticmethod - def decode( - data: bytes, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - **kwargs: Any, - ) -> str: - """ - :param data: ``bytes`` or ``str`` text to decode. - :param decode_parms: a dictionary of parameter values. - :return: decoded data. 
- """ - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] # noqa: F841 - return LZWDecode.Decoder(data).decode() - - -class ASCII85Decode: - """Decodes string ASCII85-encoded data into a byte format.""" - - @staticmethod - def decode( - data: Union[str, bytes], - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - **kwargs: Any, - ) -> bytes: - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] # noqa: F841 - if isinstance(data, str): - data = data.encode("ascii") - group_index = b = 0 - out = bytearray() - for char in data: - if ord("!") <= char and char <= ord("u"): - group_index += 1 - b = b * 85 + (char - 33) - if group_index == 5: - out += struct.pack(b">L", b) - group_index = b = 0 - elif char == ord("z"): - assert group_index == 0 - out += b"\0\0\0\0" - elif char == ord("~"): - if group_index: - for _ in range(5 - group_index): - b = b * 85 + 84 - out += struct.pack(b">L", b)[: group_index - 1] - break - return bytes(out) - - -class DCTDecode: - @staticmethod - def decode( - data: bytes, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - **kwargs: Any, - ) -> bytes: - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] # noqa: F841 - return data - - -class JPXDecode: - @staticmethod - def decode( - data: bytes, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - **kwargs: Any, - ) -> bytes: - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] # noqa: F841 - return data - - -class CCITParameters: - """TABLE 3.9 Optional parameters for the CCITTFaxDecode filter.""" - - def __init__(self, K: int = 0, columns: 
int = 0, rows: int = 0) -> None: - self.K = K - self.EndOfBlock = None - self.EndOfLine = None - self.EncodedByteAlign = None - self.columns = columns # width - self.rows = rows # height - self.DamagedRowsBeforeError = None - - @property - def group(self) -> int: - if self.K < 0: - CCITTgroup = 4 - else: - # k == 0: Pure one-dimensional encoding (Group 3, 1-D) - # k > 0: Mixed one- and two-dimensional encoding (Group 3, 2-D) - CCITTgroup = 3 - return CCITTgroup - - -class CCITTFaxDecode: - """ - See 3.3.5 CCITTFaxDecode Filter (PDF 1.7 Standard). - - Either Group 3 or Group 4 CCITT facsimile (fax) encoding. - CCITT encoding is bit-oriented, not byte-oriented. - - See: TABLE 3.9 Optional parameters for the CCITTFaxDecode filter - """ - - @staticmethod - def _get_parameters( - parameters: Union[None, ArrayObject, DictionaryObject], rows: int - ) -> CCITParameters: - # TABLE 3.9 Optional parameters for the CCITTFaxDecode filter - k = 0 - columns = 1728 - if parameters: - if isinstance(parameters, ArrayObject): - for decode_parm in parameters: - if CCITT.COLUMNS in decode_parm: - columns = decode_parm[CCITT.COLUMNS] - if CCITT.K in decode_parm: - k = decode_parm[CCITT.K] - else: - if CCITT.COLUMNS in parameters: - columns = parameters[CCITT.COLUMNS] # type: ignore - if CCITT.K in parameters: - k = parameters[CCITT.K] # type: ignore - - return CCITParameters(k, columns, rows) - - @staticmethod - def decode( - data: bytes, - decode_parms: Union[None, ArrayObject, DictionaryObject] = None, - height: int = 0, - **kwargs: Any, - ) -> bytes: - if "decodeParms" in kwargs: # pragma: no cover - deprecate_with_replacement("decodeParms", "parameters", "4.0.0") - decode_parms = kwargs["decodeParms"] - parms = CCITTFaxDecode._get_parameters(decode_parms, height) - - img_size = len(data) - tiff_header_struct = "<2shlh" + "hhll" * 8 + "h" - tiff_header = struct.pack( - tiff_header_struct, - b"II", # Byte order indication: Little endian - 42, # Version number (always 42) - 8, # Offset 
to first IFD - 8, # Number of tags in IFD - 256, - 4, - 1, - parms.columns, # ImageWidth, LONG, 1, width - 257, - 4, - 1, - parms.rows, # ImageLength, LONG, 1, length - 258, - 3, - 1, - 1, # BitsPerSample, SHORT, 1, 1 - 259, - 3, - 1, - parms.group, # Compression, SHORT, 1, 4 = CCITT Group 4 fax encoding - 262, - 3, - 1, - 0, # Thresholding, SHORT, 1, 0 = WhiteIsZero - 273, - 4, - 1, - struct.calcsize( - tiff_header_struct - ), # StripOffsets, LONG, 1, length of header - 278, - 4, - 1, - parms.rows, # RowsPerStrip, LONG, 1, length - 279, - 4, - 1, - img_size, # StripByteCounts, LONG, 1, size of image - 0, # last IFD - ) - - return tiff_header + data - - -def decode_stream_data(stream: Any) -> Union[str, bytes]: # utils.StreamObject - filters = stream.get(SA.FILTER, ()) - if isinstance(filters, IndirectObject): - filters = cast(ArrayObject, filters.get_object()) - if len(filters) and not isinstance(filters[0], NameObject): - # we have a single filter instance - filters = (filters,) - data: bytes = stream._data - # If there is not data to decode we should not try to decode the data. 
- if data: - for filter_type in filters: - if filter_type in (FT.FLATE_DECODE, FTA.FL): - data = FlateDecode.decode(data, stream.get(SA.DECODE_PARMS)) - elif filter_type in (FT.ASCII_HEX_DECODE, FTA.AHx): - data = ASCIIHexDecode.decode(data) # type: ignore - elif filter_type in (FT.LZW_DECODE, FTA.LZW): - data = LZWDecode.decode(data, stream.get(SA.DECODE_PARMS)) # type: ignore - elif filter_type in (FT.ASCII_85_DECODE, FTA.A85): - data = ASCII85Decode.decode(data) - elif filter_type == FT.DCT_DECODE: - data = DCTDecode.decode(data) - elif filter_type == "/JPXDecode": - data = JPXDecode.decode(data) - elif filter_type == FT.CCITT_FAX_DECODE: - height = stream.get(IA.HEIGHT, ()) - data = CCITTFaxDecode.decode(data, stream.get(SA.DECODE_PARMS), height) - elif filter_type == "/Crypt": - decode_parms = stream.get(SA.DECODE_PARMS, {}) - if "/Name" not in decode_parms and "/Type" not in decode_parms: - pass - else: - raise NotImplementedError( - "/Crypt filter with /Name or /Type not supported yet" - ) - else: - # Unsupported filter - raise NotImplementedError(f"unsupported filter {filter_type}") - return data - - -def decodeStreamData(stream: Any) -> Union[str, bytes]: # pragma: no cover - deprecate_with_replacement("decodeStreamData", "decode_stream_data", "4.0.0") - return decode_stream_data(stream) - - -def _xobj_to_image(x_object_obj: Dict[str, Any]) -> Tuple[Optional[str], bytes]: - """ - Users need to have the pillow package installed. - - It's unclear if PyPDF2 will keep this function here, hence it's private. - It might get removed at any point. - - :return: Tuple[file extension, bytes] - """ - try: - from PIL import Image - except ImportError: - raise ImportError( - "pillow is required to do image extraction. 
" - "It can be installed via 'pip install PyPDF2[image]'" - ) - - size = (x_object_obj[IA.WIDTH], x_object_obj[IA.HEIGHT]) - data = x_object_obj.get_data() # type: ignore - if ( - IA.COLOR_SPACE in x_object_obj - and x_object_obj[IA.COLOR_SPACE] == ColorSpaces.DEVICE_RGB - ): - # https://pillow.readthedocs.io/en/stable/handbook/concepts.html#modes - mode: Literal["RGB", "P"] = "RGB" - else: - mode = "P" - extension = None - if SA.FILTER in x_object_obj: - if x_object_obj[SA.FILTER] == FT.FLATE_DECODE: - extension = ".png" # mime_type = "image/png" - color_space = None - if "/ColorSpace" in x_object_obj: - color_space = x_object_obj["/ColorSpace"].get_object() - if ( - isinstance(color_space, ArrayObject) - and color_space[0] == "/Indexed" - ): - color_space, base, hival, lookup = ( - value.get_object() for value in color_space - ) - - img = Image.frombytes(mode, size, data) - if color_space == "/Indexed": - from .generic import ByteStringObject - - if isinstance(lookup, ByteStringObject): - if base == ColorSpaces.DEVICE_GRAY and len(lookup) == hival + 1: - lookup = b"".join( - [lookup[i : i + 1] * 3 for i in range(len(lookup))] - ) - img.putpalette(lookup) - else: - img.putpalette(lookup.get_data()) - img = img.convert("L" if base == ColorSpaces.DEVICE_GRAY else "RGB") - if G.S_MASK in x_object_obj: # add alpha channel - alpha = Image.frombytes("L", size, x_object_obj[G.S_MASK].get_data()) - img.putalpha(alpha) - img_byte_arr = BytesIO() - img.save(img_byte_arr, format="PNG") - data = img_byte_arr.getvalue() - elif x_object_obj[SA.FILTER] in ( - [FT.LZW_DECODE], - [FT.ASCII_85_DECODE], - [FT.CCITT_FAX_DECODE], - ): - # I'm not sure if the following logic is correct. 
- # There might not be any relationship between the filters and the - # extension - if x_object_obj[SA.FILTER] in [[FT.LZW_DECODE], [FT.CCITT_FAX_DECODE]]: - extension = ".tiff" # mime_type = "image/tiff" - else: - extension = ".png" # mime_type = "image/png" - data = b_(data) - elif x_object_obj[SA.FILTER] == FT.DCT_DECODE: - extension = ".jpg" # mime_type = "image/jpeg" - elif x_object_obj[SA.FILTER] == "/JPXDecode": - extension = ".jp2" # mime_type = "image/x-jp2" - elif x_object_obj[SA.FILTER] == FT.CCITT_FAX_DECODE: - extension = ".tiff" # mime_type = "image/tiff" - else: - extension = ".png" # mime_type = "image/png" - img = Image.frombytes(mode, size, data) - img_byte_arr = BytesIO() - img.save(img_byte_arr, format="PNG") - data = img_byte_arr.getvalue() - - return extension, data diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__init__.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__init__.py deleted file mode 100644 index 5f0b16dd..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__init__.py +++ /dev/null @@ -1,144 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. 
-# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -"""Implementation of generic PDF objects (dictionary, number, string, ...).""" -__author__ = "Mathieu Fenniak" -__author_email__ = "biziqe@mathieu.fenniak.net" - -from typing import Dict, List, Union - -from .._utils import StreamType, deprecate_with_replacement -from ..constants import OutlineFontFlag -from ._annotations import AnnotationBuilder -from ._base import ( - BooleanObject, - ByteStringObject, - FloatObject, - IndirectObject, - NameObject, - NullObject, - NumberObject, - PdfObject, - TextStringObject, - encode_pdfdocencoding, -) -from ._data_structures import ( - ArrayObject, - ContentStream, - DecodedStreamObject, - Destination, - DictionaryObject, - EncodedStreamObject, - Field, - StreamObject, - TreeObject, - read_object, -) -from ._fit import Fit -from ._outline import Bookmark, OutlineItem -from ._rectangle import RectangleObject -from ._utils import ( - create_string_object, - decode_pdfdocencoding, - hex_to_rgb, - read_hex_string_from_stream, - read_string_from_stream, -) - - -def readHexStringFromStream( - stream: StreamType, -) -> Union["TextStringObject", "ByteStringObject"]: # pragma: no cover - deprecate_with_replacement( - "readHexStringFromStream", 
"read_hex_string_from_stream", "4.0.0" - ) - return read_hex_string_from_stream(stream) - - -def readStringFromStream( - stream: StreamType, - forced_encoding: Union[None, str, List[str], Dict[int, str]] = None, -) -> Union["TextStringObject", "ByteStringObject"]: # pragma: no cover - deprecate_with_replacement( - "readStringFromStream", "read_string_from_stream", "4.0.0" - ) - return read_string_from_stream(stream, forced_encoding) - - -def createStringObject( - string: Union[str, bytes], - forced_encoding: Union[None, str, List[str], Dict[int, str]] = None, -) -> Union[TextStringObject, ByteStringObject]: # pragma: no cover - deprecate_with_replacement("createStringObject", "create_string_object", "4.0.0") - return create_string_object(string, forced_encoding) - - -PAGE_FIT = Fit.fit() - - -__all__ = [ - # Base types - "BooleanObject", - "FloatObject", - "NumberObject", - "NameObject", - "IndirectObject", - "NullObject", - "PdfObject", - "TextStringObject", - "ByteStringObject", - # Annotations - "AnnotationBuilder", - # Fit - "Fit", - "PAGE_FIT", - # Data structures - "ArrayObject", - "DictionaryObject", - "TreeObject", - "StreamObject", - "DecodedStreamObject", - "EncodedStreamObject", - "ContentStream", - "RectangleObject", - "Field", - "Destination", - # --- More specific stuff - # Outline - "OutlineItem", - "OutlineFontFlag", - "Bookmark", - # Data structures core functions - "read_object", - # Utility functions - "create_string_object", - "encode_pdfdocencoding", - "decode_pdfdocencoding", - "hex_to_rgb", - "read_hex_string_from_stream", - "read_string_from_stream", -] diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 66fa852d..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git 
a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_annotations.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_annotations.cpython-312.pyc deleted file mode 100644 index 52b0f0e4..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_annotations.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_base.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_base.cpython-312.pyc deleted file mode 100644 index 0de6d49b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_base.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_data_structures.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_data_structures.cpython-312.pyc deleted file mode 100644 index 30a1039b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_data_structures.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_fit.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_fit.cpython-312.pyc deleted file mode 100644 index 5b533ccb..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_fit.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_outline.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_outline.cpython-312.pyc deleted file mode 100644 index 4ac2e57e..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_outline.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_rectangle.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_rectangle.cpython-312.pyc deleted file mode 100644 index 66df412e..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_rectangle.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_utils.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_utils.cpython-312.pyc deleted file mode 100644 index c6fc8622..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/__pycache__/_utils.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_annotations.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_annotations.py deleted file mode 100644 index bb46dd90..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_annotations.py +++ /dev/null @@ -1,275 +0,0 @@ -from typing import Optional, Tuple, Union - -from ._base import ( - BooleanObject, - FloatObject, - NameObject, - NumberObject, - TextStringObject, -) -from ._data_structures import ArrayObject, DictionaryObject -from ._fit import DEFAULT_FIT, Fit -from ._rectangle import RectangleObject -from ._utils import hex_to_rgb - - -class AnnotationBuilder: - """ - The AnnotationBuilder creates dictionaries representing PDF annotations. - - Those dictionaries can be modified before they are added to a PdfWriter - instance via `writer.add_annotation`. - - See `adding PDF annotations <../user/adding-pdf-annotations.html>`_ for - it's usage combined with PdfWriter. - """ - - from ..types import FitType, ZoomArgType - - @staticmethod - def text( - rect: Union[RectangleObject, Tuple[float, float, float, float]], - text: str, - open: bool = False, - flags: int = 0, - ) -> DictionaryObject: - """ - Add text annotation. 
- - :param Tuple[int, int, int, int] rect: - or array of four integers specifying the clickable rectangular area - ``[xLL, yLL, xUR, yUR]`` - :param bool open: - :param int flags: - """ - # TABLE 8.23 Additional entries specific to a text annotation - text_obj = DictionaryObject( - { - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/Text"), - NameObject("/Rect"): RectangleObject(rect), - NameObject("/Contents"): TextStringObject(text), - NameObject("/Open"): BooleanObject(open), - NameObject("/Flags"): NumberObject(flags), - } - ) - return text_obj - - @staticmethod - def free_text( - text: str, - rect: Union[RectangleObject, Tuple[float, float, float, float]], - font: str = "Helvetica", - bold: bool = False, - italic: bool = False, - font_size: str = "14pt", - font_color: str = "000000", - border_color: str = "000000", - background_color: str = "ffffff", - ) -> DictionaryObject: - """ - Add text in a rectangle to a page. - - :param str text: Text to be added - :param RectangleObject rect: or array of four integers - specifying the clickable rectangular area ``[xLL, yLL, xUR, yUR]`` - :param str font: Name of the Font, e.g. 'Helvetica' - :param bool bold: Print the text in bold - :param bool italic: Print the text in italic - :param str font_size: How big the text will be, e.g. 
'14pt' - :param str font_color: Hex-string for the color - :param str border_color: Hex-string for the border color - :param str background_color: Hex-string for the background of the annotation - """ - font_str = "font: " - if bold is True: - font_str = font_str + "bold " - if italic is True: - font_str = font_str + "italic " - font_str = font_str + font + " " + font_size - font_str = font_str + ";text-align:left;color:#" + font_color - - bg_color_str = "" - for st in hex_to_rgb(border_color): - bg_color_str = bg_color_str + str(st) + " " - bg_color_str = bg_color_str + "rg" - - free_text = DictionaryObject() - free_text.update( - { - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/FreeText"), - NameObject("/Rect"): RectangleObject(rect), - NameObject("/Contents"): TextStringObject(text), - # font size color - NameObject("/DS"): TextStringObject(font_str), - # border color - NameObject("/DA"): TextStringObject(bg_color_str), - # background color - NameObject("/C"): ArrayObject( - [FloatObject(n) for n in hex_to_rgb(background_color)] - ), - } - ) - return free_text - - @staticmethod - def line( - p1: Tuple[float, float], - p2: Tuple[float, float], - rect: Union[RectangleObject, Tuple[float, float, float, float]], - text: str = "", - title_bar: str = "", - ) -> DictionaryObject: - """ - Draw a line on the PDF. 
- - :param Tuple[float, float] p1: First point - :param Tuple[float, float] p2: Second point - :param RectangleObject rect: or array of four - integers specifying the clickable rectangular area - ``[xLL, yLL, xUR, yUR]`` - :param str text: Text to be displayed as the line annotation - :param str title_bar: Text to be displayed in the title bar of the - annotation; by convention this is the name of the author - """ - line_obj = DictionaryObject( - { - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/Line"), - NameObject("/Rect"): RectangleObject(rect), - NameObject("/T"): TextStringObject(title_bar), - NameObject("/L"): ArrayObject( - [ - FloatObject(p1[0]), - FloatObject(p1[1]), - FloatObject(p2[0]), - FloatObject(p2[1]), - ] - ), - NameObject("/LE"): ArrayObject( - [ - NameObject(None), - NameObject(None), - ] - ), - NameObject("/IC"): ArrayObject( - [ - FloatObject(0.5), - FloatObject(0.5), - FloatObject(0.5), - ] - ), - NameObject("/Contents"): TextStringObject(text), - } - ) - return line_obj - - @staticmethod - def rectangle( - rect: Union[RectangleObject, Tuple[float, float, float, float]], - interiour_color: Optional[str] = None, - ) -> DictionaryObject: - """ - Draw a rectangle on the PDF. 
- - :param RectangleObject rect: or array of four - integers specifying the clickable rectangular area - ``[xLL, yLL, xUR, yUR]`` - """ - square_obj = DictionaryObject( - { - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/Square"), - NameObject("/Rect"): RectangleObject(rect), - } - ) - - if interiour_color: - square_obj[NameObject("/IC")] = ArrayObject( - [FloatObject(n) for n in hex_to_rgb(interiour_color)] - ) - - return square_obj - - @staticmethod - def link( - rect: Union[RectangleObject, Tuple[float, float, float, float]], - border: Optional[ArrayObject] = None, - url: Optional[str] = None, - target_page_index: Optional[int] = None, - fit: Fit = DEFAULT_FIT, - ) -> DictionaryObject: - """ - Add a link to the document. - - The link can either be an external link or an internal link. - - An external link requires the URL parameter. - An internal link requires the target_page_index, fit, and fit args. - - - :param RectangleObject rect: or array of four - integers specifying the clickable rectangular area - ``[xLL, yLL, xUR, yUR]`` - :param border: if provided, an array describing border-drawing - properties. See the PDF spec for details. No border will be - drawn if this argument is omitted. - - horizontal corner radius, - - vertical corner radius, and - - border width - - Optionally: Dash - :param str url: Link to a website (if you want to make an external link) - :param int target_page_index: index of the page to which the link should go - (if you want to make an internal link) - :param Fit fit: Page fit or 'zoom' option. - """ - from ..types import BorderArrayType - - is_external = url is not None - is_internal = target_page_index is not None - if not is_external and not is_internal: - raise ValueError( - "Either 'url' or 'target_page_index' have to be provided. Both were None." - ) - if is_external and is_internal: - raise ValueError( - f"Either 'url' or 'target_page_index' have to be provided. 
url={url}, target_page_index={target_page_index}" - ) - - border_arr: BorderArrayType - if border is not None: - border_arr = [NameObject(n) for n in border[:3]] - if len(border) == 4: - dash_pattern = ArrayObject([NameObject(n) for n in border[3]]) - border_arr.append(dash_pattern) - else: - border_arr = [NumberObject(0)] * 3 - - link_obj = DictionaryObject( - { - NameObject("/Type"): NameObject("/Annot"), - NameObject("/Subtype"): NameObject("/Link"), - NameObject("/Rect"): RectangleObject(rect), - NameObject("/Border"): ArrayObject(border_arr), - } - ) - if is_external: - link_obj[NameObject("/A")] = DictionaryObject( - { - NameObject("/S"): NameObject("/URI"), - NameObject("/Type"): NameObject("/Action"), - NameObject("/URI"): TextStringObject(url), - } - ) - if is_internal: - # This needs to be updated later! - dest_deferred = DictionaryObject( - { - "target_page_index": NumberObject(target_page_index), - "fit": NameObject(fit.fit_type), - "fit_args": fit.fit_args, - } - ) - link_obj[NameObject("/Dest")] = dest_deferred - return link_obj diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_base.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_base.py deleted file mode 100644 index 00b9c17b..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_base.py +++ /dev/null @@ -1,648 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. 
-# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. - -import codecs -import decimal -import hashlib -import re -from binascii import unhexlify -from typing import Any, Callable, List, Optional, Tuple, Union, cast - -from .._codecs import _pdfdoc_encoding_rev -from .._protocols import PdfObjectProtocol, PdfWriterProtocol -from .._utils import ( - StreamType, - b_, - deprecation_with_replacement, - hex_str, - hexencode, - logger_warning, - read_non_whitespace, - read_until_regex, - str_, -) -from ..errors import STREAM_TRUNCATED_PREMATURELY, PdfReadError, PdfStreamError - -__author__ = "Mathieu Fenniak" -__author_email__ = "biziqe@mathieu.fenniak.net" - - -class PdfObject(PdfObjectProtocol): - # function for calculating a hash value - hash_func: Callable[..., "hashlib._Hash"] = hashlib.sha1 - indirect_reference: Optional["IndirectObject"] - - def hash_value_data(self) -> bytes: - return ("%s" % self).encode() - - def hash_value(self) -> bytes: - return ( - "%s:%s" - % ( - self.__class__.__name__, - self.hash_func(self.hash_value_data()).hexdigest(), - ) - ).encode() - - def clone( - self, - 
pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "PdfObject": - """ - clone object into pdf_dest (PdfWriterProtocol which is an interface for PdfWriter) - force_duplicate: in standard if the object has been already cloned and reference, - the copy is returned; when force_duplicate == True, a new copy is always performed - ignore_fields : list/tuple of Fields names (for dictionaries that will be ignored during cloning (apply also to childs duplication) - in standard, clone function call _reference_clone (see _reference) - """ - raise Exception("clone PdfObject") - - def _reference_clone( - self, clone: Any, pdf_dest: PdfWriterProtocol - ) -> PdfObjectProtocol: - """ - reference the object within the _objects of pdf_dest only if - indirect_reference attribute exists (which means the objects - was already identified in xref/xobjstm) - if object has been already referenced do nothing - """ - try: - if clone.indirect_reference.pdf == pdf_dest: - return clone - except Exception: - pass - if hasattr(self, "indirect_reference"): - ind = self.indirect_reference - i = len(pdf_dest._objects) + 1 - if ind is not None: - if id(ind.pdf) not in pdf_dest._id_translated: - pdf_dest._id_translated[id(ind.pdf)] = {} - if ind.idnum in pdf_dest._id_translated[id(ind.pdf)]: - obj = pdf_dest.get_object( - pdf_dest._id_translated[id(ind.pdf)][ind.idnum] - ) - assert obj is not None - return obj - pdf_dest._id_translated[id(ind.pdf)][ind.idnum] = i - pdf_dest._objects.append(clone) - clone.indirect_reference = IndirectObject(i, 0, pdf_dest) - return clone - - def get_object(self) -> Optional["PdfObject"]: - """Resolve indirect references.""" - return self - - def getObject(self) -> Optional["PdfObject"]: # pragma: no cover - deprecation_with_replacement("getObject", "get_object", "3.0.0") - return self.get_object() - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - 
) -> None: - raise NotImplementedError - - -class NullObject(PdfObject): - def clone( - self, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "NullObject": - """clone object into pdf_dest""" - return cast("NullObject", self._reference_clone(NullObject(), pdf_dest)) - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b"null") - - @staticmethod - def read_from_stream(stream: StreamType) -> "NullObject": - nulltxt = stream.read(4) - if nulltxt != b"null": - raise PdfReadError("Could not read Null object") - return NullObject() - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - def __repr__(self) -> str: - return "NullObject" - - @staticmethod - def readFromStream(stream: StreamType) -> "NullObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return NullObject.read_from_stream(stream) - - -class BooleanObject(PdfObject): - def __init__(self, value: Any) -> None: - self.value = value - - def clone( - self, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "BooleanObject": - """clone object into pdf_dest""" - return cast( - "BooleanObject", self._reference_clone(BooleanObject(self.value), pdf_dest) - ) - - def __eq__(self, __o: object) -> bool: - if isinstance(__o, BooleanObject): - return self.value == __o.value - elif isinstance(__o, bool): - return self.value == __o - else: - return False - - def __repr__(self) -> str: - return "True" if self.value else "False" - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - if self.value: - 
stream.write(b"true") - else: - stream.write(b"false") - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - @staticmethod - def read_from_stream(stream: StreamType) -> "BooleanObject": - word = stream.read(4) - if word == b"true": - return BooleanObject(True) - elif word == b"fals": - stream.read(1) - return BooleanObject(False) - else: - raise PdfReadError("Could not read Boolean object") - - @staticmethod - def readFromStream(stream: StreamType) -> "BooleanObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return BooleanObject.read_from_stream(stream) - - -class IndirectObject(PdfObject): - def __init__(self, idnum: int, generation: int, pdf: Any) -> None: # PdfReader - self.idnum = idnum - self.generation = generation - self.pdf = pdf - - def clone( - self, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "IndirectObject": - """clone object into pdf_dest""" - if self.pdf == pdf_dest and not force_duplicate: - # Already duplicated and no extra duplication required - return self - if id(self.pdf) not in pdf_dest._id_translated: - pdf_dest._id_translated[id(self.pdf)] = {} - - if not force_duplicate and self.idnum in pdf_dest._id_translated[id(self.pdf)]: - dup = pdf_dest.get_object(pdf_dest._id_translated[id(self.pdf)][self.idnum]) - else: - obj = self.get_object() - assert obj is not None - dup = obj.clone(pdf_dest, force_duplicate, ignore_fields) - assert dup is not None - assert dup.indirect_reference is not None - return dup.indirect_reference - - @property - def indirect_reference(self) -> "IndirectObject": # type: ignore[override] - return self - - def get_object(self) -> Optional["PdfObject"]: - obj = 
self.pdf.get_object(self) - if obj is None: - return None - return obj.get_object() - - def __repr__(self) -> str: - return f"IndirectObject({self.idnum!r}, {self.generation!r}, {id(self.pdf)})" - - def __eq__(self, other: Any) -> bool: - return ( - other is not None - and isinstance(other, IndirectObject) - and self.idnum == other.idnum - and self.generation == other.generation - and self.pdf is other.pdf - ) - - def __ne__(self, other: Any) -> bool: - return not self.__eq__(other) - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b_(f"{self.idnum} {self.generation} R")) - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - @staticmethod - def read_from_stream(stream: StreamType, pdf: Any) -> "IndirectObject": # PdfReader - idnum = b"" - while True: - tok = stream.read(1) - if not tok: - raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY) - if tok.isspace(): - break - idnum += tok - generation = b"" - while True: - tok = stream.read(1) - if not tok: - raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY) - if tok.isspace(): - if not generation: - continue - break - generation += tok - r = read_non_whitespace(stream) - if r != b"R": - raise PdfReadError( - f"Error reading indirect object reference at byte {hex_str(stream.tell())}" - ) - return IndirectObject(int(idnum), int(generation), pdf) - - @staticmethod - def readFromStream( - stream: StreamType, pdf: Any # PdfReader - ) -> "IndirectObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return IndirectObject.read_from_stream(stream, pdf) - - -class FloatObject(decimal.Decimal, PdfObject): - def __new__( - cls, value: Union[str, Any] = "0", context: Optional[Any] = None - ) -> "FloatObject": - try: - 
return decimal.Decimal.__new__(cls, str_(value), context) - except Exception: - # If this isn't a valid decimal (happens in malformed PDFs) - # fallback to 0 - logger_warning(f"FloatObject ({value}) invalid; use 0.0 instead", __name__) - return decimal.Decimal.__new__(cls, "0.0") - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "FloatObject": - """clone object into pdf_dest""" - return cast("FloatObject", self._reference_clone(FloatObject(self), pdf_dest)) - - def __repr__(self) -> str: - if self == self.to_integral(): - # If this is an integer, format it with no decimal place. - return str(self.quantize(decimal.Decimal(1))) - else: - # Otherwise, format it with a decimal place, taking care to - # remove any extraneous trailing zeros. - return f"{self:f}".rstrip("0") - - def as_numeric(self) -> float: - return float(repr(self).encode("utf8")) - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(repr(self).encode("utf8")) - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - -class NumberObject(int, PdfObject): - NumberPattern = re.compile(b"[^+-.0-9]") - - def __new__(cls, value: Any) -> "NumberObject": - try: - return int.__new__(cls, int(value)) - except ValueError: - logger_warning(f"NumberObject({value}) invalid; use 0 instead", __name__) - return int.__new__(cls, 0) - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "NumberObject": - """clone object into pdf_dest""" - return cast("NumberObject", self._reference_clone(NumberObject(self), pdf_dest)) - - def as_numeric(self) -> int: - return int(repr(self).encode("utf8")) 
- - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(repr(self).encode("utf8")) - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - @staticmethod - def read_from_stream(stream: StreamType) -> Union["NumberObject", "FloatObject"]: - num = read_until_regex(stream, NumberObject.NumberPattern) - if num.find(b".") != -1: - return FloatObject(num) - return NumberObject(num) - - @staticmethod - def readFromStream( - stream: StreamType, - ) -> Union["NumberObject", "FloatObject"]: # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return NumberObject.read_from_stream(stream) - - -class ByteStringObject(bytes, PdfObject): - """ - Represents a string object where the text encoding could not be determined. - This occurs quite often, as the PDF spec doesn't provide an alternate way to - represent strings -- for example, the encryption data stored in files (like - /O) is clearly not text, but is still stored in a "String" object. 
- """ - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "ByteStringObject": - """clone object into pdf_dest""" - return cast( - "ByteStringObject", - self._reference_clone(ByteStringObject(bytes(self)), pdf_dest), - ) - - @property - def original_bytes(self) -> bytes: - """For compatibility with TextStringObject.original_bytes.""" - return self - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - bytearr = self - if encryption_key: - from .._security import RC4_encrypt - - bytearr = RC4_encrypt(encryption_key, bytearr) # type: ignore - stream.write(b"<") - stream.write(hexencode(bytearr)) - stream.write(b">") - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - -class TextStringObject(str, PdfObject): - """ - Represents a string object that has been decoded into a real unicode string. - If read from a PDF document, this string appeared to match the - PDFDocEncoding, or contained a UTF-16BE BOM mark to cause UTF-16 decoding to - occur. 
- """ - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "TextStringObject": - """clone object into pdf_dest""" - obj = TextStringObject(self) - obj.autodetect_pdfdocencoding = self.autodetect_pdfdocencoding - obj.autodetect_utf16 = self.autodetect_utf16 - return cast("TextStringObject", self._reference_clone(obj, pdf_dest)) - - autodetect_pdfdocencoding = False - autodetect_utf16 = False - - @property - def original_bytes(self) -> bytes: - """ - It is occasionally possible that a text string object gets created where - a byte string object was expected due to the autodetection mechanism -- - if that occurs, this "original_bytes" property can be used to - back-calculate what the original encoded bytes were. - """ - return self.get_original_bytes() - - def get_original_bytes(self) -> bytes: - # We're a text string object, but the library is trying to get our raw - # bytes. This can happen if we auto-detected this string as text, but - # we were wrong. It's pretty common. Return the original bytes that - # would have been used to create this object, based upon the autodetect - # method. - if self.autodetect_utf16: - return codecs.BOM_UTF16_BE + self.encode("utf-16be") - elif self.autodetect_pdfdocencoding: - return encode_pdfdocencoding(self) - else: - raise Exception("no information about original bytes") - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - # Try to write the string out as a PDFDocEncoding encoded string. It's - # nicer to look at in the PDF file. Sadly, we take a performance hit - # here for trying... 
- try: - bytearr = encode_pdfdocencoding(self) - except UnicodeEncodeError: - bytearr = codecs.BOM_UTF16_BE + self.encode("utf-16be") - if encryption_key: - from .._security import RC4_encrypt - - bytearr = RC4_encrypt(encryption_key, bytearr) - obj = ByteStringObject(bytearr) - obj.write_to_stream(stream, None) - else: - stream.write(b"(") - for c in bytearr: - if not chr(c).isalnum() and c != b" ": - # This: - # stream.write(b_(rf"\{c:0>3o}")) - # gives - # https://github.com/davidhalter/parso/issues/207 - stream.write(b_("\\%03o" % c)) - else: - stream.write(b_(chr(c))) - stream.write(b")") - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - -class NameObject(str, PdfObject): - delimiter_pattern = re.compile(rb"\s+|[\(\)<>\[\]{}/%]") - surfix = b"/" - renumber_table = { - "#": b"#23", - "(": b"#28", - ")": b"#29", - "/": b"#2F", - **{chr(i): f"#{i:02X}".encode() for i in range(33)}, - } - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "NameObject": - """clone object into pdf_dest""" - return cast("NameObject", self._reference_clone(NameObject(self), pdf_dest)) - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(self.renumber()) # b_(renumber(self))) - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - def renumber(self) -> bytes: - out = self[0].encode("utf-8") - if out != b"/": - logger_warning(f"Incorrect first char in NameObject:({self})", __name__) - for c in self[1:]: - if c > "~": - for x in 
c.encode("utf-8"): - out += f"#{x:02X}".encode() - else: - try: - out += self.renumber_table[c] - except KeyError: - out += c.encode("utf-8") - return out - - @staticmethod - def unnumber(sin: bytes) -> bytes: - i = sin.find(b"#", 0) - while i >= 0: - try: - sin = sin[:i] + unhexlify(sin[i + 1 : i + 3]) + sin[i + 3 :] - i = sin.find(b"#", i + 1) - except ValueError: - # if the 2 characters after # can not be converted to hexa - # we change nothing and carry on - i = i + 1 - return sin - - @staticmethod - def read_from_stream(stream: StreamType, pdf: Any) -> "NameObject": # PdfReader - name = stream.read(1) - if name != NameObject.surfix: - raise PdfReadError("name read error") - name += read_until_regex(stream, NameObject.delimiter_pattern, ignore_eof=True) - try: - # Name objects should represent irregular characters - # with a '#' followed by the symbol's hex number - name = NameObject.unnumber(name) - for enc in ("utf-8", "gbk"): - try: - ret = name.decode(enc) - return NameObject(ret) - except Exception: - pass - raise UnicodeDecodeError("", name, 0, 0, "Code Not Found") - except (UnicodeEncodeError, UnicodeDecodeError) as e: - if not pdf.strict: - logger_warning( - f"Illegal character in Name Object ({repr(name)})", __name__ - ) - return NameObject(name.decode("charmap")) - else: - raise PdfReadError( - f"Illegal character in Name Object ({repr(name)})" - ) from e - - @staticmethod - def readFromStream( - stream: StreamType, pdf: Any # PdfReader - ) -> "NameObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return NameObject.read_from_stream(stream, pdf) - - -def encode_pdfdocencoding(unicode_string: str) -> bytes: - retval = b"" - for c in unicode_string: - try: - retval += b_(chr(_pdfdoc_encoding_rev[c])) - except KeyError: - raise UnicodeEncodeError( - "pdfdocencoding", c, -1, -1, "does not exist in translation table" - ) - return retval diff --git 
a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_data_structures.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_data_structures.py deleted file mode 100644 index 19f5be9f..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_data_structures.py +++ /dev/null @@ -1,1382 +0,0 @@ -# Copyright (c) 2006, Mathieu Fenniak -# All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright notice, -# this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright notice, -# this list of conditions and the following disclaimer in the documentation -# and/or other materials provided with the distribution. -# * The name of the author may not be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE -# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE -# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR -# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF -# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS -# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) -# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE -# POSSIBILITY OF SUCH DAMAGE. 
- - -__author__ = "Mathieu Fenniak" -__author_email__ = "biziqe@mathieu.fenniak.net" - -import logging -import re -from io import BytesIO -from typing import Any, Dict, Iterable, List, Optional, Tuple, Union, cast - -from .._protocols import PdfWriterProtocol -from .._utils import ( - WHITESPACES, - StreamType, - b_, - deprecate_with_replacement, - deprecation_with_replacement, - hex_str, - logger_warning, - read_non_whitespace, - read_until_regex, - skip_over_comment, -) -from ..constants import ( - CheckboxRadioButtonAttributes, - FieldDictionaryAttributes, -) -from ..constants import FilterTypes as FT -from ..constants import OutlineFontFlag -from ..constants import StreamAttributes as SA -from ..constants import TypArguments as TA -from ..constants import TypFitArguments as TF -from ..errors import STREAM_TRUNCATED_PREMATURELY, PdfReadError, PdfStreamError -from ._base import ( - BooleanObject, - FloatObject, - IndirectObject, - NameObject, - NullObject, - NumberObject, - PdfObject, - TextStringObject, -) -from ._fit import Fit -from ._utils import read_hex_string_from_stream, read_string_from_stream - -logger = logging.getLogger(__name__) -NumberSigns = b"+-" -IndirectPattern = re.compile(rb"[+-]?(\d+)\s+(\d+)\s+R[^a-zA-Z]") - - -class ArrayObject(list, PdfObject): - def clone( - self, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "ArrayObject": - """clone object into pdf_dest""" - try: - if self.indirect_reference.pdf == pdf_dest and not force_duplicate: # type: ignore - return self - except Exception: - pass - arr = cast("ArrayObject", self._reference_clone(ArrayObject(), pdf_dest)) - for data in self: - if isinstance(data, StreamObject): - # if not hasattr(data, "indirect_reference"): - # data.indirect_reference = None - dup = data._reference_clone( - data.clone(pdf_dest, force_duplicate, ignore_fields), pdf_dest - ) - arr.append(dup.indirect_reference) - elif 
hasattr(data, "clone"): - arr.append(data.clone(pdf_dest, force_duplicate, ignore_fields)) - else: - arr.append(data) - return cast("ArrayObject", arr) - - def items(self) -> Iterable[Any]: - """ - Emulate DictionaryObject.items for a list - (index, object) - """ - return enumerate(self) - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b"[") - for data in self: - stream.write(b" ") - data.write_to_stream(stream, encryption_key) - stream.write(b" ]") - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - @staticmethod - def read_from_stream( - stream: StreamType, - pdf: Any, - forced_encoding: Union[None, str, List[str], Dict[int, str]] = None, - ) -> "ArrayObject": # PdfReader - arr = ArrayObject() - tmp = stream.read(1) - if tmp != b"[": - raise PdfReadError("Could not read array") - while True: - # skip leading whitespace - tok = stream.read(1) - while tok.isspace(): - tok = stream.read(1) - stream.seek(-1, 1) - # check for array ending - peekahead = stream.read(1) - if peekahead == b"]": - break - stream.seek(-1, 1) - # read and append obj - arr.append(read_object(stream, pdf, forced_encoding)) - return arr - - @staticmethod - def readFromStream( - stream: StreamType, pdf: Any # PdfReader - ) -> "ArrayObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return ArrayObject.read_from_stream(stream, pdf) - - -class DictionaryObject(dict, PdfObject): - def clone( - self, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "DictionaryObject": - """clone object into pdf_dest""" - try: - if self.indirect_reference.pdf == pdf_dest and not force_duplicate: # type: ignore - 
return self - except Exception: - pass - - d__ = cast( - "DictionaryObject", self._reference_clone(self.__class__(), pdf_dest) - ) - if ignore_fields is None: - ignore_fields = [] - if len(d__.keys()) == 0: - d__._clone(self, pdf_dest, force_duplicate, ignore_fields) - return d__ - - def _clone( - self, - src: "DictionaryObject", - pdf_dest: PdfWriterProtocol, - force_duplicate: bool, - ignore_fields: Union[Tuple[str, ...], List[str]], - ) -> None: - """update the object from src""" - # First check if this is a chain list, we need to loop to prevent recur - if ( - ("/Next" not in ignore_fields and "/Next" in src) - or ("/Prev" not in ignore_fields and "/Prev" in src) - ) or ( - ("/N" not in ignore_fields and "/N" in src) - or ("/V" not in ignore_fields and "/V" in src) - ): - ignore_fields = list(ignore_fields) - for lst in (("/Next", "/Prev"), ("/N", "/V")): - for k in lst: - objs = [] - if ( - k in src - and k not in self - and isinstance(src.raw_get(k), IndirectObject) - ): - cur_obj: Optional["DictionaryObject"] = cast( - "DictionaryObject", src[k] - ) - prev_obj: Optional["DictionaryObject"] = self - while cur_obj is not None: - clon = cast( - "DictionaryObject", - cur_obj._reference_clone(cur_obj.__class__(), pdf_dest), - ) - objs.append((cur_obj, clon)) - assert prev_obj is not None - prev_obj[NameObject(k)] = clon.indirect_reference - prev_obj = clon - try: - if cur_obj == src: - cur_obj = None - else: - cur_obj = cast("DictionaryObject", cur_obj[k]) - except Exception: - cur_obj = None - for (s, c) in objs: - c._clone(s, pdf_dest, force_duplicate, ignore_fields + [k]) - - for k, v in src.items(): - if k not in ignore_fields: - if isinstance(v, StreamObject): - if not hasattr(v, "indirect_reference"): - v.indirect_reference = None - vv = v.clone(pdf_dest, force_duplicate, ignore_fields) - assert vv.indirect_reference is not None - self[k.clone(pdf_dest)] = vv.indirect_reference # type: ignore[attr-defined] - else: - if k not in self: - self[NameObject(k)] = 
( - v.clone(pdf_dest, force_duplicate, ignore_fields) - if hasattr(v, "clone") - else v - ) - - def raw_get(self, key: Any) -> Any: - return dict.__getitem__(self, key) - - def __setitem__(self, key: Any, value: Any) -> Any: - if not isinstance(key, PdfObject): - raise ValueError("key must be PdfObject") - if not isinstance(value, PdfObject): - raise ValueError("value must be PdfObject") - return dict.__setitem__(self, key, value) - - def setdefault(self, key: Any, value: Optional[Any] = None) -> Any: - if not isinstance(key, PdfObject): - raise ValueError("key must be PdfObject") - if not isinstance(value, PdfObject): - raise ValueError("value must be PdfObject") - return dict.setdefault(self, key, value) # type: ignore - - def __getitem__(self, key: Any) -> PdfObject: - return dict.__getitem__(self, key).get_object() - - @property - def xmp_metadata(self) -> Optional[PdfObject]: - """ - Retrieve XMP (Extensible Metadata Platform) data relevant to the - this object, if available. - - Stability: Added in v1.12, will exist for all future v1.x releases. - @return Returns a {@link #xmp.XmpInformation XmlInformation} instance - that can be used to access XMP metadata from the document. Can also - return None if no metadata was found on the document root. - """ - from ..xmp import XmpInformation - - metadata = self.get("/Metadata", None) - if metadata is None: - return None - metadata = metadata.get_object() - - if not isinstance(metadata, XmpInformation): - metadata = XmpInformation(metadata) - self[NameObject("/Metadata")] = metadata - return metadata - - def getXmpMetadata( - self, - ) -> Optional[PdfObject]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :meth:`xmp_metadata` instead. - """ - deprecation_with_replacement("getXmpMetadata", "xmp_metadata", "3.0.0") - return self.xmp_metadata - - @property - def xmpMetadata(self) -> Optional[PdfObject]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :meth:`xmp_metadata` instead. 
- """ - deprecation_with_replacement("xmpMetadata", "xmp_metadata", "3.0.0") - return self.xmp_metadata - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b"<<\n") - for key, value in list(self.items()): - key.write_to_stream(stream, encryption_key) - stream.write(b" ") - value.write_to_stream(stream, encryption_key) - stream.write(b"\n") - stream.write(b">>") - - def writeToStream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: # pragma: no cover - deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0") - self.write_to_stream(stream, encryption_key) - - @staticmethod - def read_from_stream( - stream: StreamType, - pdf: Any, # PdfReader - forced_encoding: Union[None, str, List[str], Dict[int, str]] = None, - ) -> "DictionaryObject": - def get_next_obj_pos( - p: int, p1: int, rem_gens: List[int], pdf: Any - ) -> int: # PdfReader - l = pdf.xref[rem_gens[0]] - for o in l: - if p1 > l[o] and p < l[o]: - p1 = l[o] - if len(rem_gens) == 1: - return p1 - else: - return get_next_obj_pos(p, p1, rem_gens[1:], pdf) - - def read_unsized_from_steam(stream: StreamType, pdf: Any) -> bytes: # PdfReader - # we are just pointing at beginning of the stream - eon = get_next_obj_pos(stream.tell(), 2**32, list(pdf.xref), pdf) - 1 - curr = stream.tell() - rw = stream.read(eon - stream.tell()) - p = rw.find(b"endstream") - if p < 0: - raise PdfReadError( - f"Unable to find 'endstream' marker for obj starting at {curr}." 
- ) - stream.seek(curr + p + 9) - return rw[: p - 1] - - tmp = stream.read(2) - if tmp != b"<<": - raise PdfReadError( - f"Dictionary read error at byte {hex_str(stream.tell())}: " - "stream must begin with '<<'" - ) - data: Dict[Any, Any] = {} - while True: - tok = read_non_whitespace(stream) - if tok == b"\x00": - continue - elif tok == b"%": - stream.seek(-1, 1) - skip_over_comment(stream) - continue - if not tok: - raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY) - - if tok == b">": - stream.read(1) - break - stream.seek(-1, 1) - try: - key = read_object(stream, pdf) - tok = read_non_whitespace(stream) - stream.seek(-1, 1) - value = read_object(stream, pdf, forced_encoding) - except Exception as exc: - if pdf is not None and pdf.strict: - raise PdfReadError(exc.__repr__()) - logger_warning(exc.__repr__(), __name__) - retval = DictionaryObject() - retval.update(data) - return retval # return partial data - - if not data.get(key): - data[key] = value - else: - # multiple definitions of key not permitted - msg = ( - f"Multiple definitions in dictionary at byte " - f"{hex_str(stream.tell())} for key {key}" - ) - if pdf is not None and pdf.strict: - raise PdfReadError(msg) - logger_warning(msg, __name__) - - pos = stream.tell() - s = read_non_whitespace(stream) - if s == b"s" and stream.read(5) == b"tream": - eol = stream.read(1) - # odd PDF file output has spaces after 'stream' keyword but before EOL. 
- # patch provided by Danial Sandler - while eol == b" ": - eol = stream.read(1) - if eol not in (b"\n", b"\r"): - raise PdfStreamError("Stream data must be followed by a newline") - if eol == b"\r": - # read \n after - if stream.read(1) != b"\n": - stream.seek(-1, 1) - # this is a stream object, not a dictionary - if SA.LENGTH not in data: - raise PdfStreamError("Stream length not defined") - length = data[SA.LENGTH] - if isinstance(length, IndirectObject): - t = stream.tell() - length = pdf.get_object(length) - stream.seek(t, 0) - pstart = stream.tell() - data["__streamdata__"] = stream.read(length) - e = read_non_whitespace(stream) - ndstream = stream.read(8) - if (e + ndstream) != b"endstream": - # (sigh) - the odd PDF file has a length that is too long, so - # we need to read backwards to find the "endstream" ending. - # ReportLab (unknown version) generates files with this bug, - # and Python users into PDF files tend to be our audience. - # we need to do this to correct the streamdata and chop off - # an extra character. - pos = stream.tell() - stream.seek(-10, 1) - end = stream.read(9) - if end == b"endstream": - # we found it by looking back one character further. - data["__streamdata__"] = data["__streamdata__"][:-1] - elif not pdf.strict: - stream.seek(pstart, 0) - data["__streamdata__"] = read_unsized_from_steam(stream, pdf) - pos = stream.tell() - else: - stream.seek(pos, 0) - raise PdfReadError( - "Unable to find 'endstream' marker after stream at byte " - f"{hex_str(stream.tell())} (nd='{ndstream!r}', end='{end!r}')." 
- ) - else: - stream.seek(pos, 0) - if "__streamdata__" in data: - return StreamObject.initialize_from_dictionary(data) - else: - retval = DictionaryObject() - retval.update(data) - return retval - - @staticmethod - def readFromStream( - stream: StreamType, pdf: Any # PdfReader - ) -> "DictionaryObject": # pragma: no cover - deprecation_with_replacement("readFromStream", "read_from_stream", "3.0.0") - return DictionaryObject.read_from_stream(stream, pdf) - - -class TreeObject(DictionaryObject): - def __init__(self) -> None: - DictionaryObject.__init__(self) - - def hasChildren(self) -> bool: # pragma: no cover - deprecate_with_replacement("hasChildren", "has_children", "4.0.0") - return self.has_children() - - def has_children(self) -> bool: - return "/First" in self - - def __iter__(self) -> Any: - return self.children() - - def children(self) -> Iterable[Any]: - if not self.has_children(): - return - - child_ref = self[NameObject("/First")] - child = child_ref.get_object() - while True: - yield child - if child == self[NameObject("/Last")]: - return - child_ref = child.get(NameObject("/Next")) # type: ignore - if child_ref is None: - return - child = child_ref.get_object() - - def addChild(self, child: Any, pdf: Any) -> None: # pragma: no cover - deprecation_with_replacement("addChild", "add_child", "3.0.0") - self.add_child(child, pdf) - - def add_child(self, child: Any, pdf: PdfWriterProtocol) -> None: - self.insert_child(child, None, pdf) - - def insert_child(self, child: Any, before: Any, pdf: PdfWriterProtocol) -> None: - def inc_parent_counter( - parent: Union[None, IndirectObject, TreeObject], n: int - ) -> None: - if parent is None: - return - parent = cast("TreeObject", parent.get_object()) - if "/Count" in parent: - parent[NameObject("/Count")] = NumberObject( - cast(int, parent[NameObject("/Count")]) + n - ) - inc_parent_counter(parent.get("/Parent", None), n) - - child_obj = child.get_object() - child = child.indirect_reference # 
get_reference(child_obj) - # assert isinstance(child, IndirectObject) - - prev: Optional[DictionaryObject] - if "/First" not in self: # no child yet - self[NameObject("/First")] = child - self[NameObject("/Count")] = NumberObject(0) - self[NameObject("/Last")] = child - child_obj[NameObject("/Parent")] = self.indirect_reference - inc_parent_counter(self, child_obj.get("/Count", 1)) - if "/Next" in child_obj: - del child_obj["/Next"] - if "/Prev" in child_obj: - del child_obj["/Prev"] - return - else: - prev = cast("DictionaryObject", self["/Last"]) - - while prev.indirect_reference != before: - if "/Next" in prev: - prev = cast("TreeObject", prev["/Next"]) - else: # append at the end - prev[NameObject("/Next")] = cast("TreeObject", child) - child_obj[NameObject("/Prev")] = prev.indirect_reference - child_obj[NameObject("/Parent")] = self.indirect_reference - if "/Next" in child_obj: - del child_obj["/Next"] - self[NameObject("/Last")] = child - inc_parent_counter(self, child_obj.get("/Count", 1)) - return - try: # insert as first or in the middle - assert isinstance(prev["/Prev"], DictionaryObject) - prev["/Prev"][NameObject("/Next")] = child - child_obj[NameObject("/Prev")] = prev["/Prev"] - except Exception: # it means we are inserting in first position - del child_obj["/Next"] - child_obj[NameObject("/Next")] = prev - prev[NameObject("/Prev")] = child - child_obj[NameObject("/Parent")] = self.indirect_reference - inc_parent_counter(self, child_obj.get("/Count", 1)) - - def removeChild(self, child: Any) -> None: # pragma: no cover - deprecation_with_replacement("removeChild", "remove_child", "3.0.0") - self.remove_child(child) - - def _remove_node_from_tree( - self, prev: Any, prev_ref: Any, cur: Any, last: Any - ) -> None: - """Adjust the pointers of the linked list and tree node count.""" - next_ref = cur.get(NameObject("/Next"), None) - if prev is None: - if next_ref: - # Removing first tree node - next_obj = next_ref.get_object() - del 
next_obj[NameObject("/Prev")] - self[NameObject("/First")] = next_ref - self[NameObject("/Count")] = NumberObject( - self[NameObject("/Count")] - 1 # type: ignore - ) - - else: - # Removing only tree node - assert self[NameObject("/Count")] == 1 - del self[NameObject("/Count")] - del self[NameObject("/First")] - if NameObject("/Last") in self: - del self[NameObject("/Last")] - else: - if next_ref: - # Removing middle tree node - next_obj = next_ref.get_object() - next_obj[NameObject("/Prev")] = prev_ref - prev[NameObject("/Next")] = next_ref - else: - # Removing last tree node - assert cur == last - del prev[NameObject("/Next")] - self[NameObject("/Last")] = prev_ref - self[NameObject("/Count")] = NumberObject(self[NameObject("/Count")] - 1) # type: ignore - - def remove_child(self, child: Any) -> None: - child_obj = child.get_object() - child = child_obj.indirect_reference - - if NameObject("/Parent") not in child_obj: - raise ValueError("Removed child does not appear to be a tree item") - elif child_obj[NameObject("/Parent")] != self: - raise ValueError("Removed child is not a member of this tree") - - found = False - prev_ref = None - prev = None - cur_ref: Optional[Any] = self[NameObject("/First")] - cur: Optional[Dict[str, Any]] = cur_ref.get_object() # type: ignore - last_ref = self[NameObject("/Last")] - last = last_ref.get_object() - while cur is not None: - if cur == child_obj: - self._remove_node_from_tree(prev, prev_ref, cur, last) - found = True - break - - # Go to the next node - prev_ref = cur_ref - prev = cur - if NameObject("/Next") in cur: - cur_ref = cur[NameObject("/Next")] - cur = cur_ref.get_object() - else: - cur_ref = None - cur = None - - if not found: - raise ValueError("Removal couldn't find item in tree") - - _reset_node_tree_relationship(child_obj) - - def remove_from_tree(self) -> None: - """ - remove the object from the tree it is in - """ - if NameObject("/Parent") not in self: - raise ValueError("Removed child does not appear to be a 
tree item") - else: - cast("TreeObject", self["/Parent"]).remove_child(self) - - def emptyTree(self) -> None: # pragma: no cover - deprecate_with_replacement("emptyTree", "empty_tree", "4.0.0") - self.empty_tree() - - def empty_tree(self) -> None: - for child in self: - child_obj = child.get_object() - _reset_node_tree_relationship(child_obj) - - if NameObject("/Count") in self: - del self[NameObject("/Count")] - if NameObject("/First") in self: - del self[NameObject("/First")] - if NameObject("/Last") in self: - del self[NameObject("/Last")] - - -def _reset_node_tree_relationship(child_obj: Any) -> None: - """ - Call this after a node has been removed from a tree. - - This resets the nodes attributes in respect to that tree. - """ - del child_obj[NameObject("/Parent")] - if NameObject("/Next") in child_obj: - del child_obj[NameObject("/Next")] - if NameObject("/Prev") in child_obj: - del child_obj[NameObject("/Prev")] - - -class StreamObject(DictionaryObject): - def __init__(self) -> None: - self.__data: Optional[str] = None - self.decoded_self: Optional["DecodedStreamObject"] = None - - def _clone( - self, - src: DictionaryObject, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool, - ignore_fields: Union[Tuple[str, ...], List[str]], - ) -> None: - """update the object from src""" - self._data = cast("StreamObject", src)._data - try: - decoded_self = cast("StreamObject", src).decoded_self - if decoded_self is None: - self.decoded_self = None - else: - self.decoded_self = decoded_self.clone(pdf_dest, True, ignore_fields) # type: ignore[assignment] - except Exception: - pass - super()._clone(src, pdf_dest, force_duplicate, ignore_fields) - return - - def hash_value_data(self) -> bytes: - data = super().hash_value_data() - data += b_(self._data) - return data - - @property - def decodedSelf(self) -> Optional["DecodedStreamObject"]: # pragma: no cover - deprecation_with_replacement("decodedSelf", "decoded_self", "3.0.0") - return self.decoded_self - - 
@decodedSelf.setter - def decodedSelf(self, value: "DecodedStreamObject") -> None: # pragma: no cover - deprecation_with_replacement("decodedSelf", "decoded_self", "3.0.0") - self.decoded_self = value - - @property - def _data(self) -> Any: - return self.__data - - @_data.setter - def _data(self, value: Any) -> None: - self.__data = value - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - self[NameObject(SA.LENGTH)] = NumberObject(len(self._data)) - DictionaryObject.write_to_stream(self, stream, encryption_key) - del self[SA.LENGTH] - stream.write(b"\nstream\n") - data = self._data - if encryption_key: - from .._security import RC4_encrypt - - data = RC4_encrypt(encryption_key, data) - stream.write(data) - stream.write(b"\nendstream") - - @staticmethod - def initializeFromDictionary( - data: Dict[str, Any] - ) -> Union["EncodedStreamObject", "DecodedStreamObject"]: # pragma: no cover - return StreamObject.initialize_from_dictionary(data) - - @staticmethod - def initialize_from_dictionary( - data: Dict[str, Any] - ) -> Union["EncodedStreamObject", "DecodedStreamObject"]: - retval: Union["EncodedStreamObject", "DecodedStreamObject"] - if SA.FILTER in data: - retval = EncodedStreamObject() - else: - retval = DecodedStreamObject() - retval._data = data["__streamdata__"] - del data["__streamdata__"] - del data[SA.LENGTH] - retval.update(data) - return retval - - def flateEncode(self) -> "EncodedStreamObject": # pragma: no cover - deprecation_with_replacement("flateEncode", "flate_encode", "3.0.0") - return self.flate_encode() - - def flate_encode(self) -> "EncodedStreamObject": - from ..filters import FlateDecode - - if SA.FILTER in self: - f = self[SA.FILTER] - if isinstance(f, ArrayObject): - f.insert(0, NameObject(FT.FLATE_DECODE)) - else: - newf = ArrayObject() - newf.append(NameObject("/FlateDecode")) - newf.append(f) - f = newf - else: - f = NameObject("/FlateDecode") - retval = EncodedStreamObject() - 
retval[NameObject(SA.FILTER)] = f - retval._data = FlateDecode.encode(self._data) - return retval - - -class DecodedStreamObject(StreamObject): - def get_data(self) -> Any: - return self._data - - def set_data(self, data: Any) -> Any: - self._data = data - - def getData(self) -> Any: # pragma: no cover - deprecation_with_replacement("getData", "get_data", "3.0.0") - return self._data - - def setData(self, data: Any) -> None: # pragma: no cover - deprecation_with_replacement("setData", "set_data", "3.0.0") - self.set_data(data) - - -class EncodedStreamObject(StreamObject): - def __init__(self) -> None: - self.decoded_self: Optional["DecodedStreamObject"] = None - - @property - def decodedSelf(self) -> Optional["DecodedStreamObject"]: # pragma: no cover - deprecation_with_replacement("decodedSelf", "decoded_self", "3.0.0") - return self.decoded_self - - @decodedSelf.setter - def decodedSelf(self, value: DecodedStreamObject) -> None: # pragma: no cover - deprecation_with_replacement("decodedSelf", "decoded_self", "3.0.0") - self.decoded_self = value - - def get_data(self) -> Union[None, str, bytes]: - from ..filters import decode_stream_data - - if self.decoded_self is not None: - # cached version of decoded object - return self.decoded_self.get_data() - else: - # create decoded object - decoded = DecodedStreamObject() - - decoded._data = decode_stream_data(self) - for key, value in list(self.items()): - if key not in (SA.LENGTH, SA.FILTER, SA.DECODE_PARMS): - decoded[key] = value - self.decoded_self = decoded - return decoded._data - - def getData(self) -> Union[None, str, bytes]: # pragma: no cover - deprecation_with_replacement("getData", "get_data", "3.0.0") - return self.get_data() - - def set_data(self, data: Any) -> None: # pragma: no cover - raise PdfReadError("Creating EncodedStreamObject is not currently supported") - - def setData(self, data: Any) -> None: # pragma: no cover - deprecation_with_replacement("setData", "set_data", "3.0.0") - return 
self.set_data(data) - - -class ContentStream(DecodedStreamObject): - def __init__( - self, - stream: Any, - pdf: Any, - forced_encoding: Union[None, str, List[str], Dict[int, str]] = None, - ) -> None: - self.pdf = pdf - - # The inner list has two elements: - # [0] : List - # [1] : str - self.operations: List[Tuple[Any, Any]] = [] - - # stream may be a StreamObject or an ArrayObject containing - # multiple StreamObjects to be cat'd together. - if stream is not None: - stream = stream.get_object() - if isinstance(stream, ArrayObject): - data = b"" - for s in stream: - data += b_(s.get_object().get_data()) - if len(data) == 0 or data[-1] != b"\n": - data += b"\n" - stream_bytes = BytesIO(data) - else: - stream_data = stream.get_data() - assert stream_data is not None - stream_data_bytes = b_(stream_data) - stream_bytes = BytesIO(stream_data_bytes) - self.forced_encoding = forced_encoding - self.__parse_content_stream(stream_bytes) - - def clone( - self, - pdf_dest: Any, - force_duplicate: bool = False, - ignore_fields: Union[Tuple[str, ...], List[str], None] = (), - ) -> "ContentStream": - """clone object into pdf_dest""" - try: - if self.indirect_reference.pdf == pdf_dest and not force_duplicate: # type: ignore - return self - except Exception: - pass - - d__ = cast( - "ContentStream", self._reference_clone(self.__class__(None, None), pdf_dest) - ) - if ignore_fields is None: - ignore_fields = [] - d__._clone(self, pdf_dest, force_duplicate, ignore_fields) - return d__ - - def _clone( - self, - src: DictionaryObject, - pdf_dest: PdfWriterProtocol, - force_duplicate: bool, - ignore_fields: Union[Tuple[str, ...], List[str]], - ) -> None: - """update the object from src""" - self.pdf = pdf_dest - self.operations = list(cast("ContentStream", src).operations) - self.forced_encoding = cast("ContentStream", src).forced_encoding - # no need to call DictionaryObjection or any - # super(DictionaryObject,self)._clone(src, pdf_dest, force_duplicate, ignore_fields) - return - - 
def __parse_content_stream(self, stream: StreamType) -> None:
-        stream.seek(0, 0)
-        operands: List[Union[int, str, PdfObject]] = []
-        while True:
-            peek = read_non_whitespace(stream)
-            if peek == b"" or peek == 0:
-                break
-            stream.seek(-1, 1)
-            if peek.isalpha() or peek in (b"'", b'"'):
-                operator = read_until_regex(stream, NameObject.delimiter_pattern, True)
-                if operator == b"BI":
-                    # begin inline image - a completely different parsing
-                    # mechanism is required, of course... thanks buddy...
-                    assert operands == []
-                    ii = self._read_inline_image(stream)
-                    self.operations.append((ii, b"INLINE IMAGE"))
-                else:
-                    self.operations.append((operands, operator))
-                    operands = []
-            elif peek == b"%":
-                # If we encounter a comment in the content stream, we have to
-                # handle it here. Typically, read_object will handle
-                # encountering a comment -- but read_object assumes that
-                # following the comment must be the object we're trying to
-                # read. In this case, it could be an operator instead.
-                while peek not in (b"\r", b"\n"):
-                    peek = stream.read(1)
-            else:
-                operands.append(read_object(stream, None, self.forced_encoding))
-
-    def _read_inline_image(self, stream: StreamType) -> Dict[str, Any]:
-        # begin reading just after the "BI" - begin image
-        # first read the dictionary of settings.
-        settings = DictionaryObject()
-        while True:
-            tok = read_non_whitespace(stream)
-            stream.seek(-1, 1)
-            if tok == b"I":
-                # "ID" - begin of image data
-                break
-            key = read_object(stream, self.pdf)
-            tok = read_non_whitespace(stream)
-            stream.seek(-1, 1)
-            value = read_object(stream, self.pdf)
-            settings[key] = value
-        # left at beginning of ID
-        tmp = stream.read(3)
-        assert tmp[:2] == b"ID"
-        data = BytesIO()
-        # Read the inline image, while checking for EI (End Image) operator.
-        while True:
-            # Read 8 kB at a time and check if the chunk contains the E operator.
-            buf = stream.read(8192)
-            # We have reached the end of the stream, but haven't found the EI operator.
-            if not buf:
-                raise PdfReadError("Unexpected end of stream")
-            loc = buf.find(b"E")
-
-            if loc == -1:
-                data.write(buf)
-            else:
-                # Write out everything before the E.
-                data.write(buf[0:loc])
-
-                # Seek back in the stream to read the E next.
-                stream.seek(loc - len(buf), 1)
-                tok = stream.read(1)
-                # Check for End Image
-                tok2 = stream.read(1)
-                if tok2 == b"I" and buf[loc - 1 : loc] in WHITESPACES:
-                    # Data can contain [\s]EI, so check for the separator \s; 4 chars suffisent Q operator not required.
-                    tok3 = stream.read(1)
-                    info = tok + tok2
-                    # We need to find at least one whitespace after.
-                    has_q_whitespace = False
-                    while tok3 in WHITESPACES:
-                        has_q_whitespace = True
-                        info += tok3
-                        tok3 = stream.read(1)
-                    if has_q_whitespace:
-                        stream.seek(-1, 1)
-                        break
-                    else:
-                        stream.seek(-1, 1)
-                        data.write(info)
-                else:
-                    stream.seek(-1, 1)
-                    data.write(tok)
-        return {"settings": settings, "data": data.getvalue()}
-
-    @property
-    def _data(self) -> bytes:
-        newdata = BytesIO()
-        for operands, operator in self.operations:
-            if operator == b"INLINE IMAGE":
-                newdata.write(b"BI")
-                dicttext = BytesIO()
-                operands["settings"].write_to_stream(dicttext, None)
-                newdata.write(dicttext.getvalue()[2:-2])
-                newdata.write(b"ID ")
-                newdata.write(operands["data"])
-                newdata.write(b"EI")
-            else:
-                for op in operands:
-                    op.write_to_stream(newdata, None)
-                    newdata.write(b" ")
-                newdata.write(b_(operator))
-            newdata.write(b"\n")
-        return newdata.getvalue()
-
-    @_data.setter
-    def _data(self, value: Union[str, bytes]) -> None:
-        self.__parse_content_stream(BytesIO(b_(value)))
-
-
-def read_object(
-    stream: StreamType,
-    pdf: Any,  # PdfReader
-    forced_encoding: Union[None, str, List[str], Dict[int, str]] = None,
-) -> Union[PdfObject, int, str, ContentStream]:
-    tok = stream.read(1)
-    stream.seek(-1, 1)  # reset to start
-    if tok == b"/":
-        return NameObject.read_from_stream(stream, pdf)
-    elif tok == b"<":
-        # hexadecimal string OR dictionary
-        peek = stream.read(2)
-        stream.seek(-2, 1)  # reset to start
-
-        if peek == b"<<":
-            return DictionaryObject.read_from_stream(stream, pdf, forced_encoding)
-        else:
-            return read_hex_string_from_stream(stream, forced_encoding)
-    elif tok == b"[":
-        return ArrayObject.read_from_stream(stream, pdf, forced_encoding)
-    elif tok == b"t" or tok == b"f":
-        return BooleanObject.read_from_stream(stream)
-    elif tok == b"(":
-        return read_string_from_stream(stream, forced_encoding)
-    elif tok == b"e" and stream.read(6) == b"endobj":
-        stream.seek(-6, 1)
-        return NullObject()
-    elif tok == b"n":
-        return NullObject.read_from_stream(stream)
-    elif tok == b"%":
-        # comment
-        while tok not in (b"\r", b"\n"):
-            tok = stream.read(1)
-            # Prevents an infinite loop by raising an error if the stream is at
-            # the EOF
-            if len(tok) <= 0:
-                raise PdfStreamError("File ended unexpectedly.")
-        tok = read_non_whitespace(stream)
-        stream.seek(-1, 1)
-        return read_object(stream, pdf, forced_encoding)
-    elif tok in b"0123456789+-.":
-        # number object OR indirect reference
-        peek = stream.read(20)
-        stream.seek(-len(peek), 1)  # reset to start
-        if IndirectPattern.match(peek) is not None:
-            return IndirectObject.read_from_stream(stream, pdf)
-        else:
-            return NumberObject.read_from_stream(stream)
-    else:
-        stream.seek(-20, 1)
-        raise PdfReadError(
-            f"Invalid Elementary Object starting with {tok!r} @{stream.tell()}: {stream.read(80).__repr__()}"
-        )
-
-
-class Field(TreeObject):
-    """
-    A class representing a field dictionary.
- - This class is accessed through - :meth:`get_fields()` - """ - - def __init__(self, data: Dict[str, Any]) -> None: - DictionaryObject.__init__(self) - field_attributes = ( - FieldDictionaryAttributes.attributes() - + CheckboxRadioButtonAttributes.attributes() - ) - for attr in field_attributes: - try: - self[NameObject(attr)] = data[attr] - except KeyError: - pass - - # TABLE 8.69 Entries common to all field dictionaries - @property - def field_type(self) -> Optional[NameObject]: - """Read-only property accessing the type of this field.""" - return self.get(FieldDictionaryAttributes.FT) - - @property - def fieldType(self) -> Optional[NameObject]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :py:attr:`field_type` instead. - """ - deprecation_with_replacement("fieldType", "field_type", "3.0.0") - return self.field_type - - @property - def parent(self) -> Optional[DictionaryObject]: - """Read-only property accessing the parent of this field.""" - return self.get(FieldDictionaryAttributes.Parent) - - @property - def kids(self) -> Optional["ArrayObject"]: - """Read-only property accessing the kids of this field.""" - return self.get(FieldDictionaryAttributes.Kids) - - @property - def name(self) -> Optional[str]: - """Read-only property accessing the name of this field.""" - return self.get(FieldDictionaryAttributes.T) - - @property - def alternate_name(self) -> Optional[str]: - """Read-only property accessing the alternate name of this field.""" - return self.get(FieldDictionaryAttributes.TU) - - @property - def altName(self) -> Optional[str]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :py:attr:`alternate_name` instead. - """ - deprecation_with_replacement("altName", "alternate_name", "3.0.0") - return self.alternate_name - - @property - def mapping_name(self) -> Optional[str]: - """ - Read-only property accessing the mapping name of this field. 
This - name is used by PyPDF2 as a key in the dictionary returned by - :meth:`get_fields()` - """ - return self.get(FieldDictionaryAttributes.TM) - - @property - def mappingName(self) -> Optional[str]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :py:attr:`mapping_name` instead. - """ - deprecation_with_replacement("mappingName", "mapping_name", "3.0.0") - return self.mapping_name - - @property - def flags(self) -> Optional[int]: - """ - Read-only property accessing the field flags, specifying various - characteristics of the field (see Table 8.70 of the PDF 1.7 reference). - """ - return self.get(FieldDictionaryAttributes.Ff) - - @property - def value(self) -> Optional[Any]: - """ - Read-only property accessing the value of this field. Format - varies based on field type. - """ - return self.get(FieldDictionaryAttributes.V) - - @property - def default_value(self) -> Optional[Any]: - """Read-only property accessing the default value of this field.""" - return self.get(FieldDictionaryAttributes.DV) - - @property - def defaultValue(self) -> Optional[Any]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :py:attr:`default_value` instead. - """ - deprecation_with_replacement("defaultValue", "default_value", "3.0.0") - return self.default_value - - @property - def additional_actions(self) -> Optional[DictionaryObject]: - """ - Read-only property accessing the additional actions dictionary. - This dictionary defines the field's behavior in response to trigger events. - See Section 8.5.2 of the PDF 1.7 reference. - """ - return self.get(FieldDictionaryAttributes.AA) - - @property - def additionalActions(self) -> Optional[DictionaryObject]: # pragma: no cover - """ - .. deprecated:: 1.28.3 - - Use :py:attr:`additional_actions` instead. - """ - deprecation_with_replacement("additionalActions", "additional_actions", "3.0.0") - return self.additional_actions - - -class Destination(TreeObject): - """ - A class representing a destination within a PDF file. 
- See section 8.2.1 of the PDF 1.6 reference. - - :param str title: Title of this destination. - :param IndirectObject page: Reference to the page of this destination. Should - be an instance of :class:`IndirectObject`. - :param Fit fit: How the destination is displayed. - :raises PdfReadError: If destination type is invalid. - - - """ - - node: Optional[ - DictionaryObject - ] = None # node provide access to the original Object - childs: List[Any] = [] # used in PdfWriter - - def __init__( - self, - title: str, - page: Union[NumberObject, IndirectObject, NullObject, DictionaryObject], - fit: Fit, - ) -> None: - typ = fit.fit_type - args = fit.fit_args - - DictionaryObject.__init__(self) - self[NameObject("/Title")] = TextStringObject(title) - self[NameObject("/Page")] = page - self[NameObject("/Type")] = typ - - # from table 8.2 of the PDF 1.7 reference. - if typ == "/XYZ": - ( - self[NameObject(TA.LEFT)], - self[NameObject(TA.TOP)], - self[NameObject("/Zoom")], - ) = args - elif typ == TF.FIT_R: - ( - self[NameObject(TA.LEFT)], - self[NameObject(TA.BOTTOM)], - self[NameObject(TA.RIGHT)], - self[NameObject(TA.TOP)], - ) = args - elif typ in [TF.FIT_H, TF.FIT_BH]: - try: # Prefered to be more robust not only to null parameters - (self[NameObject(TA.TOP)],) = args - except Exception: - (self[NameObject(TA.TOP)],) = (NullObject(),) - elif typ in [TF.FIT_V, TF.FIT_BV]: - try: # Prefered to be more robust not only to null parameters - (self[NameObject(TA.LEFT)],) = args - except Exception: - (self[NameObject(TA.LEFT)],) = (NullObject(),) - elif typ in [TF.FIT, TF.FIT_B]: - pass - else: - raise PdfReadError(f"Unknown Destination Type: {typ!r}") - - @property - def dest_array(self) -> "ArrayObject": - return ArrayObject( - [self.raw_get("/Page"), self["/Type"]] - + [ - self[x] - for x in ["/Left", "/Bottom", "/Right", "/Top", "/Zoom"] - if x in self - ] - ) - - def getDestArray(self) -> "ArrayObject": # pragma: no cover - """ - .. 
deprecated:: 1.28.3 - - Use :py:attr:`dest_array` instead. - """ - deprecation_with_replacement("getDestArray", "dest_array", "3.0.0") - return self.dest_array - - def write_to_stream( - self, stream: StreamType, encryption_key: Union[None, str, bytes] - ) -> None: - stream.write(b"<<\n") - key = NameObject("/D") - key.write_to_stream(stream, encryption_key) - stream.write(b" ") - value = self.dest_array - value.write_to_stream(stream, encryption_key) - - key = NameObject("/S") - key.write_to_stream(stream, encryption_key) - stream.write(b" ") - value_s = NameObject("/GoTo") - value_s.write_to_stream(stream, encryption_key) - - stream.write(b"\n") - stream.write(b">>") - - @property - def title(self) -> Optional[str]: - """Read-only property accessing the destination title.""" - return self.get("/Title") - - @property - def page(self) -> Optional[int]: - """Read-only property accessing the destination page number.""" - return self.get("/Page") - - @property - def typ(self) -> Optional[str]: - """Read-only property accessing the destination type.""" - return self.get("/Type") - - @property - def zoom(self) -> Optional[int]: - """Read-only property accessing the zoom factor.""" - return self.get("/Zoom", None) - - @property - def left(self) -> Optional[FloatObject]: - """Read-only property accessing the left horizontal coordinate.""" - return self.get("/Left", None) - - @property - def right(self) -> Optional[FloatObject]: - """Read-only property accessing the right horizontal coordinate.""" - return self.get("/Right", None) - - @property - def top(self) -> Optional[FloatObject]: - """Read-only property accessing the top vertical coordinate.""" - return self.get("/Top", None) - - @property - def bottom(self) -> Optional[FloatObject]: - """Read-only property accessing the bottom vertical coordinate.""" - return self.get("/Bottom", None) - - @property - def color(self) -> Optional["ArrayObject"]: - """Read-only property accessing the color in (R, G, B) with values 
0.0-1.0""" - return self.get( - "/C", ArrayObject([FloatObject(0), FloatObject(0), FloatObject(0)]) - ) - - @property - def font_format(self) -> Optional[OutlineFontFlag]: - """Read-only property accessing the font type. 1=italic, 2=bold, 3=both""" - return self.get("/F", 0) - - @property - def outline_count(self) -> Optional[int]: - """ - Read-only property accessing the outline count. - positive = expanded - negative = collapsed - absolute value = number of visible descendents at all levels - """ - return self.get("/Count", None) diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_fit.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_fit.py deleted file mode 100644 index b0e7aaa9..00000000 --- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_fit.py +++ /dev/null @@ -1,129 +0,0 @@ -from typing import Any, Optional, Tuple, Union - - -class Fit: - def __init__( - self, fit_type: str, fit_args: Tuple[Union[None, float, Any], ...] = tuple() - ): - from ._base import FloatObject, NameObject, NullObject - - self.fit_type = NameObject(fit_type) - self.fit_args = [ - NullObject() if a is None or isinstance(a, NullObject) else FloatObject(a) - for a in fit_args - ] - - @classmethod - def xyz( - cls, - left: Optional[float] = None, - top: Optional[float] = None, - zoom: Optional[float] = None, - ) -> "Fit": - """ - Display the page designated by page, with the coordinates ( left , top ) - positioned at the upper-left corner of the window and the contents - of the page magnified by the factor zoom. - - A null value for any of the parameters left, top, or zoom specifies - that the current value of that parameter is to be retained unchanged. - - A zoom value of 0 has the same meaning as a null value. 
- """ - return Fit(fit_type="/XYZ", fit_args=(left, top, zoom)) - - @classmethod - def fit(cls) -> "Fit": - """ - Display the page designated by page, with its contents magnified just - enough to fit the entire page within the window both horizontally and - vertically. If the required horizontal and vertical magnification - factors are different, use the smaller of the two, centering the page - within the window in the other dimension. - """ - return Fit(fit_type="/Fit") - - @classmethod - def fit_horizontally(cls, top: Optional[float] = None) -> "Fit": - """ - Display the page designated by page , with the vertical coordinate top - positioned at the top edge of the window and the contents of the page - magnified just enough to fit the entire width of the page within the - window. - - A null value for `top` specifies that the current value of that - parameter is to be retained unchanged. - """ - return Fit(fit_type="/FitH", fit_args=(top,)) - - @classmethod - def fit_vertically(cls, left: Optional[float] = None) -> "Fit": - return Fit(fit_type="/FitV", fit_args=(left,)) - - @classmethod - def fit_rectangle( - cls, - left: Optional[float] = None, - bottom: Optional[float] = None, - right: Optional[float] = None, - top: Optional[float] = None, - ) -> "Fit": - """ - Display the page designated by page , with its contents magnified - just enough to fit the rectangle specified by the coordinates - left , bottom , right , and top entirely within the window - both horizontally and vertically. - - If the required horizontal and vertical magnification factors are - different, use the smaller of the two, centering the rectangle within - the window in the other dimension. - - A null value for any of the parameters may result in unpredictable - behavior. 
- """ - return Fit(fit_type="/FitR", fit_args=(left, bottom, right, top)) - - @classmethod - def fit_box(cls) -> "Fit": - """ - Display the page designated by page , with its contents magnified - just enough to fit its bounding box entirely within the window both - horizontally and vertically. If the required horizontal and vertical - magnification factors are different, use the smaller of the two, - centering the bounding box within the window in the other dimension. - """ - return Fit(fit_type="/FitB") - - @classmethod - def fit_box_horizontally(cls, top: Optional[float] = None) -> "Fit": - """ - Display the page designated by page , with the vertical coordinate - top positioned at the top edge of the window and the contents of the - page magnified just enough to fit the entire width of its bounding box - within the window. - - A null value for top specifies that the current value of that parameter - is to be retained unchanged. - """ - return Fit(fit_type="/FitBH", fit_args=(top,)) - - @classmethod - def fit_box_vertically(cls, left: Optional[float] = None) -> "Fit": - """ - Display the page designated by page , with the horizontal coordinate - left positioned at the left edge of the window and the contents of - the page magnified just enough to fit the entire height of its - bounding box within the window. - - A null value for left specifies that the current value of that - parameter is to be retained unchanged. 
-        """
-        return Fit(fit_type="/FitBV", fit_args=(left,))
-
-    def __str__(self) -> str:
-        if not self.fit_args:
-            return f"Fit({self.fit_type})"
-        return f"Fit({self.fit_type}, {self.fit_args})"
-
-
-DEFAULT_FIT = Fit.fit()
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_outline.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_outline.py
deleted file mode 100644
index c2e72c0a..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_outline.py
+++ /dev/null
@@ -1,35 +0,0 @@
-from typing import Any, Union
-
-from .._utils import StreamType, deprecation_with_replacement
-from ._base import NameObject
-from ._data_structures import Destination
-
-
-class OutlineItem(Destination):
-    def write_to_stream(
-        self, stream: StreamType, encryption_key: Union[None, str, bytes]
-    ) -> None:
-        stream.write(b"<<\n")
-        for key in [
-            NameObject(x)
-            for x in ["/Title", "/Parent", "/First", "/Last", "/Next", "/Prev"]
-            if x in self
-        ]:
-            key.write_to_stream(stream, encryption_key)
-            stream.write(b" ")
-            value = self.raw_get(key)
-            value.write_to_stream(stream, encryption_key)
-            stream.write(b"\n")
-        key = NameObject("/Dest")
-        key.write_to_stream(stream, encryption_key)
-        stream.write(b" ")
-        value = self.dest_array
-        value.write_to_stream(stream, encryption_key)
-        stream.write(b"\n")
-        stream.write(b">>")
-
-
-class Bookmark(OutlineItem):  # pragma: no cover
-    def __init__(self, *args: Any, **kwargs: Any) -> None:
-        deprecation_with_replacement("Bookmark", "OutlineItem", "3.0.0")
-        super().__init__(*args, **kwargs)
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_rectangle.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_rectangle.py
deleted file mode 100644
index 3f41bfd5..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_rectangle.py
+++ /dev/null
@@ -1,265 +0,0 @@
-import decimal
-from typing import Any, List, Tuple, Union
-
-from .._utils import deprecation_no_replacement, deprecation_with_replacement
-from ._base import FloatObject, NumberObject
-from ._data_structures import ArrayObject
-
-
-class RectangleObject(ArrayObject):
-    """
-    This class is used to represent *page boxes* in PyPDF2. These boxes include:
-    * :attr:`artbox `
-    * :attr:`bleedbox `
-    * :attr:`cropbox `
-    * :attr:`mediabox `
-    * :attr:`trimbox `
-    """
-
-    def __init__(
-        self, arr: Union["RectangleObject", Tuple[float, float, float, float]]
-    ) -> None:
-        # must have four points
-        assert len(arr) == 4
-        # automatically convert arr[x] into NumberObject(arr[x]) if necessary
-        ArrayObject.__init__(self, [self._ensure_is_number(x) for x in arr])  # type: ignore
-
-    def _ensure_is_number(self, value: Any) -> Union[FloatObject, NumberObject]:
-        if not isinstance(value, (NumberObject, FloatObject)):
-            value = FloatObject(value)
-        return value
-
-    def scale(self, sx: float, sy: float) -> "RectangleObject":
-        return RectangleObject(
-            (
-                float(self.left) * sx,
-                float(self.bottom) * sy,
-                float(self.right) * sx,
-                float(self.top) * sy,
-            )
-        )
-
-    def ensureIsNumber(
-        self, value: Any
-    ) -> Union[FloatObject, NumberObject]:  # pragma: no cover
-        deprecation_no_replacement("ensureIsNumber", "3.0.0")
-        return self._ensure_is_number(value)
-
-    def __repr__(self) -> str:
-        return f"RectangleObject({repr(list(self))})"
-
-    @property
-    def left(self) -> FloatObject:
-        return self[0]
-
-    @left.setter
-    def left(self, f: float) -> None:
-        self[0] = FloatObject(f)
-
-    @property
-    def bottom(self) -> FloatObject:
-        return self[1]
-
-    @bottom.setter
-    def bottom(self, f: float) -> None:
-        self[1] = FloatObject(f)
-
-    @property
-    def right(self) -> FloatObject:
-        return self[2]
-
-    @right.setter
-    def right(self, f: float) -> None:
-        self[2] = FloatObject(f)
-
-    @property
-    def top(self) -> FloatObject:
-        return self[3]
-
-    @top.setter
-    def top(self, f: float) -> None:
-        self[3] = FloatObject(f)
-
-    def getLowerLeft_x(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getLowerLeft_x", "left", "3.0.0")
-        return self.left
-
-    def getLowerLeft_y(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getLowerLeft_y", "bottom", "3.0.0")
-        return self.bottom
-
-    def getUpperRight_x(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getUpperRight_x", "right", "3.0.0")
-        return self.right
-
-    def getUpperRight_y(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getUpperRight_y", "top", "3.0.0")
-        return self.top
-
-    def getUpperLeft_x(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getUpperLeft_x", "left", "3.0.0")
-        return self.left
-
-    def getUpperLeft_y(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getUpperLeft_y", "top", "3.0.0")
-        return self.top
-
-    def getLowerRight_x(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getLowerRight_x", "right", "3.0.0")
-        return self.right
-
-    def getLowerRight_y(self) -> FloatObject:  # pragma: no cover
-        deprecation_with_replacement("getLowerRight_y", "bottom", "3.0.0")
-        return self.bottom
-
-    @property
-    def lower_left(self) -> Tuple[decimal.Decimal, decimal.Decimal]:
-        """
-        Property to read and modify the lower left coordinate of this box
-        in (x,y) form.
-        """
-        return self.left, self.bottom
-
-    @lower_left.setter
-    def lower_left(self, value: List[Any]) -> None:
-        self[0], self[1] = (self._ensure_is_number(x) for x in value)
-
-    @property
-    def lower_right(self) -> Tuple[decimal.Decimal, decimal.Decimal]:
-        """
-        Property to read and modify the lower right coordinate of this box
-        in (x,y) form.
-        """
-        return self.right, self.bottom
-
-    @lower_right.setter
-    def lower_right(self, value: List[Any]) -> None:
-        self[2], self[1] = (self._ensure_is_number(x) for x in value)
-
-    @property
-    def upper_left(self) -> Tuple[decimal.Decimal, decimal.Decimal]:
-        """
-        Property to read and modify the upper left coordinate of this box
-        in (x,y) form.
-        """
-        return self.left, self.top
-
-    @upper_left.setter
-    def upper_left(self, value: List[Any]) -> None:
-        self[0], self[3] = (self._ensure_is_number(x) for x in value)
-
-    @property
-    def upper_right(self) -> Tuple[decimal.Decimal, decimal.Decimal]:
-        """
-        Property to read and modify the upper right coordinate of this box
-        in (x,y) form.
-        """
-        return self.right, self.top
-
-    @upper_right.setter
-    def upper_right(self, value: List[Any]) -> None:
-        self[2], self[3] = (self._ensure_is_number(x) for x in value)
-
-    def getLowerLeft(
-        self,
-    ) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("getLowerLeft", "lower_left", "3.0.0")
-        return self.lower_left
-
-    def getLowerRight(
-        self,
-    ) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("getLowerRight", "lower_right", "3.0.0")
-        return self.lower_right
-
-    def getUpperLeft(
-        self,
-    ) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("getUpperLeft", "upper_left", "3.0.0")
-        return self.upper_left
-
-    def getUpperRight(
-        self,
-    ) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("getUpperRight", "upper_right", "3.0.0")
-        return self.upper_right
-
-    def setLowerLeft(self, value: Tuple[float, float]) -> None:  # pragma: no cover
-        deprecation_with_replacement("setLowerLeft", "lower_left", "3.0.0")
-        self.lower_left = value  # type: ignore
-
-    def setLowerRight(self, value: Tuple[float, float]) -> None:  # pragma: no cover
-        deprecation_with_replacement("setLowerRight", "lower_right", "3.0.0")
-        self[2], self[1] = (self._ensure_is_number(x) for x in value)
-
-    def setUpperLeft(self, value: Tuple[float, float]) -> None:  # pragma: no cover
-        deprecation_with_replacement("setUpperLeft", "upper_left", "3.0.0")
-        self[0], self[3] = (self._ensure_is_number(x) for x in value)
-
-    def setUpperRight(self, value: Tuple[float, float]) -> None:  # pragma: no cover
-        deprecation_with_replacement("setUpperRight", "upper_right", "3.0.0")
-        self[2], self[3] = (self._ensure_is_number(x) for x in value)
-
-    @property
-    def width(self) -> decimal.Decimal:
-        return self.right - self.left
-
-    def getWidth(self) -> decimal.Decimal:  # pragma: no cover
-        deprecation_with_replacement("getWidth", "width", "3.0.0")
-        return self.width
-
-    @property
-    def height(self) -> decimal.Decimal:
-        return self.top - self.bottom
-
-    def getHeight(self) -> decimal.Decimal:  # pragma: no cover
-        deprecation_with_replacement("getHeight", "height", "3.0.0")
-        return self.height
-
-    @property
-    def lowerLeft(self) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("lowerLeft", "lower_left", "3.0.0")
-        return self.lower_left
-
-    @lowerLeft.setter
-    def lowerLeft(
-        self, value: Tuple[decimal.Decimal, decimal.Decimal]
-    ) -> None:  # pragma: no cover
-        deprecation_with_replacement("lowerLeft", "lower_left", "3.0.0")
-        self.lower_left = value
-
-    @property
-    def lowerRight(self) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("lowerRight", "lower_right", "3.0.0")
-        return self.lower_right
-
-    @lowerRight.setter
-    def lowerRight(
-        self, value: Tuple[decimal.Decimal, decimal.Decimal]
-    ) -> None:  # pragma: no cover
-        deprecation_with_replacement("lowerRight", "lower_right", "3.0.0")
-        self.lower_right = value
-
-    @property
-    def upperLeft(self) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("upperLeft", "upper_left", "3.0.0")
-        return self.upper_left
-
-    @upperLeft.setter
-    def upperLeft(
-        self, value: Tuple[decimal.Decimal, decimal.Decimal]
-    ) -> None:  # pragma: no cover
-        deprecation_with_replacement("upperLeft", "upper_left", "3.0.0")
-        self.upper_left = value
-
-    @property
-    def upperRight(self) -> Tuple[decimal.Decimal, decimal.Decimal]:  # pragma: no cover
-        deprecation_with_replacement("upperRight", "upper_right", "3.0.0")
-        return self.upper_right
-
-    @upperRight.setter
-    def upperRight(
-        self, value: Tuple[decimal.Decimal, decimal.Decimal]
-    ) -> None:  # pragma: no cover
-        deprecation_with_replacement("upperRight", "upper_right", "3.0.0")
-        self.upper_right = value
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_utils.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_utils.py
deleted file mode 100644
index 2f8debdc..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/generic/_utils.py
+++ /dev/null
@@ -1,172 +0,0 @@
-import codecs
-from typing import Dict, List, Tuple, Union
-
-from .._codecs import _pdfdoc_encoding
-from .._utils import StreamType, b_, logger_warning, read_non_whitespace
-from ..errors import STREAM_TRUNCATED_PREMATURELY, PdfStreamError
-from ._base import ByteStringObject, TextStringObject
-
-
-def hex_to_rgb(value: str) -> Tuple[float, float, float]:
-    return tuple(int(value.lstrip("#")[i : i + 2], 16) / 255.0 for i in (0, 2, 4))  # type: ignore
-
-
-def read_hex_string_from_stream(
-    stream: StreamType,
-    forced_encoding: Union[None, str, List[str], Dict[int, str]] = None,
-) -> Union["TextStringObject", "ByteStringObject"]:
-    stream.read(1)
-    txt = ""
-    x = b""
-    while True:
-        tok = read_non_whitespace(stream)
-        if not tok:
-            raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY)
-        if tok == b">":
-            break
-        x += tok
-        if len(x) == 2:
-            txt += chr(int(x, base=16))
-            x = b""
-    if len(x) == 1:
-        x += b"0"
-    if len(x) == 2:
-        txt += chr(int(x, base=16))
-    return create_string_object(b_(txt), forced_encoding)
-
-
-def read_string_from_stream(
-    stream: StreamType,
-    forced_encoding: Union[None, str, List[str], Dict[int, str]] = None,
-) -> Union["TextStringObject", "ByteStringObject"]:
-    tok = stream.read(1)
-    parens = 1
-    txt = []
-    while True:
-        tok = stream.read(1)
-        if not tok:
-            raise PdfStreamError(STREAM_TRUNCATED_PREMATURELY)
-        if tok == b"(":
-            parens += 1
-        elif tok == b")":
-            parens -= 1
-            if parens == 0:
-                break
-        elif tok == b"\\":
-            tok = stream.read(1)
-            escape_dict = {
-                b"n": b"\n",
-                b"r": b"\r",
-                b"t": b"\t",
-                b"b": b"\b",
-                b"f": b"\f",
-                b"c": rb"\c",
-                b"(": b"(",
-                b")": b")",
-                b"/": b"/",
-                b"\\": b"\\",
-                b" ": b" ",
-                b"%": b"%",
-                b"<": b"<",
-                b">": b">",
-                b"[": b"[",
-                b"]": b"]",
-                b"#": b"#",
-                b"_": b"_",
-                b"&": b"&",
-                b"$": b"$",
-            }
-            try:
-                tok = escape_dict[tok]
-            except KeyError:
-                if b"0" <= tok and tok <= b"7":
-                    # "The number ddd may consist of one, two, or three
-                    # octal digits; high-order overflow shall be ignored.
-                    # Three octal digits shall be used, with leading zeros
-                    # as needed, if the next character of the string is also
-                    # a digit." (PDF reference 7.3.4.2, p 16)
-                    for _ in range(2):
-                        ntok = stream.read(1)
-                        if b"0" <= ntok and ntok <= b"7":
-                            tok += ntok
-                        else:
-                            stream.seek(-1, 1)  # ntok has to be analysed
-                            break
-                    tok = b_(chr(int(tok, base=8)))
-                elif tok in b"\n\r":
-                    # This case is hit when a backslash followed by a line
-                    # break occurs. If it's a multi-char EOL, consume the
-                    # second character:
-                    tok = stream.read(1)
-                    if tok not in b"\n\r":
-                        stream.seek(-1, 1)
-                    # Then don't add anything to the actual string, since this
-                    # line break was escaped:
-                    tok = b""
-                else:
-                    msg = rf"Unexpected escaped string: {tok.decode('utf8')}"
-                    logger_warning(msg, __name__)
-        txt.append(tok)
-    return create_string_object(b"".join(txt), forced_encoding)
-
-
-def create_string_object(
-    string: Union[str, bytes],
-    forced_encoding: Union[None, str, List[str], Dict[int, str]] = None,
-) -> Union[TextStringObject, ByteStringObject]:
-    """
-    Create a ByteStringObject or a TextStringObject from a string to represent the string.
-
-    :param Union[str, bytes] string: A string
-
-    :raises TypeError: If string is not of type str or bytes.
-    """
-    if isinstance(string, str):
-        return TextStringObject(string)
-    elif isinstance(string, bytes):
-        if isinstance(forced_encoding, (list, dict)):
-            out = ""
-            for x in string:
-                try:
-                    out += forced_encoding[x]
-                except Exception:
-                    out += bytes((x,)).decode("charmap")
-            return TextStringObject(out)
-        elif isinstance(forced_encoding, str):
-            if forced_encoding == "bytes":
-                return ByteStringObject(string)
-            return TextStringObject(string.decode(forced_encoding))
-        else:
-            try:
-                if string.startswith(codecs.BOM_UTF16_BE):
-                    retval = TextStringObject(string.decode("utf-16"))
-                    retval.autodetect_utf16 = True
-                    return retval
-                else:
-                    # This is probably a big performance hit here, but we need to
-                    # convert string objects into the text/unicode-aware version if
-                    # possible... and the only way to check if that's possible is
-                    # to try. Some strings are strings, some are just byte arrays.
-                    retval = TextStringObject(decode_pdfdocencoding(string))
-                    retval.autodetect_pdfdocencoding = True
-                    return retval
-            except UnicodeDecodeError:
-                return ByteStringObject(string)
-    else:
-        raise TypeError("create_string_object should have str or unicode arg")
-
-
-def decode_pdfdocencoding(byte_array: bytes) -> str:
-    retval = ""
-    for b in byte_array:
-        c = _pdfdoc_encoding[b]
-        if c == "\u0000":
-            raise UnicodeDecodeError(
-                "pdfdocencoding",
-                bytearray(b),
-                -1,
-                -1,
-                "does not exist in translation table",
-            )
-        retval += c
-    return retval
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/pagerange.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/pagerange.py
deleted file mode 100644
index f009adc1..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/pagerange.py
+++ /dev/null
@@ -1,173 +0,0 @@
-"""
-Representation and utils for ranges of PDF file pages.
-
-Copyright (c) 2014, Steve Witham .
-All rights reserved. This software is available under a BSD license;
-see https://github.com/py-pdf/PyPDF2/blob/main/LICENSE
-"""
-
-import re
-from typing import Any, List, Tuple, Union
-
-from .errors import ParseError
-
-_INT_RE = r"(0|-?[1-9]\d*)"  # A decimal int, don't allow "-0".
-PAGE_RANGE_RE = "^({int}|({int}?(:{int}?(:{int}?)?)))$".format(int=_INT_RE)
-# groups:        12     34     5 6     7 8
-
-
-class PageRange:
-    """
-    A slice-like representation of a range of page indices.
-
-    For example, page numbers, only starting at zero.
-
-    The syntax is like what you would put between brackets [ ].
-    The slice is one of the few Python types that can't be subclassed,
-    but this class converts to and from slices, and allows similar use.
-
-    - PageRange(str) parses a string representing a page range.
-    - PageRange(slice) directly "imports" a slice.
-    - to_slice() gives the equivalent slice.
-    - str() and repr() allow printing.
-    - indices(n) is like slice.indices(n).
-
-    """
-
-    def __init__(self, arg: Union[slice, "PageRange", str]) -> None:
-        """
-        Initialize with either a slice -- giving the equivalent page range,
-        or a PageRange object -- making a copy,
-        or a string like
-            "int", "[int]:[int]" or "[int]:[int]:[int]",
-        where the brackets indicate optional ints.
-        Remember, page indices start with zero.
-        Page range expression examples:
-            :     all pages.                    -1    last page.
-            22    just the 23rd page.           :-1   all but the last page.
-            0:3   the first three pages.        -2    second-to-last page.
-            :3    the first three pages.        -2:   last two pages.
-            5:    from the sixth page onward.   -3:-1 third & second to last.
-        The third, "stride" or "step" number is also recognized.
-            ::2       0 2 4 ... to the end.     3:0:-1    3 2 1 but not 0.
-            1:10:2    1 3 5 7 9                 2::-1     2 1 0.
-            ::-1      all pages in reverse order.
-        Note the difference between this notation and arguments to slice():
-            slice(3) means the first three pages;
-            PageRange("3") means the range of only the fourth page.
-            However PageRange(slice(3)) means the first three pages.
-        """
-        if isinstance(arg, slice):
-            self._slice = arg
-            return
-
-        if isinstance(arg, PageRange):
-            self._slice = arg.to_slice()
-            return
-
-        m = isinstance(arg, str) and re.match(PAGE_RANGE_RE, arg)
-        if not m:
-            raise ParseError(arg)
-        elif m.group(2):
-            # Special case: just an int means a range of one page.
-            start = int(m.group(2))
-            stop = start + 1 if start != -1 else None
-            self._slice = slice(start, stop)
-        else:
-            self._slice = slice(*[int(g) if g else None for g in m.group(4, 6, 8)])
-
-    @staticmethod
-    def valid(input: Any) -> bool:
-        """True if input is a valid initializer for a PageRange."""
-        return isinstance(input, (slice, PageRange)) or (
-            isinstance(input, str) and bool(re.match(PAGE_RANGE_RE, input))
-        )
-
-    def to_slice(self) -> slice:
-        """Return the slice equivalent of this page range."""
-        return self._slice
-
-    def __str__(self) -> str:
-        """A string like "1:2:3"."""
-        s = self._slice
-        indices: Union[Tuple[int, int], Tuple[int, int, int]]
-        if s.step is None:
-            if s.start is not None and s.stop == s.start + 1:
-                return str(s.start)
-
-            indices = s.start, s.stop
-        else:
-            indices = s.start, s.stop, s.step
-        return ":".join("" if i is None else str(i) for i in indices)
-
-    def __repr__(self) -> str:
-        """A string like "PageRange('1:2:3')"."""
-        return "PageRange(" + repr(str(self)) + ")"
-
-    def indices(self, n: int) -> Tuple[int, int, int]:
-        """
-        n is the length of the list of pages to choose from.
-
-        Returns arguments for range(). See help(slice.indices).
-        """
-        return self._slice.indices(n)
-
-    def __eq__(self, other: Any) -> bool:
-        if not isinstance(other, PageRange):
-            return False
-        return self._slice == other._slice
-
-    def __add__(self, other: "PageRange") -> "PageRange":
-        if not isinstance(other, PageRange):
-            raise TypeError(f"Can't add PageRange and {type(other)}")
-        if self._slice.step is not None or other._slice.step is not None:
-            raise ValueError("Can't add PageRange with stride")
-        a = self._slice.start, self._slice.stop
-        b = other._slice.start, other._slice.stop
-
-        if a[0] > b[0]:
-            a, b = b, a
-
-        # Now a[0] is the smallest
-        if b[0] > a[1]:
-            # There is a gap between a and b.
-            raise ValueError("Can't add PageRanges with gap")
-        return PageRange(slice(a[0], max(a[1], b[1])))
-
-
-PAGE_RANGE_ALL = PageRange(":")  # The range of all pages.
-
-
-def parse_filename_page_ranges(
-    args: List[Union[str, PageRange, None]]
-) -> List[Tuple[str, PageRange]]:
-    """
-    Given a list of filenames and page ranges, return a list of (filename, page_range) pairs.
-
-    First arg must be a filename; other ags are filenames, page-range
-    expressions, slice objects, or PageRange objects.
-    A filename not followed by a page range indicates all pages of the file.
-    """
-    pairs: List[Tuple[str, PageRange]] = []
-    pdf_filename = None
-    did_page_range = False
-    for arg in args + [None]:
-        if PageRange.valid(arg):
-            if not pdf_filename:
-                raise ValueError(
-                    "The first argument must be a filename, not a page range."
-                )
-
-            pairs.append((pdf_filename, PageRange(arg)))
-            did_page_range = True
-        else:
-            # New filename or end of list--do all of the previous file?
-            if pdf_filename and not did_page_range:
-                pairs.append((pdf_filename, PAGE_RANGE_ALL))
-
-            pdf_filename = arg
-            did_page_range = False
-    return pairs
-
-
-PageRangeSpec = Union[str, PageRange, Tuple[int, int], Tuple[int, int, int], List[int]]
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/papersizes.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/papersizes.py
deleted file mode 100644
index 51aa2de5..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/papersizes.py
+++ /dev/null
@@ -1,48 +0,0 @@
-"""Helper to get paper sizes."""
-
-from collections import namedtuple
-
-Dimensions = namedtuple("Dimensions", ["width", "height"])
-
-
-class PaperSize:
-    """(width, height) of the paper in portrait mode in pixels at 72 ppi."""
-
-    # Notes how to calculate it:
-    # 1. Get the size of the paper in mm
-    # 2. Convert it to inches (25.4 millimeters are equal to 1 inches)
-    # 3. Convert it to pixels ad 72dpi (1 inch is equal to 72 pixels)
-
-    # All Din-A paper sizes follow this pattern:
-    # 2xA(n-1) = A(n)
-    # So the height of the next bigger one is the width of the smaller one
-    # The ratio is always approximately the ratio 1:2**0.5
-    # Additionally, A0 is defined to have an area of 1 m**2
-    # Be aware of rounding issues!
-    A0 = Dimensions(2384, 3370)  # 841mm x 1189mm
-    A1 = Dimensions(1684, 2384)
-    A2 = Dimensions(1191, 1684)
-    A3 = Dimensions(842, 1191)
-    A4 = Dimensions(
-        595, 842
-    )  # Printer paper, documents - this is by far the most common
-    A5 = Dimensions(420, 595)  # Paperback books
-    A6 = Dimensions(298, 420)  # Post cards
-    A7 = Dimensions(210, 298)
-    A8 = Dimensions(147, 210)
-
-    # Envelopes
-    C4 = Dimensions(649, 918)
-
-
-_din_a = (
-    PaperSize.A0,
-    PaperSize.A1,
-    PaperSize.A2,
-    PaperSize.A3,
-    PaperSize.A4,
-    PaperSize.A5,
-    PaperSize.A6,
-    PaperSize.A7,
-    PaperSize.A8,
-)
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/py.typed b/pptx-env/lib/python3.12/site-packages/PyPDF2/py.typed
deleted file mode 100644
index e69de29b..00000000
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/types.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/types.py
deleted file mode 100644
index 92cba6fe..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/types.py
+++ /dev/null
@@ -1,52 +0,0 @@
-"""Helpers for working with PDF types."""
-
-from typing import List, Union
-
-try:
-    # Python 3.8+: https://peps.python.org/pep-0586
-    from typing import Literal  # type: ignore[attr-defined]
-except ImportError:
-    from typing_extensions import Literal  # type: ignore[misc]
-
-try:
-    # Python 3.10+: https://www.python.org/dev/peps/pep-0484/
-    from typing import TypeAlias  # type: ignore[attr-defined]
-except ImportError:
-    from typing_extensions import TypeAlias
-
-from .generic._base import NameObject, NullObject, NumberObject
-from .generic._data_structures import ArrayObject, Destination
-from .generic._outline import OutlineItem
-
-BorderArrayType: TypeAlias = List[Union[NameObject, NumberObject, ArrayObject]]
-OutlineItemType: TypeAlias = Union[OutlineItem, Destination]
-FitType: TypeAlias = Literal[
-    "/Fit", "/XYZ", "/FitH", "/FitV", "/FitR", "/FitB", "/FitBH", "/FitBV"
-]
-# Those go with the FitType: They specify values for the fit
-ZoomArgType: TypeAlias = Union[NumberObject, NullObject, float]
-ZoomArgsType: TypeAlias = List[ZoomArgType]
-
-# Recursive types like the following are not yet supported by mypy:
-# OutlineType = List[Union[Destination, "OutlineType"]]
-# See https://github.com/python/mypy/issues/731
-# Hence use this for the moment:
-OutlineType = List[Union[Destination, List[Union[Destination, List[Destination]]]]]
-
-LayoutType: TypeAlias = Literal[
-    "/NoLayout",
-    "/SinglePage",
-    "/OneColumn",
-    "/TwoColumnLeft",
-    "/TwoColumnRight",
-    "/TwoPageLeft",
-    "/TwoPageRight",
-]
-PagemodeType: TypeAlias = Literal[
-    "/UseNone",
-    "/UseOutlines",
-    "/UseThumbs",
-    "/FullScreen",
-    "/UseOC",
-    "/UseAttachments",
-]
diff --git a/pptx-env/lib/python3.12/site-packages/PyPDF2/xmp.py b/pptx-env/lib/python3.12/site-packages/PyPDF2/xmp.py
deleted file mode 100644
index de432823..00000000
--- a/pptx-env/lib/python3.12/site-packages/PyPDF2/xmp.py
+++ /dev/null
@@ -1,525 +0,0 @@
-"""
-Anything related to XMP metadata.
-
-See https://en.wikipedia.org/wiki/Extensible_Metadata_Platform
-"""
-
-import datetime
-import decimal
-import re
-from typing import (
-    Any,
-    Callable,
-    Dict,
-    Iterator,
-    List,
-    Optional,
-    TypeVar,
-    Union,
-    cast,
-)
-from xml.dom.minidom import Document
-from xml.dom.minidom import Element as XmlElement
-from xml.dom.minidom import parseString
-from xml.parsers.expat import ExpatError
-
-from ._utils import (
-    StreamType,
-    deprecate_with_replacement,
-    deprecation_with_replacement,
-)
-from .errors import PdfReadError
-from .generic import ContentStream, PdfObject
-
-RDF_NAMESPACE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
-DC_NAMESPACE = "http://purl.org/dc/elements/1.1/"
-XMP_NAMESPACE = "http://ns.adobe.com/xap/1.0/"
-PDF_NAMESPACE = "http://ns.adobe.com/pdf/1.3/"
-XMPMM_NAMESPACE = "http://ns.adobe.com/xap/1.0/mm/"
-
-# What is the PDFX namespace, you might ask? I might ask that too. It's
-# a completely undocumented namespace used to place "custom metadata"
-# properties, which are arbitrary metadata properties with no semantic or
-# documented meaning. Elements in the namespace are key/value-style storage,
-# where the element name is the key and the content is the value. The keys
-# are transformed into valid XML identifiers by substituting an invalid
-# identifier character with \u2182 followed by the unicode hex ID of the
-# original character. A key like "my car" is therefore "my\u21820020car".
-#
-# \u2182, in case you're wondering, is the unicode character
-# \u{ROMAN NUMERAL TEN THOUSAND}, a straightforward and obvious choice for
-# escaping characters.
-#
-# Intentional users of the pdfx namespace should be shot on sight. A
-# custom data schema and sensical XML elements could be used instead, as is
-# suggested by Adobe's own documentation on XMP (under "Extensibility of
-# Schemas").
-#
-# Information presented here on the /pdfx/ schema is a result of limited
-# reverse engineering, and does not constitute a full specification.
-PDFX_NAMESPACE = "http://ns.adobe.com/pdfx/1.3/"
-
-iso8601 = re.compile(
-    """
-    (?P<year>[0-9]{4})
-    (-
-        (?P<month>[0-9]{2})
-        (-
-            (?P<day>[0-9]+)
-            (T
-                (?P<hour>[0-9]{2}):
-                (?P<minute>[0-9]{2})
-                (:(?P<second>[0-9]{2}(.[0-9]+)?))?
-                (?P<tzd>Z|[-+][0-9]{2}:[0-9]{2})
-            )?
-        )?
-    )?
-    """,
-    re.VERBOSE,
-)
-
-
-K = TypeVar("K")
-
-
-def _identity(value: K) -> K:
-    return value
-
-
-def _converter_date(value: str) -> datetime.datetime:
-    matches = iso8601.match(value)
-    if matches is None:
-        raise ValueError(f"Invalid date format: {value}")
-    year = int(matches.group("year"))
-    month = int(matches.group("month") or "1")
-    day = int(matches.group("day") or "1")
-    hour = int(matches.group("hour") or "0")
-    minute = int(matches.group("minute") or "0")
-    second = decimal.Decimal(matches.group("second") or "0")
-    seconds_dec = second.to_integral(decimal.ROUND_FLOOR)
-    milliseconds_dec = (second - seconds_dec) * 1000000
-
-    seconds = int(seconds_dec)
-    milliseconds = int(milliseconds_dec)
-
-    tzd = matches.group("tzd") or "Z"
-    dt = datetime.datetime(year, month, day, hour, minute, seconds, milliseconds)
-    if tzd != "Z":
-        tzd_hours, tzd_minutes = (int(x) for x in tzd.split(":"))
-        tzd_hours *= -1
-        if tzd_hours < 0:
-            tzd_minutes *= -1
-        dt = dt + datetime.timedelta(hours=tzd_hours, minutes=tzd_minutes)
-    return dt
-
-
-def _getter_bag(
-    namespace: str, name: str
-) -> Callable[["XmpInformation"], Optional[List[str]]]:
-    def get(self: "XmpInformation") -> Optional[List[str]]:
-        cached = self.cache.get(namespace, {}).get(name)
-        if cached:
-            return cached
-        retval = []
-        for element in self.get_element("", namespace, name):
-            bags = element.getElementsByTagNameNS(RDF_NAMESPACE, "Bag")
-            if len(bags):
-                for bag in bags:
-                    for item in bag.getElementsByTagNameNS(RDF_NAMESPACE, "li"):
-                        value = self._get_text(item)
-                        retval.append(value)
-        ns_cache = self.cache.setdefault(namespace, {})
-        ns_cache[name] = retval
-        return retval
-
-    return get
-
-
-def _getter_seq(
-    namespace: str, name: str, converter: Callable[[Any], Any] = _identity
-) -> Callable[["XmpInformation"], Optional[List[Any]]]:
-    def get(self: "XmpInformation") -> Optional[List[Any]]:
-        cached = self.cache.get(namespace, {}).get(name)
-        if cached:
-            return cached
-        retval = []
-        for element in self.get_element("", namespace, name):
-            seqs = element.getElementsByTagNameNS(RDF_NAMESPACE, "Seq")
-            if len(seqs):
-                for seq in seqs:
-                    for item in seq.getElementsByTagNameNS(RDF_NAMESPACE, "li"):
-                        value = self._get_text(item)
-                        value = converter(value)
-                        retval.append(value)
-            else:
-                value = converter(self._get_text(element))
-                retval.append(value)
-        ns_cache = self.cache.setdefault(namespace, {})
-        ns_cache[name] = retval
-        return retval
-
-    return get
-
-
-def _getter_langalt(
-    namespace: str, name: str
-) -> Callable[["XmpInformation"], Optional[Dict[Any, Any]]]:
-    def get(self: "XmpInformation") -> Optional[Dict[Any, Any]]:
-        cached = self.cache.get(namespace, {}).get(name)
-        if cached:
-            return cached
-        retval = {}
-        for element in self.get_element("", namespace, name):
-            alts = element.getElementsByTagNameNS(RDF_NAMESPACE, "Alt")
-            if len(alts):
-                for alt in alts:
-                    for item in alt.getElementsByTagNameNS(RDF_NAMESPACE, "li"):
-                        value = self._get_text(item)
-                        retval[item.getAttribute("xml:lang")] = value
-            else:
-                retval["x-default"] = self._get_text(element)
-        ns_cache = self.cache.setdefault(namespace, {})
-        ns_cache[name] = retval
-        return retval
-
-    return get
-
-
-def _getter_single(
-    namespace: str, name: str, converter: Callable[[str], Any] = _identity
-) -> Callable[["XmpInformation"], Optional[Any]]:
-    def get(self: "XmpInformation") -> Optional[Any]:
-        cached = self.cache.get(namespace, {}).get(name)
-        if cached:
-            return cached
-        value = None
-        for element in self.get_element("", namespace, name):
-            if element.nodeType == element.ATTRIBUTE_NODE:
-                value = element.nodeValue
-            else:
-                value = self._get_text(element)
-            break
-        if value is not None:
-            value = converter(value)
-        ns_cache = self.cache.setdefault(namespace, {})
-        ns_cache[name] = value
-        return value
-
-    return get
-
-
-class XmpInformation(PdfObject):
-    """
-    An object that represents Adobe XMP metadata.
-    Usually accessed by :py:attr:`xmp_metadata()`
-
-    :raises PdfReadError: if XML is invalid
-    """
-
-    def __init__(self, stream: ContentStream) -> None:
-        self.stream = stream
-        try:
-            data = self.stream.get_data()
-            doc_root: Document = parseString(data)
-        except ExpatError as e:
-            raise PdfReadError(f"XML in XmpInformation was invalid: {e}")
-        self.rdf_root: XmlElement = doc_root.getElementsByTagNameNS(
-            RDF_NAMESPACE, "RDF"
-        )[0]
-        self.cache: Dict[Any, Any] = {}
-
-    @property
-    def rdfRoot(self) -> XmlElement:  # pragma: no cover
-        deprecate_with_replacement("rdfRoot", "rdf_root", "4.0.0")
-        return self.rdf_root
-
-    def write_to_stream(
-        self, stream: StreamType, encryption_key: Union[None, str, bytes]
-    ) -> None:
-        self.stream.write_to_stream(stream, encryption_key)
-
-    def writeToStream(
-        self, stream: StreamType, encryption_key: Union[None, str, bytes]
-    ) -> None:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`write_to_stream` instead.
-        """
-        deprecation_with_replacement("writeToStream", "write_to_stream", "3.0.0")
-        self.write_to_stream(stream, encryption_key)
-
-    def get_element(self, about_uri: str, namespace: str, name: str) -> Iterator[Any]:
-        for desc in self.rdf_root.getElementsByTagNameNS(RDF_NAMESPACE, "Description"):
-            if desc.getAttributeNS(RDF_NAMESPACE, "about") == about_uri:
-                attr = desc.getAttributeNodeNS(namespace, name)
-                if attr is not None:
-                    yield attr
-                yield from desc.getElementsByTagNameNS(namespace, name)
-
-    def getElement(
-        self, aboutUri: str, namespace: str, name: str
-    ) -> Iterator[Any]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`get_element` instead.
-        """
-        deprecation_with_replacement("getElement", "get_element", "3.0.0")
-        return self.get_element(aboutUri, namespace, name)
-
-    def get_nodes_in_namespace(self, about_uri: str, namespace: str) -> Iterator[Any]:
-        for desc in self.rdf_root.getElementsByTagNameNS(RDF_NAMESPACE, "Description"):
-            if desc.getAttributeNS(RDF_NAMESPACE, "about") == about_uri:
-                for i in range(desc.attributes.length):
-                    attr = desc.attributes.item(i)
-                    if attr.namespaceURI == namespace:
-                        yield attr
-                for child in desc.childNodes:
-                    if child.namespaceURI == namespace:
-                        yield child
-
-    def getNodesInNamespace(
-        self, aboutUri: str, namespace: str
-    ) -> Iterator[Any]:  # pragma: no cover
-        """
-        .. deprecated:: 1.28.0
-
-            Use :meth:`get_nodes_in_namespace` instead.
-        """
-        deprecation_with_replacement(
-            "getNodesInNamespace", "get_nodes_in_namespace", "3.0.0"
-        )
-        return self.get_nodes_in_namespace(aboutUri, namespace)
-
-    def _get_text(self, element: XmlElement) -> str:
-        text = ""
-        for child in element.childNodes:
-            if child.nodeType == child.TEXT_NODE:
-                text += child.data
-        return text
-
-    dc_contributor = property(_getter_bag(DC_NAMESPACE, "contributor"))
-    """
-    Contributors to the resource (other than the authors). An unsorted
-    array of names.
-    """
-
-    dc_coverage = property(_getter_single(DC_NAMESPACE, "coverage"))
-    """
-    Text describing the extent or scope of the resource.
-    """
-
-    dc_creator = property(_getter_seq(DC_NAMESPACE, "creator"))
-    """
-    A sorted array of names of the authors of the resource, listed in order
-    of precedence.
-    """
-
-    dc_date = property(_getter_seq(DC_NAMESPACE, "date", _converter_date))
-    """
-    A sorted array of dates (datetime.datetime instances) of significance to
-    the resource. The dates and times are in UTC.
-    """
-
-    dc_description = property(_getter_langalt(DC_NAMESPACE, "description"))
-    """
-    A language-keyed dictionary of textual descriptions of the content of the
-    resource.
- """ - - dc_format = property(_getter_single(DC_NAMESPACE, "format")) - """ - The mime-type of the resource. - """ - - dc_identifier = property(_getter_single(DC_NAMESPACE, "identifier")) - """ - Unique identifier of the resource. - """ - - dc_language = property(_getter_bag(DC_NAMESPACE, "language")) - """ - An unordered array specifying the languages used in the resource. - """ - - dc_publisher = property(_getter_bag(DC_NAMESPACE, "publisher")) - """ - An unordered array of publisher names. - """ - - dc_relation = property(_getter_bag(DC_NAMESPACE, "relation")) - """ - An unordered array of text descriptions of relationships to other - documents. - """ - - dc_rights = property(_getter_langalt(DC_NAMESPACE, "rights")) - """ - A language-keyed dictionary of textual descriptions of the rights the - user has to this resource. - """ - - dc_source = property(_getter_single(DC_NAMESPACE, "source")) - """ - Unique identifier of the work from which this resource was derived. - """ - - dc_subject = property(_getter_bag(DC_NAMESPACE, "subject")) - """ - An unordered array of descriptive phrases or keywrods that specify the - topic of the content of the resource. - """ - - dc_title = property(_getter_langalt(DC_NAMESPACE, "title")) - """ - A language-keyed dictionary of the title of the resource. - """ - - dc_type = property(_getter_bag(DC_NAMESPACE, "type")) - """ - An unordered array of textual descriptions of the document type. - """ - - pdf_keywords = property(_getter_single(PDF_NAMESPACE, "Keywords")) - """ - An unformatted text string representing document keywords. - """ - - pdf_pdfversion = property(_getter_single(PDF_NAMESPACE, "PDFVersion")) - """ - The PDF file version, for example 1.0, 1.3. - """ - - pdf_producer = property(_getter_single(PDF_NAMESPACE, "Producer")) - """ - The name of the tool that created the PDF document. 
- """ - - xmp_create_date = property( - _getter_single(XMP_NAMESPACE, "CreateDate", _converter_date) - ) - """ - The date and time the resource was originally created. The date and - time are returned as a UTC datetime.datetime object. - """ - - @property - def xmp_createDate(self) -> datetime.datetime: # pragma: no cover - deprecate_with_replacement("xmp_createDate", "xmp_create_date", "4.0.0") - return self.xmp_create_date - - @xmp_createDate.setter - def xmp_createDate(self, value: datetime.datetime) -> None: # pragma: no cover - deprecate_with_replacement("xmp_createDate", "xmp_create_date", "4.0.0") - self.xmp_create_date = value - - xmp_modify_date = property( - _getter_single(XMP_NAMESPACE, "ModifyDate", _converter_date) - ) - """ - The date and time the resource was last modified. The date and time - are returned as a UTC datetime.datetime object. - """ - - @property - def xmp_modifyDate(self) -> datetime.datetime: # pragma: no cover - deprecate_with_replacement("xmp_modifyDate", "xmp_modify_date", "4.0.0") - return self.xmp_modify_date - - @xmp_modifyDate.setter - def xmp_modifyDate(self, value: datetime.datetime) -> None: # pragma: no cover - deprecate_with_replacement("xmp_modifyDate", "xmp_modify_date", "4.0.0") - self.xmp_modify_date = value - - xmp_metadata_date = property( - _getter_single(XMP_NAMESPACE, "MetadataDate", _converter_date) - ) - """ - The date and time that any metadata for this resource was last changed. - - The date and time are returned as a UTC datetime.datetime object. 
- """ - - @property - def xmp_metadataDate(self) -> datetime.datetime: # pragma: no cover - deprecate_with_replacement("xmp_metadataDate", "xmp_metadata_date", "4.0.0") - return self.xmp_metadata_date - - @xmp_metadataDate.setter - def xmp_metadataDate(self, value: datetime.datetime) -> None: # pragma: no cover - deprecate_with_replacement("xmp_metadataDate", "xmp_metadata_date", "4.0.0") - self.xmp_metadata_date = value - - xmp_creator_tool = property(_getter_single(XMP_NAMESPACE, "CreatorTool")) - """The name of the first known tool used to create the resource.""" - - @property - def xmp_creatorTool(self) -> str: # pragma: no cover - deprecation_with_replacement("xmp_creatorTool", "xmp_creator_tool", "3.0.0") - return self.xmp_creator_tool - - @xmp_creatorTool.setter - def xmp_creatorTool(self, value: str) -> None: # pragma: no cover - deprecation_with_replacement("xmp_creatorTool", "xmp_creator_tool", "3.0.0") - self.xmp_creator_tool = value - - xmpmm_document_id = property(_getter_single(XMPMM_NAMESPACE, "DocumentID")) - """ - The common identifier for all versions and renditions of this resource. - """ - - @property - def xmpmm_documentId(self) -> str: # pragma: no cover - deprecation_with_replacement("xmpmm_documentId", "xmpmm_document_id", "3.0.0") - return self.xmpmm_document_id - - @xmpmm_documentId.setter - def xmpmm_documentId(self, value: str) -> None: # pragma: no cover - deprecation_with_replacement("xmpmm_documentId", "xmpmm_document_id", "3.0.0") - self.xmpmm_document_id = value - - xmpmm_instance_id = property(_getter_single(XMPMM_NAMESPACE, "InstanceID")) - """ - An identifier for a specific incarnation of a document, updated each - time a file is saved. 
- """ - - @property - def xmpmm_instanceId(self) -> str: # pragma: no cover - deprecation_with_replacement("xmpmm_instanceId", "xmpmm_instance_id", "3.0.0") - return cast(str, self.xmpmm_instance_id) - - @xmpmm_instanceId.setter - def xmpmm_instanceId(self, value: str) -> None: # pragma: no cover - deprecation_with_replacement("xmpmm_instanceId", "xmpmm_instance_id", "3.0.0") - self.xmpmm_instance_id = value - - @property - def custom_properties(self) -> Dict[Any, Any]: - """ - Retrieve custom metadata properties defined in the undocumented pdfx - metadata schema. - - :return: a dictionary of key/value items for custom metadata properties. - """ - if not hasattr(self, "_custom_properties"): - self._custom_properties = {} - for node in self.get_nodes_in_namespace("", PDFX_NAMESPACE): - key = node.localName - while True: - # see documentation about PDFX_NAMESPACE earlier in file - idx = key.find("\u2182") - if idx == -1: - break - key = ( - key[:idx] - + chr(int(key[idx + 1 : idx + 5], base=16)) - + key[idx + 5 :] - ) - if node.nodeType == node.ATTRIBUTE_NODE: - value = node.nodeValue - else: - value = self._get_text(node) - self._custom_properties[key] = value - return self._custom_properties diff --git a/pptx-env/lib/python3.12/site-packages/__pycache__/brotli.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/__pycache__/brotli.cpython-312.pyc deleted file mode 100644 index a62968a8..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/__pycache__/brotli.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/__pycache__/typing_extensions.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/__pycache__/typing_extensions.cpython-312.pyc deleted file mode 100644 index 169033c0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/__pycache__/typing_extensions.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/_brotli.cpython-312-x86_64-linux-gnu.so 
b/pptx-env/lib/python3.12/site-packages/_brotli.cpython-312-x86_64-linux-gnu.so deleted file mode 100755 index 5eba5f84..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/_brotli.cpython-312-x86_64-linux-gnu.so and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/_cffi_backend.cpython-312-x86_64-linux-gnu.so b/pptx-env/lib/python3.12/site-packages/_cffi_backend.cpython-312-x86_64-linux-gnu.so deleted file mode 100755 index 156ee431..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/_cffi_backend.cpython-312-x86_64-linux-gnu.so and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/brotli.py b/pptx-env/lib/python3.12/site-packages/brotli.py deleted file mode 100644 index 9be4ed4b..00000000 --- a/pptx-env/lib/python3.12/site-packages/brotli.py +++ /dev/null @@ -1,55 +0,0 @@ -# Copyright 2016 The Brotli Authors. All rights reserved. -# -# Distributed under MIT license. -# See file LICENSE for detail or copy at https://opensource.org/licenses/MIT - -"""Functions to compress and decompress data using the Brotli library.""" - -import _brotli - -# The library version. -version = __version__ = _brotli.__version__ - -# The compression mode. -MODE_GENERIC = _brotli.MODE_GENERIC -MODE_TEXT = _brotli.MODE_TEXT -MODE_FONT = _brotli.MODE_FONT - -# The Compressor object. -Compressor = _brotli.Compressor - -# The Decompressor object. -Decompressor = _brotli.Decompressor - -# Compress a byte string. -def compress(string, mode=MODE_GENERIC, quality=11, lgwin=22, lgblock=0): - """Compress a byte string. - - Args: - string (bytes): The input data. - mode (int, optional): The compression mode can be MODE_GENERIC (default), - MODE_TEXT (for UTF-8 format text input) or MODE_FONT (for WOFF 2.0). - quality (int, optional): Controls the compression-speed vs compression- - density tradeoff. The higher the quality, the slower the compression. - Range is 0 to 11. Defaults to 11. 
- lgwin (int, optional): Base 2 logarithm of the sliding window size. Range - is 10 to 24. Defaults to 22. - lgblock (int, optional): Base 2 logarithm of the maximum input block size. - Range is 16 to 24. If set to 0, the value will be set based on the - quality. Defaults to 0. - - Returns: - The compressed byte string. - - Raises: - brotli.error: If arguments are invalid, or compressor fails. - """ - compressor = Compressor(mode=mode, quality=quality, lgwin=lgwin, - lgblock=lgblock) - return compressor.process(string) + compressor.finish() - -# Decompress a compressed byte string. -decompress = _brotli.decompress - -# Raised if compression or decompression fails. -error = _brotli.error diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/INSTALLER b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/INSTALLER deleted file mode 100644 index a1b589e3..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/INSTALLER +++ /dev/null @@ -1 +0,0 @@ -pip diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/METADATA b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/METADATA deleted file mode 100644 index 67508e56..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/METADATA +++ /dev/null @@ -1,68 +0,0 @@ -Metadata-Version: 2.4 -Name: cffi -Version: 2.0.0 -Summary: Foreign Function Interface for Python calling C code. 
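The deleted `brotli.py` shim simply builds a `Compressor` and concatenates `process()` with `finish()`. Brotli itself ships as a third-party wheel, so the same round-trip pattern is sketched here with the stdlib `zlib` as a stand-in (Brotli's `quality` plays roughly the role of zlib's compression level):

```python
import zlib

data = b"hello " * 100
packed = zlib.compress(data, 9)  # analogous to brotli.compress(data, quality=11)

# decompression must reproduce the input exactly
assert zlib.decompress(packed) == data
# highly repetitive input compresses well
assert len(packed) < len(data)
```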
-Author: Armin Rigo, Maciej Fijalkowski -Maintainer: Matt Davis, Matt Clay, Matti Picus -License-Expression: MIT -Project-URL: Documentation, https://cffi.readthedocs.io/ -Project-URL: Changelog, https://cffi.readthedocs.io/en/latest/whatsnew.html -Project-URL: Downloads, https://github.com/python-cffi/cffi/releases -Project-URL: Contact, https://groups.google.com/forum/#!forum/python-cffi -Project-URL: Source Code, https://github.com/python-cffi/cffi -Project-URL: Issue Tracker, https://github.com/python-cffi/cffi/issues -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3.9 -Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: 3.11 -Classifier: Programming Language :: Python :: 3.12 -Classifier: Programming Language :: Python :: 3.13 -Classifier: Programming Language :: Python :: 3.14 -Classifier: Programming Language :: Python :: Free Threading :: 2 - Beta -Classifier: Programming Language :: Python :: Implementation :: CPython -Requires-Python: >=3.9 -Description-Content-Type: text/markdown -License-File: LICENSE -License-File: AUTHORS -Requires-Dist: pycparser; implementation_name != "PyPy" -Dynamic: license-file - -[![GitHub Actions Status](https://github.com/python-cffi/cffi/actions/workflows/ci.yaml/badge.svg?branch=main)](https://github.com/python-cffi/cffi/actions/workflows/ci.yaml?query=branch%3Amain++) -[![PyPI version](https://img.shields.io/pypi/v/cffi.svg)](https://pypi.org/project/cffi) -[![Read the Docs](https://img.shields.io/badge/docs-latest-blue.svg)][Documentation] - - -CFFI -==== - -Foreign Function Interface for Python calling C code. - -Please see the [Documentation] or uncompiled in the `doc/` subdirectory. - -Download --------- - -[Download page](https://github.com/python-cffi/cffi/releases) - -Source Code ------------ - -Source code is publicly available on -[GitHub](https://github.com/python-cffi/cffi). 
- -Contact -------- - -[Mailing list](https://groups.google.com/forum/#!forum/python-cffi) - -Testing/development tips ------------------------- - -After `git clone` or `wget && tar`, we will get a directory called `cffi` or `cffi-x.x.x`. we call it `repo-directory`. To run tests under CPython, run the following in the `repo-directory`: - - pip install pytest - pip install -e . # editable install of CFFI for local development - pytest src/c/ testing/ - -[Documentation]: http://cffi.readthedocs.org/ diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/RECORD b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/RECORD deleted file mode 100644 index 6f822989..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/RECORD +++ /dev/null @@ -1,49 +0,0 @@ -_cffi_backend.cpython-312-x86_64-linux-gnu.so,sha256=AGLtw5fn9u4Cmwk3BbGlsXG7VZEvQekABMyEGuRZmcE,348808 -cffi-2.0.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 -cffi-2.0.0.dist-info/METADATA,sha256=uYzn40F68Im8EtXHNBLZs7FoPM-OxzyYbDWsjJvhujk,2559 -cffi-2.0.0.dist-info/RECORD,, -cffi-2.0.0.dist-info/WHEEL,sha256=aSgG0F4rGPZtV0iTEIfy6dtHq6g67Lze3uLfk0vWn88,151 -cffi-2.0.0.dist-info/entry_points.txt,sha256=y6jTxnyeuLnL-XJcDv8uML3n6wyYiGRg8MTp_QGJ9Ho,75 -cffi-2.0.0.dist-info/licenses/AUTHORS,sha256=KmemC7-zN1nWfWRf8TG45ta8TK_CMtdR_Kw-2k0xTMg,208 -cffi-2.0.0.dist-info/licenses/LICENSE,sha256=W6JN3FcGf5JJrdZEw6_EGl1tw34jQz73Wdld83Cwr2M,1123 -cffi-2.0.0.dist-info/top_level.txt,sha256=rE7WR3rZfNKxWI9-jn6hsHCAl7MDkB-FmuQbxWjFehQ,19 -cffi/__init__.py,sha256=-ksBQ7MfDzVvbBlV_ftYBWAmEqfA86ljIzMxzaZeAlI,511 -cffi/__pycache__/__init__.cpython-312.pyc,, -cffi/__pycache__/_imp_emulation.cpython-312.pyc,, -cffi/__pycache__/_shimmed_dist_utils.cpython-312.pyc,, -cffi/__pycache__/api.cpython-312.pyc,, -cffi/__pycache__/backend_ctypes.cpython-312.pyc,, -cffi/__pycache__/cffi_opcode.cpython-312.pyc,, -cffi/__pycache__/commontypes.cpython-312.pyc,, 
-cffi/__pycache__/cparser.cpython-312.pyc,, -cffi/__pycache__/error.cpython-312.pyc,, -cffi/__pycache__/ffiplatform.cpython-312.pyc,, -cffi/__pycache__/lock.cpython-312.pyc,, -cffi/__pycache__/model.cpython-312.pyc,, -cffi/__pycache__/pkgconfig.cpython-312.pyc,, -cffi/__pycache__/recompiler.cpython-312.pyc,, -cffi/__pycache__/setuptools_ext.cpython-312.pyc,, -cffi/__pycache__/vengine_cpy.cpython-312.pyc,, -cffi/__pycache__/vengine_gen.cpython-312.pyc,, -cffi/__pycache__/verifier.cpython-312.pyc,, -cffi/_cffi_errors.h,sha256=zQXt7uR_m8gUW-fI2hJg0KoSkJFwXv8RGUkEDZ177dQ,3908 -cffi/_cffi_include.h,sha256=Exhmgm9qzHWzWivjfTe0D7Xp4rPUkVxdNuwGhMTMzbw,15055 -cffi/_embedding.h,sha256=Ai33FHblE7XSpHOCp8kPcWwN5_9BV14OvN0JVa6ITpw,18786 -cffi/_imp_emulation.py,sha256=RxREG8zAbI2RPGBww90u_5fi8sWdahpdipOoPzkp7C0,2960 -cffi/_shimmed_dist_utils.py,sha256=Bjj2wm8yZbvFvWEx5AEfmqaqZyZFhYfoyLLQHkXZuao,2230 -cffi/api.py,sha256=alBv6hZQkjpmZplBphdaRn2lPO9-CORs_M7ixabvZWI,42169 -cffi/backend_ctypes.py,sha256=h5ZIzLc6BFVXnGyc9xPqZWUS7qGy7yFSDqXe68Sa8z4,42454 -cffi/cffi_opcode.py,sha256=JDV5l0R0_OadBX_uE7xPPTYtMdmpp8I9UYd6av7aiDU,5731 -cffi/commontypes.py,sha256=7N6zPtCFlvxXMWhHV08psUjdYIK2XgsN3yo5dgua_v4,2805 -cffi/cparser.py,sha256=QUTfmlL-aO-MYR8bFGlvAUHc36OQr7XYLe0WLkGFjRo,44790 -cffi/error.py,sha256=v6xTiS4U0kvDcy4h_BDRo5v39ZQuj-IMRYLv5ETddZs,877 -cffi/ffiplatform.py,sha256=avxFjdikYGJoEtmJO7ewVmwG_VEVl6EZ_WaNhZYCqv4,3584 -cffi/lock.py,sha256=l9TTdwMIMpi6jDkJGnQgE9cvTIR7CAntIJr8EGHt3pY,747 -cffi/model.py,sha256=W30UFQZE73jL5Mx5N81YT77us2W2iJjTm0XYfnwz1cg,21797 -cffi/parse_c_type.h,sha256=OdwQfwM9ktq6vlCB43exFQmxDBtj2MBNdK8LYl15tjw,5976 -cffi/pkgconfig.py,sha256=LP1w7vmWvmKwyqLaU1Z243FOWGNQMrgMUZrvgFuOlco,4374 -cffi/recompiler.py,sha256=78J6lMEEOygXNmjN9-fOFFO3j7eW-iFxSrxfvQb54bY,65509 -cffi/setuptools_ext.py,sha256=0rCwBJ1W7FHWtiMKfNXsSST88V8UXrui5oeXFlDNLG8,9411 -cffi/vengine_cpy.py,sha256=oyQKD23kpE0aChUKA8Jg0e723foPiYzLYEdb-J0MiNs,43881 
-cffi/vengine_gen.py,sha256=DUlEIrDiVin1Pnhn1sfoamnS5NLqfJcOdhRoeSNeJRg,26939 -cffi/verifier.py,sha256=oX8jpaohg2Qm3aHcznidAdvrVm5N4sQYG0a3Eo5mIl4,11182 diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/WHEEL b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/WHEEL deleted file mode 100644 index e21e9f2f..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/WHEEL +++ /dev/null @@ -1,6 +0,0 @@ -Wheel-Version: 1.0 -Generator: setuptools (80.9.0) -Root-Is-Purelib: false -Tag: cp312-cp312-manylinux_2_17_x86_64 -Tag: cp312-cp312-manylinux2014_x86_64 - diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/entry_points.txt b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/entry_points.txt deleted file mode 100644 index 4b0274f2..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/entry_points.txt +++ /dev/null @@ -1,2 +0,0 @@ -[distutils.setup_keywords] -cffi_modules = cffi.setuptools_ext:cffi_modules diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/AUTHORS b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/AUTHORS deleted file mode 100644 index 370a25d3..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/AUTHORS +++ /dev/null @@ -1,8 +0,0 @@ -This package has been mostly done by Armin Rigo with help from -Maciej FijaΕ‚kowski. The idea is heavily based (although not directly -copied) from LuaJIT ffi by Mike Pall. - - -Other contributors: - - Google Inc. 
diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/LICENSE b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/LICENSE deleted file mode 100644 index 0a1dbfb0..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/licenses/LICENSE +++ /dev/null @@ -1,23 +0,0 @@ - -Except when otherwise stated (look for LICENSE files in directories or -information at the beginning of each file) all software and -documentation is licensed as follows: - - MIT No Attribution - - Permission is hereby granted, free of charge, to any person - obtaining a copy of this software and associated documentation - files (the "Software"), to deal in the Software without - restriction, including without limitation the rights to use, - copy, modify, merge, publish, distribute, sublicense, and/or - sell copies of the Software, and to permit persons to whom the - Software is furnished to do so. - - THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS - OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, - FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL - THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER - LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING - FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER - DEALINGS IN THE SOFTWARE. 
- diff --git a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/top_level.txt b/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/top_level.txt deleted file mode 100644 index f6457795..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi-2.0.0.dist-info/top_level.txt +++ /dev/null @@ -1,2 +0,0 @@ -_cffi_backend -cffi diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__init__.py b/pptx-env/lib/python3.12/site-packages/cffi/__init__.py deleted file mode 100644 index c99ec3d4..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -__all__ = ['FFI', 'VerificationError', 'VerificationMissing', 'CDefError', - 'FFIError'] - -from .api import FFI -from .error import CDefError, FFIError, VerificationError, VerificationMissing -from .error import PkgConfigError - -__version__ = "2.0.0" -__version_info__ = (2, 0, 0) - -# The verifier module file names are based on the CRC32 of a string that -# contains the following version number. It may be older than __version__ -# if nothing is clearly incompatible. 
-__version_verifier_modules__ = "0.8.6" diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index c5ae3305..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_imp_emulation.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_imp_emulation.cpython-312.pyc deleted file mode 100644 index 07886cd6..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_imp_emulation.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_shimmed_dist_utils.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_shimmed_dist_utils.cpython-312.pyc deleted file mode 100644 index 43591ce0..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/_shimmed_dist_utils.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/api.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/api.cpython-312.pyc deleted file mode 100644 index b43ada14..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/api.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/backend_ctypes.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/backend_ctypes.cpython-312.pyc deleted file mode 100644 index 52a18176..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/backend_ctypes.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cffi_opcode.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cffi_opcode.cpython-312.pyc deleted file mode 
100644 index 2d3692c7..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cffi_opcode.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/commontypes.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/commontypes.cpython-312.pyc deleted file mode 100644 index c6d0fb7b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/commontypes.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cparser.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cparser.cpython-312.pyc deleted file mode 100644 index 42abb687..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/cparser.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/error.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/error.cpython-312.pyc deleted file mode 100644 index e96ef04a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/error.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/ffiplatform.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/ffiplatform.cpython-312.pyc deleted file mode 100644 index 419f296b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/ffiplatform.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/lock.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/lock.cpython-312.pyc deleted file mode 100644 index 9cdb2096..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/lock.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/model.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/model.cpython-312.pyc deleted file mode 100644 index cf937312..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/model.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/pkgconfig.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/pkgconfig.cpython-312.pyc deleted file mode 100644 index ead4719a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/pkgconfig.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/recompiler.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/recompiler.cpython-312.pyc deleted file mode 100644 index a6c6c557..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/recompiler.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/setuptools_ext.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/setuptools_ext.cpython-312.pyc deleted file mode 100644 index 77f9000a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/setuptools_ext.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_cpy.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_cpy.cpython-312.pyc deleted file mode 100644 index a53fe22e..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_cpy.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_gen.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_gen.cpython-312.pyc deleted file mode 100644 index b5bb6732..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/vengine_gen.cpython-312.pyc and /dev/null differ diff 
--git a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/verifier.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/verifier.cpython-312.pyc deleted file mode 100644 index ea35be27..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cffi/__pycache__/verifier.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cffi/_cffi_errors.h b/pptx-env/lib/python3.12/site-packages/cffi/_cffi_errors.h deleted file mode 100644 index 158e0590..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/_cffi_errors.h +++ /dev/null @@ -1,149 +0,0 @@ -#ifndef CFFI_MESSAGEBOX -# ifdef _MSC_VER -# define CFFI_MESSAGEBOX 1 -# else -# define CFFI_MESSAGEBOX 0 -# endif -#endif - - -#if CFFI_MESSAGEBOX -/* Windows only: logic to take the Python-CFFI embedding logic - initialization errors and display them in a background thread - with MessageBox. The idea is that if the whole program closes - as a result of this problem, then likely it is already a console - program and you can read the stderr output in the console too. - If it is not a console program, then it will likely show its own - dialog to complain, or generally not abruptly close, and for this - case the background thread should stay alive. 
-*/ -static void *volatile _cffi_bootstrap_text; - -static PyObject *_cffi_start_error_capture(void) -{ - PyObject *result = NULL; - PyObject *x, *m, *bi; - - if (InterlockedCompareExchangePointer(&_cffi_bootstrap_text, - (void *)1, NULL) != NULL) - return (PyObject *)1; - - m = PyImport_AddModule("_cffi_error_capture"); - if (m == NULL) - goto error; - - result = PyModule_GetDict(m); - if (result == NULL) - goto error; - -#if PY_MAJOR_VERSION >= 3 - bi = PyImport_ImportModule("builtins"); -#else - bi = PyImport_ImportModule("__builtin__"); -#endif - if (bi == NULL) - goto error; - PyDict_SetItemString(result, "__builtins__", bi); - Py_DECREF(bi); - - x = PyRun_String( - "import sys\n" - "class FileLike:\n" - " def write(self, x):\n" - " try:\n" - " of.write(x)\n" - " except: pass\n" - " self.buf += x\n" - " def flush(self):\n" - " pass\n" - "fl = FileLike()\n" - "fl.buf = ''\n" - "of = sys.stderr\n" - "sys.stderr = fl\n" - "def done():\n" - " sys.stderr = of\n" - " return fl.buf\n", /* make sure the returned value stays alive */ - Py_file_input, - result, result); - Py_XDECREF(x); - - error: - if (PyErr_Occurred()) - { - PyErr_WriteUnraisable(Py_None); - PyErr_Clear(); - } - return result; -} - -#pragma comment(lib, "user32.lib") - -static DWORD WINAPI _cffi_bootstrap_dialog(LPVOID ignored) -{ - Sleep(666); /* may be interrupted if the whole process is closing */ -#if PY_MAJOR_VERSION >= 3 - MessageBoxW(NULL, (wchar_t *)_cffi_bootstrap_text, - L"Python-CFFI error", - MB_OK | MB_ICONERROR); -#else - MessageBoxA(NULL, (char *)_cffi_bootstrap_text, - "Python-CFFI error", - MB_OK | MB_ICONERROR); -#endif - _cffi_bootstrap_text = NULL; - return 0; -} - -static void _cffi_stop_error_capture(PyObject *ecap) -{ - PyObject *s; - void *text; - - if (ecap == (PyObject *)1) - return; - - if (ecap == NULL) - goto error; - - s = PyRun_String("done()", Py_eval_input, ecap, ecap); - if (s == NULL) - goto error; - - /* Show a dialog box, but in a background thread, and - never 
show multiple dialog boxes at once. */ -#if PY_MAJOR_VERSION >= 3 - text = PyUnicode_AsWideCharString(s, NULL); -#else - text = PyString_AsString(s); -#endif - - _cffi_bootstrap_text = text; - - if (text != NULL) - { - HANDLE h; - h = CreateThread(NULL, 0, _cffi_bootstrap_dialog, - NULL, 0, NULL); - if (h != NULL) - CloseHandle(h); - } - /* decref the string, but it should stay alive as 'fl.buf' - in the small module above. It will really be freed only if - we later get another similar error. So it's a leak of at - most one copy of the small module. That's fine for this - situation which is usually a "fatal error" anyway. */ - Py_DECREF(s); - PyErr_Clear(); - return; - - error: - _cffi_bootstrap_text = NULL; - PyErr_Clear(); -} - -#else - -static PyObject *_cffi_start_error_capture(void) { return NULL; } -static void _cffi_stop_error_capture(PyObject *ecap) { } - -#endif diff --git a/pptx-env/lib/python3.12/site-packages/cffi/_cffi_include.h b/pptx-env/lib/python3.12/site-packages/cffi/_cffi_include.h deleted file mode 100644 index 908a1d73..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/_cffi_include.h +++ /dev/null @@ -1,389 +0,0 @@ -#define _CFFI_ - -/* We try to define Py_LIMITED_API before including Python.h. - - Mess: we can only define it if Py_DEBUG, Py_TRACE_REFS and - Py_REF_DEBUG are not defined. This is a best-effort approximation: - we can learn about Py_DEBUG from pyconfig.h, but it is unclear if - the same works for the other two macros. Py_DEBUG implies them, - but not the other way around. - - The implementation is messy (issue #350): on Windows, with _MSC_VER, - we have to define Py_LIMITED_API even before including pyconfig.h. - In that case, we guess what pyconfig.h will do to the macros above, - and check our guess after the #include. - - Note that on Windows, with CPython 3.x, you need >= 3.5 and virtualenv - version >= 16.0.0. With older versions of either, you don't get a - copy of PYTHON3.DLL in the virtualenv. 
We can't check the version of - CPython *before* we even include pyconfig.h. ffi.set_source() puts - a ``#define _CFFI_NO_LIMITED_API'' at the start of this file if it is - running on Windows < 3.5, as an attempt at fixing it, but that's - arguably wrong because it may not be the target version of Python. - Still better than nothing I guess. As another workaround, you can - remove the definition of Py_LIMITED_API here. - - See also 'py_limited_api' in cffi/setuptools_ext.py. -*/ -#if !defined(_CFFI_USE_EMBEDDING) && !defined(Py_LIMITED_API) -# ifdef _MSC_VER -# if !defined(_DEBUG) && !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) -# define Py_LIMITED_API -# endif -# include - /* sanity-check: Py_LIMITED_API will cause crashes if any of these - are also defined. Normally, the Python file PC/pyconfig.h does not - cause any of these to be defined, with the exception that _DEBUG - causes Py_DEBUG. Double-check that. */ -# ifdef Py_LIMITED_API -# if defined(Py_DEBUG) -# error "pyconfig.h unexpectedly defines Py_DEBUG, but Py_LIMITED_API is set" -# endif -# if defined(Py_TRACE_REFS) -# error "pyconfig.h unexpectedly defines Py_TRACE_REFS, but Py_LIMITED_API is set" -# endif -# if defined(Py_REF_DEBUG) -# error "pyconfig.h unexpectedly defines Py_REF_DEBUG, but Py_LIMITED_API is set" -# endif -# endif -# else -# include -# if !defined(Py_DEBUG) && !defined(Py_TRACE_REFS) && !defined(Py_REF_DEBUG) && !defined(_CFFI_NO_LIMITED_API) -# define Py_LIMITED_API -# endif -# endif -#endif - -#include -#ifdef __cplusplus -extern "C" { -#endif -#include -#include "parse_c_type.h" - -/* this block of #ifs should be kept exactly identical between - c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py - and cffi/_cffi_include.h */ -#if defined(_MSC_VER) -# include /* for alloca() */ -# if _MSC_VER < 1600 /* MSVC < 2010 */ - typedef __int8 int8_t; - typedef __int16 int16_t; - typedef __int32 int32_t; - typedef __int64 
int64_t; - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - typedef unsigned __int64 uint64_t; - typedef __int8 int_least8_t; - typedef __int16 int_least16_t; - typedef __int32 int_least32_t; - typedef __int64 int_least64_t; - typedef unsigned __int8 uint_least8_t; - typedef unsigned __int16 uint_least16_t; - typedef unsigned __int32 uint_least32_t; - typedef unsigned __int64 uint_least64_t; - typedef __int8 int_fast8_t; - typedef __int16 int_fast16_t; - typedef __int32 int_fast32_t; - typedef __int64 int_fast64_t; - typedef unsigned __int8 uint_fast8_t; - typedef unsigned __int16 uint_fast16_t; - typedef unsigned __int32 uint_fast32_t; - typedef unsigned __int64 uint_fast64_t; - typedef __int64 intmax_t; - typedef unsigned __int64 uintmax_t; -# else -# include <stdint.h> -# endif -# if _MSC_VER < 1800 /* MSVC < 2013 */ -# ifndef __cplusplus - typedef unsigned char _Bool; -# endif -# endif -# define _cffi_float_complex_t _Fcomplex /* include <complex.h> for it */ -# define _cffi_double_complex_t _Dcomplex /* include <complex.h> for it */ -#else -# include <stdint.h> -# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) -# include <alloca.h> -# endif -# define _cffi_float_complex_t float _Complex -# define _cffi_double_complex_t double _Complex -#endif - -#ifdef __GNUC__ -# define _CFFI_UNUSED_FN __attribute__((unused)) -#else -# define _CFFI_UNUSED_FN /* nothing */ -#endif - -#ifdef __cplusplus -# ifndef _Bool - typedef bool _Bool; /* semi-hackish: C++ has no _Bool; bool is builtin */ -# endif -#endif - -/********** CPython-specific section **********/ -#ifndef PYPY_VERSION - - -#if PY_MAJOR_VERSION >= 3 -# define PyInt_FromLong PyLong_FromLong -#endif - -#define _cffi_from_c_double PyFloat_FromDouble -#define _cffi_from_c_float PyFloat_FromDouble -#define _cffi_from_c_long PyInt_FromLong -#define _cffi_from_c_ulong PyLong_FromUnsignedLong -#define _cffi_from_c_longlong PyLong_FromLongLong -#define _cffi_from_c_ulonglong 
PyLong_FromUnsignedLongLong -#define _cffi_from_c__Bool PyBool_FromLong - -#define _cffi_to_c_double PyFloat_AsDouble -#define _cffi_to_c_float PyFloat_AsDouble - -#define _cffi_from_c_int(x, type) \ - (((type)-1) > 0 ? /* unsigned */ \ - (sizeof(type) < sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - sizeof(type) == sizeof(long) ? \ - PyLong_FromUnsignedLong((unsigned long)x) : \ - PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ - (sizeof(type) <= sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - PyLong_FromLongLong((long long)x))) - -#define _cffi_to_c_int(o, type) \ - ((type)( \ - sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ - : (type)_cffi_to_c_i8(o)) : \ - sizeof(type) == 2 ? (((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ - : (type)_cffi_to_c_i16(o)) : \ - sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ - : (type)_cffi_to_c_i32(o)) : \ - sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ - : (type)_cffi_to_c_i64(o)) : \ - (Py_FatalError("unsupported size for type " #type), (type)0))) - -#define _cffi_to_c_i8 \ - ((int(*)(PyObject *))_cffi_exports[1]) -#define _cffi_to_c_u8 \ - ((int(*)(PyObject *))_cffi_exports[2]) -#define _cffi_to_c_i16 \ - ((int(*)(PyObject *))_cffi_exports[3]) -#define _cffi_to_c_u16 \ - ((int(*)(PyObject *))_cffi_exports[4]) -#define _cffi_to_c_i32 \ - ((int(*)(PyObject *))_cffi_exports[5]) -#define _cffi_to_c_u32 \ - ((unsigned int(*)(PyObject *))_cffi_exports[6]) -#define _cffi_to_c_i64 \ - ((long long(*)(PyObject *))_cffi_exports[7]) -#define _cffi_to_c_u64 \ - ((unsigned long long(*)(PyObject *))_cffi_exports[8]) -#define _cffi_to_c_char \ - ((int(*)(PyObject *))_cffi_exports[9]) -#define _cffi_from_c_pointer \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[10]) -#define _cffi_to_c_pointer \ - ((char *(*)(PyObject *, struct _cffi_ctypedescr *))_cffi_exports[11]) -#define _cffi_get_struct_layout \ - not used any more -#define _cffi_restore_errno \ - 
((void(*)(void))_cffi_exports[13]) -#define _cffi_save_errno \ - ((void(*)(void))_cffi_exports[14]) -#define _cffi_from_c_char \ - ((PyObject *(*)(char))_cffi_exports[15]) -#define _cffi_from_c_deref \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[16]) -#define _cffi_to_c \ - ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[17]) -#define _cffi_from_c_struct \ - ((PyObject *(*)(char *, struct _cffi_ctypedescr *))_cffi_exports[18]) -#define _cffi_to_c_wchar_t \ - ((_cffi_wchar_t(*)(PyObject *))_cffi_exports[19]) -#define _cffi_from_c_wchar_t \ - ((PyObject *(*)(_cffi_wchar_t))_cffi_exports[20]) -#define _cffi_to_c_long_double \ - ((long double(*)(PyObject *))_cffi_exports[21]) -#define _cffi_to_c__Bool \ - ((_Bool(*)(PyObject *))_cffi_exports[22]) -#define _cffi_prepare_pointer_call_argument \ - ((Py_ssize_t(*)(struct _cffi_ctypedescr *, \ - PyObject *, char **))_cffi_exports[23]) -#define _cffi_convert_array_from_object \ - ((int(*)(char *, struct _cffi_ctypedescr *, PyObject *))_cffi_exports[24]) -#define _CFFI_CPIDX 25 -#define _cffi_call_python \ - ((void(*)(struct _cffi_externpy_s *, char *))_cffi_exports[_CFFI_CPIDX]) -#define _cffi_to_c_wchar3216_t \ - ((int(*)(PyObject *))_cffi_exports[26]) -#define _cffi_from_c_wchar3216_t \ - ((PyObject *(*)(int))_cffi_exports[27]) -#define _CFFI_NUM_EXPORTS 28 - -struct _cffi_ctypedescr; - -static void *_cffi_exports[_CFFI_NUM_EXPORTS]; - -#define _cffi_type(index) ( \ - assert((((uintptr_t)_cffi_types[index]) & 1) == 0), \ - (struct _cffi_ctypedescr *)_cffi_types[index]) - -static PyObject *_cffi_init(const char *module_name, Py_ssize_t version, - const struct _cffi_type_context_s *ctx) -{ - PyObject *module, *o_arg, *new_module; - void *raw[] = { - (void *)module_name, - (void *)version, - (void *)_cffi_exports, - (void *)ctx, - }; - - module = PyImport_ImportModule("_cffi_backend"); - if (module == NULL) - goto failure; - - o_arg = PyLong_FromVoidPtr((void *)raw); - if (o_arg == 
NULL) - goto failure; - - new_module = PyObject_CallMethod( - module, (char *)"_init_cffi_1_0_external_module", (char *)"O", o_arg); - - Py_DECREF(o_arg); - Py_DECREF(module); - return new_module; - - failure: - Py_XDECREF(module); - return NULL; -} - - -#ifdef HAVE_WCHAR_H -typedef wchar_t _cffi_wchar_t; -#else -typedef uint16_t _cffi_wchar_t; /* same random pick as _cffi_backend.c */ -#endif - -_CFFI_UNUSED_FN static uint16_t _cffi_to_c_char16_t(PyObject *o) -{ - if (sizeof(_cffi_wchar_t) == 2) - return (uint16_t)_cffi_to_c_wchar_t(o); - else - return (uint16_t)_cffi_to_c_wchar3216_t(o); -} - -_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char16_t(uint16_t x) -{ - if (sizeof(_cffi_wchar_t) == 2) - return _cffi_from_c_wchar_t((_cffi_wchar_t)x); - else - return _cffi_from_c_wchar3216_t((int)x); -} - -_CFFI_UNUSED_FN static int _cffi_to_c_char32_t(PyObject *o) -{ - if (sizeof(_cffi_wchar_t) == 4) - return (int)_cffi_to_c_wchar_t(o); - else - return (int)_cffi_to_c_wchar3216_t(o); -} - -_CFFI_UNUSED_FN static PyObject *_cffi_from_c_char32_t(unsigned int x) -{ - if (sizeof(_cffi_wchar_t) == 4) - return _cffi_from_c_wchar_t((_cffi_wchar_t)x); - else - return _cffi_from_c_wchar3216_t((int)x); -} - -union _cffi_union_alignment_u { - unsigned char m_char; - unsigned short m_short; - unsigned int m_int; - unsigned long m_long; - unsigned long long m_longlong; - float m_float; - double m_double; - long double m_longdouble; -}; - -struct _cffi_freeme_s { - struct _cffi_freeme_s *next; - union _cffi_union_alignment_u alignment; -}; - -_CFFI_UNUSED_FN static int -_cffi_convert_array_argument(struct _cffi_ctypedescr *ctptr, PyObject *arg, - char **output_data, Py_ssize_t datasize, - struct _cffi_freeme_s **freeme) -{ - char *p; - if (datasize < 0) - return -1; - - p = *output_data; - if (p == NULL) { - struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( - offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); - if (fp == NULL) - return -1; - fp->next 
= *freeme; - *freeme = fp; - p = *output_data = (char *)&fp->alignment; - } - memset((void *)p, 0, (size_t)datasize); - return _cffi_convert_array_from_object(p, ctptr, arg); -} - -_CFFI_UNUSED_FN static void -_cffi_free_array_arguments(struct _cffi_freeme_s *freeme) -{ - do { - void *p = (void *)freeme; - freeme = freeme->next; - PyObject_Free(p); - } while (freeme != NULL); -} - -/********** end CPython-specific section **********/ -#else -_CFFI_UNUSED_FN -static void (*_cffi_call_python_org)(struct _cffi_externpy_s *, char *); -# define _cffi_call_python _cffi_call_python_org -#endif - - -#define _cffi_array_len(array) (sizeof(array) / sizeof((array)[0])) - -#define _cffi_prim_int(size, sign) \ - ((size) == 1 ? ((sign) ? _CFFI_PRIM_INT8 : _CFFI_PRIM_UINT8) : \ - (size) == 2 ? ((sign) ? _CFFI_PRIM_INT16 : _CFFI_PRIM_UINT16) : \ - (size) == 4 ? ((sign) ? _CFFI_PRIM_INT32 : _CFFI_PRIM_UINT32) : \ - (size) == 8 ? ((sign) ? _CFFI_PRIM_INT64 : _CFFI_PRIM_UINT64) : \ - _CFFI__UNKNOWN_PRIM) - -#define _cffi_prim_float(size) \ - ((size) == sizeof(float) ? _CFFI_PRIM_FLOAT : \ - (size) == sizeof(double) ? _CFFI_PRIM_DOUBLE : \ - (size) == sizeof(long double) ? 
_CFFI__UNKNOWN_LONG_DOUBLE : \ - _CFFI__UNKNOWN_FLOAT_PRIM) - -#define _cffi_check_int(got, got_nonpos, expected) \ - ((got_nonpos) == (expected <= 0) && \ - (got) == (unsigned long long)expected) - -#ifdef MS_WIN32 -# define _cffi_stdcall __stdcall -#else -# define _cffi_stdcall /* nothing */ -#endif - -#ifdef __cplusplus -} -#endif diff --git a/pptx-env/lib/python3.12/site-packages/cffi/_embedding.h b/pptx-env/lib/python3.12/site-packages/cffi/_embedding.h deleted file mode 100644 index 64c04f67..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/_embedding.h +++ /dev/null @@ -1,550 +0,0 @@ - -/***** Support code for embedding *****/ - -#ifdef __cplusplus -extern "C" { -#endif - - -#if defined(_WIN32) -# define CFFI_DLLEXPORT __declspec(dllexport) -#elif defined(__GNUC__) -# define CFFI_DLLEXPORT __attribute__((visibility("default"))) -#else -# define CFFI_DLLEXPORT /* nothing */ -#endif - - -/* There are two global variables of type _cffi_call_python_fnptr: - - * _cffi_call_python, which we declare just below, is the one called - by ``extern "Python"`` implementations. - - * _cffi_call_python_org, which on CPython is actually part of the - _cffi_exports[] array, is the function pointer copied from - _cffi_backend. If _cffi_start_python() fails, then this is set - to NULL; otherwise, it should never be NULL. - - After initialization is complete, both are equal. However, the - first one remains equal to &_cffi_start_and_call_python until the - very end of initialization, when we are (or should be) sure that - concurrent threads also see a completely initialized world, and - only then is it changed. 
-*/ -#undef _cffi_call_python -typedef void (*_cffi_call_python_fnptr)(struct _cffi_externpy_s *, char *); -static void _cffi_start_and_call_python(struct _cffi_externpy_s *, char *); -static _cffi_call_python_fnptr _cffi_call_python = &_cffi_start_and_call_python; - - -#ifndef _MSC_VER - /* --- Assuming a GCC not infinitely old --- */ -# define cffi_compare_and_swap(l,o,n) __sync_bool_compare_and_swap(l,o,n) -# define cffi_write_barrier() __sync_synchronize() -# if !defined(__amd64__) && !defined(__x86_64__) && \ - !defined(__i386__) && !defined(__i386) -# define cffi_read_barrier() __sync_synchronize() -# else -# define cffi_read_barrier() (void)0 -# endif -#else - /* --- Windows threads version --- */ -# include <windows.h> -# define cffi_compare_and_swap(l,o,n) \ - (InterlockedCompareExchangePointer(l,n,o) == (o)) -# define cffi_write_barrier() InterlockedCompareExchange(&_cffi_dummy,0,0) -# define cffi_read_barrier() (void)0 -static volatile LONG _cffi_dummy; -#endif - -#ifdef WITH_THREAD -# ifndef _MSC_VER -# include <pthread.h> - static pthread_mutex_t _cffi_embed_startup_lock; -# else - static CRITICAL_SECTION _cffi_embed_startup_lock; -# endif - static char _cffi_embed_startup_lock_ready = 0; -#endif - -static void _cffi_acquire_reentrant_mutex(void) -{ - static void *volatile lock = NULL; - - while (!cffi_compare_and_swap(&lock, NULL, (void *)1)) { - /* should ideally do a spin loop instruction here, but - hard to do it portably and doesn't really matter I - think: pthread_mutex_init() should be very fast, and - this is only run at start-up anyway. 
*/ - } - -#ifdef WITH_THREAD - if (!_cffi_embed_startup_lock_ready) { -# ifndef _MSC_VER - pthread_mutexattr_t attr; - pthread_mutexattr_init(&attr); - pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); - pthread_mutex_init(&_cffi_embed_startup_lock, &attr); -# else - InitializeCriticalSection(&_cffi_embed_startup_lock); -# endif - _cffi_embed_startup_lock_ready = 1; - } -#endif - - while (!cffi_compare_and_swap(&lock, (void *)1, NULL)) - ; - -#ifndef _MSC_VER - pthread_mutex_lock(&_cffi_embed_startup_lock); -#else - EnterCriticalSection(&_cffi_embed_startup_lock); -#endif -} - -static void _cffi_release_reentrant_mutex(void) -{ -#ifndef _MSC_VER - pthread_mutex_unlock(&_cffi_embed_startup_lock); -#else - LeaveCriticalSection(&_cffi_embed_startup_lock); -#endif -} - - -/********** CPython-specific section **********/ -#ifndef PYPY_VERSION - -#include "_cffi_errors.h" - - -#define _cffi_call_python_org _cffi_exports[_CFFI_CPIDX] - -PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(void); /* forward */ - -static void _cffi_py_initialize(void) -{ - /* XXX use initsigs=0, which "skips initialization registration of - signal handlers, which might be useful when Python is - embedded" according to the Python docs. But review and think - if it should be a user-controllable setting. - - XXX we should also give a way to write errors to a buffer - instead of to stderr. - - XXX if importing 'site' fails, CPython (any version) calls - exit(). Should we try to work around this behavior here? - */ - Py_InitializeEx(0); -} - -static int _cffi_initialize_python(void) -{ - /* This initializes Python, imports _cffi_backend, and then the - present .dll/.so is set up as a CPython C extension module. - */ - int result; - PyGILState_STATE state; - PyObject *pycode=NULL, *global_dict=NULL, *x; - PyObject *builtins; - - state = PyGILState_Ensure(); - - /* Call the initxxx() function from the present module. 
It will - create and initialize us as a CPython extension module, instead - of letting the startup Python code do it---it might reimport - the same .dll/.so and get maybe confused on some platforms. - It might also have troubles locating the .dll/.so again for all - I know. - */ - (void)_CFFI_PYTHON_STARTUP_FUNC(); - if (PyErr_Occurred()) - goto error; - - /* Now run the Python code provided to ffi.embedding_init_code(). - */ - pycode = Py_CompileString(_CFFI_PYTHON_STARTUP_CODE, - "<init code for '" _CFFI_MODULE_NAME "'>", - Py_file_input); - if (pycode == NULL) - goto error; - global_dict = PyDict_New(); - if (global_dict == NULL) - goto error; - builtins = PyEval_GetBuiltins(); - if (builtins == NULL) - goto error; - if (PyDict_SetItemString(global_dict, "__builtins__", builtins) < 0) - goto error; - x = PyEval_EvalCode( -#if PY_MAJOR_VERSION < 3 - (PyCodeObject *) -#endif - pycode, global_dict, global_dict); - if (x == NULL) - goto error; - Py_DECREF(x); - - /* Done! Now if we've been called from - _cffi_start_and_call_python() in an ``extern "Python"``, we can - only hope that the Python code did correctly set up the - corresponding @ffi.def_extern() function. Otherwise, the - general logic of ``extern "Python"`` functions (inside the - _cffi_backend module) will find that the reference is still - missing and print an error. - */ - result = 0; - done: - Py_XDECREF(pycode); - Py_XDECREF(global_dict); - PyGILState_Release(state); - return result; - - error:; - { - /* Print as much information as potentially useful. 
- Debugging load-time failures with embedding is not fun - */ - PyObject *ecap; - PyObject *exception, *v, *tb, *f, *modules, *mod; - PyErr_Fetch(&exception, &v, &tb); - ecap = _cffi_start_error_capture(); - f = PySys_GetObject((char *)"stderr"); - if (f != NULL && f != Py_None) { - PyFile_WriteString( - "Failed to initialize the Python-CFFI embedding logic:\n\n", f); - } - - if (exception != NULL) { - PyErr_NormalizeException(&exception, &v, &tb); - PyErr_Display(exception, v, tb); - } - Py_XDECREF(exception); - Py_XDECREF(v); - Py_XDECREF(tb); - - if (f != NULL && f != Py_None) { - PyFile_WriteString("\nFrom: " _CFFI_MODULE_NAME - "\ncompiled with cffi version: 2.0.0" - "\n_cffi_backend module: ", f); - modules = PyImport_GetModuleDict(); - mod = PyDict_GetItemString(modules, "_cffi_backend"); - if (mod == NULL) { - PyFile_WriteString("not loaded", f); - } - else { - v = PyObject_GetAttrString(mod, "__file__"); - PyFile_WriteObject(v, f, 0); - Py_XDECREF(v); - } - PyFile_WriteString("\nsys.path: ", f); - PyFile_WriteObject(PySys_GetObject((char *)"path"), f, 0); - PyFile_WriteString("\n\n", f); - } - _cffi_stop_error_capture(ecap); - } - result = -1; - goto done; -} - -#if PY_VERSION_HEX < 0x03080000 -PyAPI_DATA(char *) _PyParser_TokenNames[]; /* from CPython */ -#endif - -static int _cffi_carefully_make_gil(void) -{ - /* This does the basic initialization of Python. It can be called - completely concurrently from unrelated threads. It assumes - that we don't hold the GIL before (if it exists), and we don't - hold it afterwards. - - (What it really does used to be completely different in Python 2 - and Python 3, with the Python 2 solution avoiding the spin-lock - around the Py_InitializeEx() call. However, after recent changes - to CPython 2.7 (issue #358) it no longer works. So we use the - Python 3 solution everywhere.) - - This initializes Python by calling Py_InitializeEx(). - Important: this must not be called concurrently at all. 
- So we use a global variable as a simple spin lock. This global - variable must be from 'libpythonX.Y.so', not from this - cffi-based extension module, because it must be shared from - different cffi-based extension modules. - - In Python < 3.8, we choose - _PyParser_TokenNames[0] as a completely arbitrary pointer value - that is never written to. The default is to point to the - string "ENDMARKER". We change it temporarily to point to the - next character in that string. (Yes, I know it's REALLY - obscure.) - - In Python >= 3.8, this string array is no longer writable, so - instead we pick PyCapsuleType.tp_version_tag. We can't change - Python < 3.8 because someone might use a mixture of cffi - embedded modules, some of which were compiled before this file - changed. - - In Python >= 3.12, this stopped working because that particular - tp_version_tag gets modified during interpreter startup. It's - arguably a bad idea before 3.12 too, but again we can't change - that because someone might use a mixture of cffi embedded - modules, and no-one reported a bug so far. In Python >= 3.12 - we go instead for PyCapsuleType.tp_as_buffer, which is supposed - to always be NULL. We write to it temporarily a pointer to - a struct full of NULLs, which is semantically the same. - */ - -#ifdef WITH_THREAD -# if PY_VERSION_HEX < 0x03080000 - char *volatile *lock = (char *volatile *)_PyParser_TokenNames; - char *old_value, *locked_value; - - while (1) { /* spin loop */ - old_value = *lock; - locked_value = old_value + 1; - if (old_value[0] == 'E') { - assert(old_value[1] == 'N'); - if (cffi_compare_and_swap(lock, old_value, locked_value)) - break; - } - else { - assert(old_value[0] == 'N'); - /* should ideally do a spin loop instruction here, but - hard to do it portably and doesn't really matter I - think: PyEval_InitThreads() should be very fast, and - this is only run at start-up anyway. 
*/ - } - } -# else -# if PY_VERSION_HEX < 0x030C0000 - int volatile *lock = (int volatile *)&PyCapsule_Type.tp_version_tag; - int old_value, locked_value = -42; - assert(!(PyCapsule_Type.tp_flags & Py_TPFLAGS_HAVE_VERSION_TAG)); -# else - static struct ebp_s { PyBufferProcs buf; int mark; } empty_buffer_procs; - empty_buffer_procs.mark = -42; - PyBufferProcs *volatile *lock = (PyBufferProcs *volatile *) - &PyCapsule_Type.tp_as_buffer; - PyBufferProcs *old_value, *locked_value = &empty_buffer_procs.buf; -# endif - - while (1) { /* spin loop */ - old_value = *lock; - if (old_value == 0) { - if (cffi_compare_and_swap(lock, old_value, locked_value)) - break; - } - else { -# if PY_VERSION_HEX < 0x030C0000 - assert(old_value == locked_value); -# else - /* The pointer should point to a possibly different - empty_buffer_procs from another C extension module */ - assert(((struct ebp_s *)old_value)->mark == -42); -# endif - /* should ideally do a spin loop instruction here, but - hard to do it portably and doesn't really matter I - think: PyEval_InitThreads() should be very fast, and - this is only run at start-up anyway. */ - } - } -# endif -#endif - - /* call Py_InitializeEx() */ - if (!Py_IsInitialized()) { - _cffi_py_initialize(); -#if PY_VERSION_HEX < 0x03070000 - PyEval_InitThreads(); -#endif - PyEval_SaveThread(); /* release the GIL */ - /* the returned tstate must be the one that has been stored into the - autoTLSkey by _PyGILState_Init() called from Py_Initialize(). 
*/ - } - else { -#if PY_VERSION_HEX < 0x03070000 - /* PyEval_InitThreads() is always a no-op from CPython 3.7 */ - PyGILState_STATE state = PyGILState_Ensure(); - PyEval_InitThreads(); - PyGILState_Release(state); -#endif - } - -#ifdef WITH_THREAD - /* release the lock */ - while (!cffi_compare_and_swap(lock, locked_value, old_value)) - ; -#endif - - return 0; -} - -/********** end CPython-specific section **********/ - - -#else - - -/********** PyPy-specific section **********/ - -PyMODINIT_FUNC _CFFI_PYTHON_STARTUP_FUNC(const void *[]); /* forward */ - -static struct _cffi_pypy_init_s { - const char *name; - void *func; /* function pointer */ - const char *code; -} _cffi_pypy_init = { - _CFFI_MODULE_NAME, - _CFFI_PYTHON_STARTUP_FUNC, - _CFFI_PYTHON_STARTUP_CODE, -}; - -extern int pypy_carefully_make_gil(const char *); -extern int pypy_init_embedded_cffi_module(int, struct _cffi_pypy_init_s *); - -static int _cffi_carefully_make_gil(void) -{ - return pypy_carefully_make_gil(_CFFI_MODULE_NAME); -} - -static int _cffi_initialize_python(void) -{ - return pypy_init_embedded_cffi_module(0xB011, &_cffi_pypy_init); -} - -/********** end PyPy-specific section **********/ - - -#endif - - -#ifdef __GNUC__ -__attribute__((noinline)) -#endif -static _cffi_call_python_fnptr _cffi_start_python(void) -{ - /* Delicate logic to initialize Python. This function can be - called multiple times concurrently, e.g. when the process calls - its first ``extern "Python"`` functions in multiple threads at - once. It can also be called recursively, in which case we must - ignore it. We also have to consider what occurs if several - different cffi-based extensions reach this code in parallel - threads---it is a different copy of the code, then, and we - can't have any shared global variable unless it comes from - 'libpythonX.Y.so'. - - Idea: - - * _cffi_carefully_make_gil(): "carefully" call - PyEval_InitThreads() (possibly with Py_InitializeEx() first). 
- - * then we use a (local) custom lock to make sure that a call to this - cffi-based extension will wait if another call to the *same* - extension is running the initialization in another thread. - It is reentrant, so that a recursive call will not block, but - only one from a different thread. - - * then we grab the GIL and (Python 2) we call Py_InitializeEx(). - At this point, concurrent calls to Py_InitializeEx() are not - possible: we have the GIL. - - * do the rest of the specific initialization, which may - temporarily release the GIL but not the custom lock. - Only release the custom lock when we are done. - */ - static char called = 0; - - if (_cffi_carefully_make_gil() != 0) - return NULL; - - _cffi_acquire_reentrant_mutex(); - - /* Here the GIL exists, but we don't have it. We're only protected - from concurrency by the reentrant mutex. */ - - /* This file only initializes the embedded module once, the first - time this is called, even if there are subinterpreters. */ - if (!called) { - called = 1; /* invoke _cffi_initialize_python() only once, - but don't set '_cffi_call_python' right now, - otherwise concurrent threads won't call - this function at all (we need them to wait) */ - if (_cffi_initialize_python() == 0) { - /* now initialization is finished. Switch to the fast-path. */ - - /* We would like nobody to see the new value of - '_cffi_call_python' without also seeing the rest of the - data initialized. However, this is not possible. But - the new value of '_cffi_call_python' is the function - 'cffi_call_python()' from _cffi_backend. So: */ - cffi_write_barrier(); - /* ^^^ we put a write barrier here, and a corresponding - read barrier at the start of cffi_call_python(). This - ensures that after that read barrier, we see everything - done here before the write barrier. - */ - - assert(_cffi_call_python_org != NULL); - _cffi_call_python = (_cffi_call_python_fnptr)_cffi_call_python_org; - } - else { - /* initialization failed. 
Reset this to NULL, even if it was - already set to some other value. Future calls to - _cffi_start_python() are still forced to occur, and will - always return NULL from now on. */ - _cffi_call_python_org = NULL; - } - } - - _cffi_release_reentrant_mutex(); - - return (_cffi_call_python_fnptr)_cffi_call_python_org; -} - -static -void _cffi_start_and_call_python(struct _cffi_externpy_s *externpy, char *args) -{ - _cffi_call_python_fnptr fnptr; - int current_err = errno; -#ifdef _MSC_VER - int current_lasterr = GetLastError(); -#endif - fnptr = _cffi_start_python(); - if (fnptr == NULL) { - fprintf(stderr, "function %s() called, but initialization code " - "failed. Returning 0.\n", externpy->name); - memset(args, 0, externpy->size_of_result); - } -#ifdef _MSC_VER - SetLastError(current_lasterr); -#endif - errno = current_err; - - if (fnptr != NULL) - fnptr(externpy, args); -} - - -/* The cffi_start_python() function makes sure Python is initialized - and our cffi module is set up. It can be called manually from the - user C code. The same effect is obtained automatically from any - dll-exported ``extern "Python"`` function. This function returns - -1 if initialization failed, 0 if all is OK. */ -_CFFI_UNUSED_FN -static int cffi_start_python(void) -{ - if (_cffi_call_python == &_cffi_start_and_call_python) { - if (_cffi_start_python() == NULL) - return -1; - } - cffi_read_barrier(); - return 0; -} - -#undef cffi_compare_and_swap -#undef cffi_write_barrier -#undef cffi_read_barrier - -#ifdef __cplusplus -} -#endif diff --git a/pptx-env/lib/python3.12/site-packages/cffi/_imp_emulation.py b/pptx-env/lib/python3.12/site-packages/cffi/_imp_emulation.py deleted file mode 100644 index 136abddd..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/_imp_emulation.py +++ /dev/null @@ -1,83 +0,0 @@ - -try: - # this works on Python < 3.12 - from imp import * - -except ImportError: - # this is a limited emulation for Python >= 3.12. 
- # Note that this is used only for tests or for the old ffi.verify(). - # This is copied from the source code of Python 3.11. - - from _imp import (acquire_lock, release_lock, - is_builtin, is_frozen) - - from importlib._bootstrap import _load - - from importlib import machinery - import os - import sys - import tokenize - - SEARCH_ERROR = 0 - PY_SOURCE = 1 - PY_COMPILED = 2 - C_EXTENSION = 3 - PY_RESOURCE = 4 - PKG_DIRECTORY = 5 - C_BUILTIN = 6 - PY_FROZEN = 7 - PY_CODERESOURCE = 8 - IMP_HOOK = 9 - - def get_suffixes(): - extensions = [(s, 'rb', C_EXTENSION) - for s in machinery.EXTENSION_SUFFIXES] - source = [(s, 'r', PY_SOURCE) for s in machinery.SOURCE_SUFFIXES] - bytecode = [(s, 'rb', PY_COMPILED) for s in machinery.BYTECODE_SUFFIXES] - return extensions + source + bytecode - - def find_module(name, path=None): - if not isinstance(name, str): - raise TypeError("'name' must be a str, not {}".format(type(name))) - elif not isinstance(path, (type(None), list)): - # Backwards-compatibility - raise RuntimeError("'path' must be None or a list, " - "not {}".format(type(path))) - - if path is None: - if is_builtin(name): - return None, None, ('', '', C_BUILTIN) - elif is_frozen(name): - return None, None, ('', '', PY_FROZEN) - else: - path = sys.path - - for entry in path: - package_directory = os.path.join(entry, name) - for suffix in ['.py', machinery.BYTECODE_SUFFIXES[0]]: - package_file_name = '__init__' + suffix - file_path = os.path.join(package_directory, package_file_name) - if os.path.isfile(file_path): - return None, package_directory, ('', '', PKG_DIRECTORY) - for suffix, mode, type_ in get_suffixes(): - file_name = name + suffix - file_path = os.path.join(entry, file_name) - if os.path.isfile(file_path): - break - else: - continue - break # Break out of outer loop when breaking out of inner loop. 
_get_size(cls): - return ctypes.sizeof(cls._ctype) - - def _get_size_of_instance(self): - return ctypes.sizeof(self._ctype) - - @classmethod - def _cast_from(cls, source): - raise TypeError("cannot cast to %r" % (cls._get_c_name(),)) - - def _cast_to_integer(self): - return self._convert_to_address(None) - - @classmethod - def _alignment(cls): - return ctypes.alignment(cls._ctype) - - def __iter__(self): - raise TypeError("cdata %r does not support iteration" % ( - self._get_c_name()),) - - def _make_cmp(name): - cmpfunc = getattr(operator, name) - def cmp(self, other): - v_is_ptr = not isinstance(self, CTypesGenericPrimitive) - w_is_ptr = (isinstance(other, CTypesData) and - not isinstance(other, CTypesGenericPrimitive)) - if v_is_ptr and w_is_ptr: - return cmpfunc(self._convert_to_address(None), - other._convert_to_address(None)) - elif v_is_ptr or w_is_ptr: - return NotImplemented - else: - if isinstance(self, CTypesGenericPrimitive): - self = self._value - if isinstance(other, CTypesGenericPrimitive): - other = other._value - return cmpfunc(self, other) - cmp.func_name = name - return cmp - - __eq__ = _make_cmp('__eq__') - __ne__ = _make_cmp('__ne__') - __lt__ = _make_cmp('__lt__') - __le__ = _make_cmp('__le__') - __gt__ = _make_cmp('__gt__') - __ge__ = _make_cmp('__ge__') - - def __hash__(self): - return hash(self._convert_to_address(None)) - - def _to_string(self, maxlen): - raise TypeError("string(): %r" % (self,)) - - -class CTypesGenericPrimitive(CTypesData): - __slots__ = [] - - def __hash__(self): - return hash(self._value) - - def _get_own_repr(self): - return repr(self._from_ctypes(self._value)) - - -class CTypesGenericArray(CTypesData): - __slots__ = [] - - @classmethod - def _newp(cls, init): - return cls(init) - - def __iter__(self): - for i in xrange(len(self)): - yield self[i] - - def _get_own_repr(self): - return self._addr_repr(ctypes.addressof(self._blob)) - - -class CTypesGenericPtr(CTypesData): - __slots__ = ['_address', '_as_ctype_ptr'] - 
_automatic_casts = False - kind = "pointer" - - @classmethod - def _newp(cls, init): - return cls(init) - - @classmethod - def _cast_from(cls, source): - if source is None: - address = 0 - elif isinstance(source, CTypesData): - address = source._cast_to_integer() - elif isinstance(source, (int, long)): - address = source - else: - raise TypeError("bad type for cast to %r: %r" % - (cls, type(source).__name__)) - return cls._new_pointer_at(address) - - @classmethod - def _new_pointer_at(cls, address): - self = cls.__new__(cls) - self._address = address - self._as_ctype_ptr = ctypes.cast(address, cls._ctype) - return self - - def _get_own_repr(self): - try: - return self._addr_repr(self._address) - except AttributeError: - return '???' - - def _cast_to_integer(self): - return self._address - - def __nonzero__(self): - return bool(self._address) - __bool__ = __nonzero__ - - @classmethod - def _to_ctypes(cls, value): - if not isinstance(value, CTypesData): - raise TypeError("unexpected %s object" % type(value).__name__) - address = value._convert_to_address(cls) - return ctypes.cast(address, cls._ctype) - - @classmethod - def _from_ctypes(cls, ctypes_ptr): - address = ctypes.cast(ctypes_ptr, ctypes.c_void_p).value or 0 - return cls._new_pointer_at(address) - - @classmethod - def _initialize(cls, ctypes_ptr, value): - if value: - ctypes_ptr.contents = cls._to_ctypes(value).contents - - def _convert_to_address(self, BClass): - if (BClass in (self.__class__, None) or BClass._automatic_casts - or self._automatic_casts): - return self._address - else: - return CTypesData._convert_to_address(self, BClass) - - -class CTypesBaseStructOrUnion(CTypesData): - __slots__ = ['_blob'] - - @classmethod - def _create_ctype_obj(cls, init): - # may be overridden - raise TypeError("cannot instantiate opaque type %s" % (cls,)) - - def _get_own_repr(self): - return self._addr_repr(ctypes.addressof(self._blob)) - - @classmethod - def _offsetof(cls, fieldname): - return getattr(cls._ctype, 
fieldname).offset - - def _convert_to_address(self, BClass): - if getattr(BClass, '_BItem', None) is self.__class__: - return ctypes.addressof(self._blob) - else: - return CTypesData._convert_to_address(self, BClass) - - @classmethod - def _from_ctypes(cls, ctypes_struct_or_union): - self = cls.__new__(cls) - self._blob = ctypes_struct_or_union - return self - - @classmethod - def _to_ctypes(cls, value): - return value._blob - - def __repr__(self, c_name=None): - return CTypesData.__repr__(self, c_name or self._get_c_name(' &')) - - -class CTypesBackend(object): - - PRIMITIVE_TYPES = { - 'char': ctypes.c_char, - 'short': ctypes.c_short, - 'int': ctypes.c_int, - 'long': ctypes.c_long, - 'long long': ctypes.c_longlong, - 'signed char': ctypes.c_byte, - 'unsigned char': ctypes.c_ubyte, - 'unsigned short': ctypes.c_ushort, - 'unsigned int': ctypes.c_uint, - 'unsigned long': ctypes.c_ulong, - 'unsigned long long': ctypes.c_ulonglong, - 'float': ctypes.c_float, - 'double': ctypes.c_double, - '_Bool': ctypes.c_bool, - } - - for _name in ['unsigned long long', 'unsigned long', - 'unsigned int', 'unsigned short', 'unsigned char']: - _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) - PRIMITIVE_TYPES['uint%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_void_p): - PRIMITIVE_TYPES['uintptr_t'] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_size_t): - PRIMITIVE_TYPES['size_t'] = PRIMITIVE_TYPES[_name] - - for _name in ['long long', 'long', 'int', 'short', 'signed char']: - _size = ctypes.sizeof(PRIMITIVE_TYPES[_name]) - PRIMITIVE_TYPES['int%d_t' % (8*_size)] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_void_p): - PRIMITIVE_TYPES['intptr_t'] = PRIMITIVE_TYPES[_name] - PRIMITIVE_TYPES['ptrdiff_t'] = PRIMITIVE_TYPES[_name] - if _size == ctypes.sizeof(ctypes.c_size_t): - PRIMITIVE_TYPES['ssize_t'] = PRIMITIVE_TYPES[_name] - - - def __init__(self): - self.RTLD_LAZY = 0 # not supported anyway by ctypes - self.RTLD_NOW = 0 - 
self.RTLD_GLOBAL = ctypes.RTLD_GLOBAL - self.RTLD_LOCAL = ctypes.RTLD_LOCAL - - def set_ffi(self, ffi): - self.ffi = ffi - - def _get_types(self): - return CTypesData, CTypesType - - def load_library(self, path, flags=0): - cdll = ctypes.CDLL(path, flags) - return CTypesLibrary(self, cdll) - - def new_void_type(self): - class CTypesVoid(CTypesData): - __slots__ = [] - _reftypename = 'void &' - @staticmethod - def _from_ctypes(novalue): - return None - @staticmethod - def _to_ctypes(novalue): - if novalue is not None: - raise TypeError("None expected, got %s object" % - (type(novalue).__name__,)) - return None - CTypesVoid._fix_class() - return CTypesVoid - - def new_primitive_type(self, name): - if name == 'wchar_t': - raise NotImplementedError(name) - ctype = self.PRIMITIVE_TYPES[name] - if name == 'char': - kind = 'char' - elif name in ('float', 'double'): - kind = 'float' - else: - if name in ('signed char', 'unsigned char'): - kind = 'byte' - elif name == '_Bool': - kind = 'bool' - else: - kind = 'int' - is_signed = (ctype(-1).value == -1) - # - def _cast_source_to_int(source): - if isinstance(source, (int, long, float)): - source = int(source) - elif isinstance(source, CTypesData): - source = source._cast_to_integer() - elif isinstance(source, bytes): - source = ord(source) - elif source is None: - source = 0 - else: - raise TypeError("bad type for cast to %r: %r" % - (CTypesPrimitive, type(source).__name__)) - return source - # - kind1 = kind - class CTypesPrimitive(CTypesGenericPrimitive): - __slots__ = ['_value'] - _ctype = ctype - _reftypename = '%s &' % name - kind = kind1 - - def __init__(self, value): - self._value = value - - @staticmethod - def _create_ctype_obj(init): - if init is None: - return ctype() - return ctype(CTypesPrimitive._to_ctypes(init)) - - if kind == 'int' or kind == 'byte': - @classmethod - def _cast_from(cls, source): - source = _cast_source_to_int(source) - source = ctype(source).value # cast within range - return cls(source) - def 
__int__(self): - return self._value - - if kind == 'bool': - @classmethod - def _cast_from(cls, source): - if not isinstance(source, (int, long, float)): - source = _cast_source_to_int(source) - return cls(bool(source)) - def __int__(self): - return int(self._value) - - if kind == 'char': - @classmethod - def _cast_from(cls, source): - source = _cast_source_to_int(source) - source = bytechr(source & 0xFF) - return cls(source) - def __int__(self): - return ord(self._value) - - if kind == 'float': - @classmethod - def _cast_from(cls, source): - if isinstance(source, float): - pass - elif isinstance(source, CTypesGenericPrimitive): - if hasattr(source, '__float__'): - source = float(source) - else: - source = int(source) - else: - source = _cast_source_to_int(source) - source = ctype(source).value # fix precision - return cls(source) - def __int__(self): - return int(self._value) - def __float__(self): - return self._value - - _cast_to_integer = __int__ - - if kind == 'int' or kind == 'byte' or kind == 'bool': - @staticmethod - def _to_ctypes(x): - if not isinstance(x, (int, long)): - if isinstance(x, CTypesData): - x = int(x) - else: - raise TypeError("integer expected, got %s" % - type(x).__name__) - if ctype(x).value != x: - if not is_signed and x < 0: - raise OverflowError("%s: negative integer" % name) - else: - raise OverflowError("%s: integer out of bounds" - % name) - return x - - if kind == 'char': - @staticmethod - def _to_ctypes(x): - if isinstance(x, bytes) and len(x) == 1: - return x - if isinstance(x, CTypesPrimitive): # > - return x._value - raise TypeError("character expected, got %s" % - type(x).__name__) - def __nonzero__(self): - return ord(self._value) != 0 - else: - def __nonzero__(self): - return self._value != 0 - __bool__ = __nonzero__ - - if kind == 'float': - @staticmethod - def _to_ctypes(x): - if not isinstance(x, (int, long, float, CTypesData)): - raise TypeError("float expected, got %s" % - type(x).__name__) - return ctype(x).value - - 
@staticmethod - def _from_ctypes(value): - return getattr(value, 'value', value) - - @staticmethod - def _initialize(blob, init): - blob.value = CTypesPrimitive._to_ctypes(init) - - if kind == 'char': - def _to_string(self, maxlen): - return self._value - if kind == 'byte': - def _to_string(self, maxlen): - return chr(self._value & 0xff) - # - CTypesPrimitive._fix_class() - return CTypesPrimitive - - def new_pointer_type(self, BItem): - getbtype = self.ffi._get_cached_btype - if BItem is getbtype(model.PrimitiveType('char')): - kind = 'charp' - elif BItem in (getbtype(model.PrimitiveType('signed char')), - getbtype(model.PrimitiveType('unsigned char'))): - kind = 'bytep' - elif BItem is getbtype(model.void_type): - kind = 'voidp' - else: - kind = 'generic' - # - class CTypesPtr(CTypesGenericPtr): - __slots__ = ['_own'] - if kind == 'charp': - __slots__ += ['__as_strbuf'] - _BItem = BItem - if hasattr(BItem, '_ctype'): - _ctype = ctypes.POINTER(BItem._ctype) - _bitem_size = ctypes.sizeof(BItem._ctype) - else: - _ctype = ctypes.c_void_p - if issubclass(BItem, CTypesGenericArray): - _reftypename = BItem._get_c_name('(* &)') - else: - _reftypename = BItem._get_c_name(' * &') - - def __init__(self, init): - ctypeobj = BItem._create_ctype_obj(init) - if kind == 'charp': - self.__as_strbuf = ctypes.create_string_buffer( - ctypeobj.value + b'\x00') - self._as_ctype_ptr = ctypes.cast( - self.__as_strbuf, self._ctype) - else: - self._as_ctype_ptr = ctypes.pointer(ctypeobj) - self._address = ctypes.cast(self._as_ctype_ptr, - ctypes.c_void_p).value - self._own = True - - def __add__(self, other): - if isinstance(other, (int, long)): - return self._new_pointer_at(self._address + - other * self._bitem_size) - else: - return NotImplemented - - def __sub__(self, other): - if isinstance(other, (int, long)): - return self._new_pointer_at(self._address - - other * self._bitem_size) - elif type(self) is type(other): - return (self._address - other._address) // self._bitem_size - else: 
- return NotImplemented - - def __getitem__(self, index): - if getattr(self, '_own', False) and index != 0: - raise IndexError - return BItem._from_ctypes(self._as_ctype_ptr[index]) - - def __setitem__(self, index, value): - self._as_ctype_ptr[index] = BItem._to_ctypes(value) - - if kind == 'charp' or kind == 'voidp': - @classmethod - def _arg_to_ctypes(cls, *value): - if value and isinstance(value[0], bytes): - return ctypes.c_char_p(value[0]) - else: - return super(CTypesPtr, cls)._arg_to_ctypes(*value) - - if kind == 'charp' or kind == 'bytep': - def _to_string(self, maxlen): - if maxlen < 0: - maxlen = sys.maxsize - p = ctypes.cast(self._as_ctype_ptr, - ctypes.POINTER(ctypes.c_char)) - n = 0 - while n < maxlen and p[n] != b'\x00': - n += 1 - return b''.join([p[i] for i in range(n)]) - - def _get_own_repr(self): - if getattr(self, '_own', False): - return 'owning %d bytes' % ( - ctypes.sizeof(self._as_ctype_ptr.contents),) - return super(CTypesPtr, self)._get_own_repr() - # - if (BItem is self.ffi._get_cached_btype(model.void_type) or - BItem is self.ffi._get_cached_btype(model.PrimitiveType('char'))): - CTypesPtr._automatic_casts = True - # - CTypesPtr._fix_class() - return CTypesPtr - - def new_array_type(self, CTypesPtr, length): - if length is None: - brackets = ' &[]' - else: - brackets = ' &[%d]' % length - BItem = CTypesPtr._BItem - getbtype = self.ffi._get_cached_btype - if BItem is getbtype(model.PrimitiveType('char')): - kind = 'char' - elif BItem in (getbtype(model.PrimitiveType('signed char')), - getbtype(model.PrimitiveType('unsigned char'))): - kind = 'byte' - else: - kind = 'generic' - # - class CTypesArray(CTypesGenericArray): - __slots__ = ['_blob', '_own'] - if length is not None: - _ctype = BItem._ctype * length - else: - __slots__.append('_ctype') - _reftypename = BItem._get_c_name(brackets) - _declared_length = length - _CTPtr = CTypesPtr - - def __init__(self, init): - if length is None: - if isinstance(init, (int, long)): - len1 = init - 
init = None - elif kind == 'char' and isinstance(init, bytes): - len1 = len(init) + 1 # extra null - else: - init = tuple(init) - len1 = len(init) - self._ctype = BItem._ctype * len1 - self._blob = self._ctype() - self._own = True - if init is not None: - self._initialize(self._blob, init) - - @staticmethod - def _initialize(blob, init): - if isinstance(init, bytes): - init = [init[i:i+1] for i in range(len(init))] - else: - if isinstance(init, CTypesGenericArray): - if (len(init) != len(blob) or - not isinstance(init, CTypesArray)): - raise TypeError("length/type mismatch: %s" % (init,)) - init = tuple(init) - if len(init) > len(blob): - raise IndexError("too many initializers") - addr = ctypes.cast(blob, ctypes.c_void_p).value - PTR = ctypes.POINTER(BItem._ctype) - itemsize = ctypes.sizeof(BItem._ctype) - for i, value in enumerate(init): - p = ctypes.cast(addr + i * itemsize, PTR) - BItem._initialize(p.contents, value) - - def __len__(self): - return len(self._blob) - - def __getitem__(self, index): - if not (0 <= index < len(self._blob)): - raise IndexError - return BItem._from_ctypes(self._blob[index]) - - def __setitem__(self, index, value): - if not (0 <= index < len(self._blob)): - raise IndexError - self._blob[index] = BItem._to_ctypes(value) - - if kind == 'char' or kind == 'byte': - def _to_string(self, maxlen): - if maxlen < 0: - maxlen = len(self._blob) - p = ctypes.cast(self._blob, - ctypes.POINTER(ctypes.c_char)) - n = 0 - while n < maxlen and p[n] != b'\x00': - n += 1 - return b''.join([p[i] for i in range(n)]) - - def _get_own_repr(self): - if getattr(self, '_own', False): - return 'owning %d bytes' % (ctypes.sizeof(self._blob),) - return super(CTypesArray, self)._get_own_repr() - - def _convert_to_address(self, BClass): - if BClass in (CTypesPtr, None) or BClass._automatic_casts: - return ctypes.addressof(self._blob) - else: - return CTypesData._convert_to_address(self, BClass) - - @staticmethod - def _from_ctypes(ctypes_array): - self = 
CTypesArray.__new__(CTypesArray) - self._blob = ctypes_array - return self - - @staticmethod - def _arg_to_ctypes(value): - return CTypesPtr._arg_to_ctypes(value) - - def __add__(self, other): - if isinstance(other, (int, long)): - return CTypesPtr._new_pointer_at( - ctypes.addressof(self._blob) + - other * ctypes.sizeof(BItem._ctype)) - else: - return NotImplemented - - @classmethod - def _cast_from(cls, source): - raise NotImplementedError("casting to %r" % ( - cls._get_c_name(),)) - # - CTypesArray._fix_class() - return CTypesArray - - def _new_struct_or_union(self, kind, name, base_ctypes_class): - # - class struct_or_union(base_ctypes_class): - pass - struct_or_union.__name__ = '%s_%s' % (kind, name) - kind1 = kind - # - class CTypesStructOrUnion(CTypesBaseStructOrUnion): - __slots__ = ['_blob'] - _ctype = struct_or_union - _reftypename = '%s &' % (name,) - _kind = kind = kind1 - # - CTypesStructOrUnion._fix_class() - return CTypesStructOrUnion - - def new_struct_type(self, name): - return self._new_struct_or_union('struct', name, ctypes.Structure) - - def new_union_type(self, name): - return self._new_struct_or_union('union', name, ctypes.Union) - - def complete_struct_or_union(self, CTypesStructOrUnion, fields, tp, - totalsize=-1, totalalignment=-1, sflags=0, - pack=0): - if totalsize >= 0 or totalalignment >= 0: - raise NotImplementedError("the ctypes backend of CFFI does not support " - "structures completed by verify(); please " - "compile and install the _cffi_backend module.") - struct_or_union = CTypesStructOrUnion._ctype - fnames = [fname for (fname, BField, bitsize) in fields] - btypes = [BField for (fname, BField, bitsize) in fields] - bitfields = [bitsize for (fname, BField, bitsize) in fields] - # - bfield_types = {} - cfields = [] - for (fname, BField, bitsize) in fields: - if bitsize < 0: - cfields.append((fname, BField._ctype)) - bfield_types[fname] = BField - else: - cfields.append((fname, BField._ctype, bitsize)) - bfield_types[fname] = 
Ellipsis - if sflags & 8: - struct_or_union._pack_ = 1 - elif pack: - struct_or_union._pack_ = pack - struct_or_union._fields_ = cfields - CTypesStructOrUnion._bfield_types = bfield_types - # - @staticmethod - def _create_ctype_obj(init): - result = struct_or_union() - if init is not None: - initialize(result, init) - return result - CTypesStructOrUnion._create_ctype_obj = _create_ctype_obj - # - def initialize(blob, init): - if is_union: - if len(init) > 1: - raise ValueError("union initializer: %d items given, but " - "only one supported (use a dict if needed)" - % (len(init),)) - if not isinstance(init, dict): - if isinstance(init, (bytes, unicode)): - raise TypeError("union initializer: got a str") - init = tuple(init) - if len(init) > len(fnames): - raise ValueError("too many values for %s initializer" % - CTypesStructOrUnion._get_c_name()) - init = dict(zip(fnames, init)) - addr = ctypes.addressof(blob) - for fname, value in init.items(): - BField, bitsize = name2fieldtype[fname] - assert bitsize < 0, \ - "not implemented: initializer with bit fields" - offset = CTypesStructOrUnion._offsetof(fname) - PTR = ctypes.POINTER(BField._ctype) - p = ctypes.cast(addr + offset, PTR) - BField._initialize(p.contents, value) - is_union = CTypesStructOrUnion._kind == 'union' - name2fieldtype = dict(zip(fnames, zip(btypes, bitfields))) - # - for fname, BField, bitsize in fields: - if fname == '': - raise NotImplementedError("nested anonymous structs/unions") - if hasattr(CTypesStructOrUnion, fname): - raise ValueError("the field name %r conflicts in " - "the ctypes backend" % fname) - if bitsize < 0: - def getter(self, fname=fname, BField=BField, - offset=CTypesStructOrUnion._offsetof(fname), - PTR=ctypes.POINTER(BField._ctype)): - addr = ctypes.addressof(self._blob) - p = ctypes.cast(addr + offset, PTR) - return BField._from_ctypes(p.contents) - def setter(self, value, fname=fname, BField=BField): - setattr(self._blob, fname, BField._to_ctypes(value)) - # - if 
issubclass(BField, CTypesGenericArray): - setter = None - if BField._declared_length == 0: - def getter(self, fname=fname, BFieldPtr=BField._CTPtr, - offset=CTypesStructOrUnion._offsetof(fname), - PTR=ctypes.POINTER(BField._ctype)): - addr = ctypes.addressof(self._blob) - p = ctypes.cast(addr + offset, PTR) - return BFieldPtr._from_ctypes(p) - # - else: - def getter(self, fname=fname, BField=BField): - return BField._from_ctypes(getattr(self._blob, fname)) - def setter(self, value, fname=fname, BField=BField): - # xxx obscure workaround - value = BField._to_ctypes(value) - oldvalue = getattr(self._blob, fname) - setattr(self._blob, fname, value) - if value != getattr(self._blob, fname): - setattr(self._blob, fname, oldvalue) - raise OverflowError("value too large for bitfield") - setattr(CTypesStructOrUnion, fname, property(getter, setter)) - # - CTypesPtr = self.ffi._get_cached_btype(model.PointerType(tp)) - for fname in fnames: - if hasattr(CTypesPtr, fname): - raise ValueError("the field name %r conflicts in " - "the ctypes backend" % fname) - def getter(self, fname=fname): - return getattr(self[0], fname) - def setter(self, value, fname=fname): - setattr(self[0], fname, value) - setattr(CTypesPtr, fname, property(getter, setter)) - - def new_function_type(self, BArgs, BResult, has_varargs): - nameargs = [BArg._get_c_name() for BArg in BArgs] - if has_varargs: - nameargs.append('...') - nameargs = ', '.join(nameargs) - # - class CTypesFunctionPtr(CTypesGenericPtr): - __slots__ = ['_own_callback', '_name'] - _ctype = ctypes.CFUNCTYPE(getattr(BResult, '_ctype', None), - *[BArg._ctype for BArg in BArgs], - use_errno=True) - _reftypename = BResult._get_c_name('(* &)(%s)' % (nameargs,)) - - def __init__(self, init, error=None): - # create a callback to the Python callable init() - import traceback - assert not has_varargs, "varargs not supported for callbacks" - if getattr(BResult, '_ctype', None) is not None: - error = BResult._from_ctypes( - 
BResult._create_ctype_obj(error)) - else: - error = None - def callback(*args): - args2 = [] - for arg, BArg in zip(args, BArgs): - args2.append(BArg._from_ctypes(arg)) - try: - res2 = init(*args2) - res2 = BResult._to_ctypes(res2) - except: - traceback.print_exc() - res2 = error - if issubclass(BResult, CTypesGenericPtr): - if res2: - res2 = ctypes.cast(res2, ctypes.c_void_p).value - # .value: http://bugs.python.org/issue1574593 - else: - res2 = None - #print repr(res2) - return res2 - if issubclass(BResult, CTypesGenericPtr): - # The only pointers callbacks can return are void*s: - # http://bugs.python.org/issue5710 - callback_ctype = ctypes.CFUNCTYPE( - ctypes.c_void_p, - *[BArg._ctype for BArg in BArgs], - use_errno=True) - else: - callback_ctype = CTypesFunctionPtr._ctype - self._as_ctype_ptr = callback_ctype(callback) - self._address = ctypes.cast(self._as_ctype_ptr, - ctypes.c_void_p).value - self._own_callback = init - - @staticmethod - def _initialize(ctypes_ptr, value): - if value: - raise NotImplementedError("ctypes backend: not supported: " - "initializers for function pointers") - - def __repr__(self): - c_name = getattr(self, '_name', None) - if c_name: - i = self._reftypename.index('(* &)') - if self._reftypename[i-1] not in ' )*': - c_name = ' ' + c_name - c_name = self._reftypename.replace('(* &)', c_name) - return CTypesData.__repr__(self, c_name) - - def _get_own_repr(self): - if getattr(self, '_own_callback', None) is not None: - return 'calling %r' % (self._own_callback,) - return super(CTypesFunctionPtr, self)._get_own_repr() - - def __call__(self, *args): - if has_varargs: - assert len(args) >= len(BArgs) - extraargs = args[len(BArgs):] - args = args[:len(BArgs)] - else: - assert len(args) == len(BArgs) - ctypes_args = [] - for arg, BArg in zip(args, BArgs): - ctypes_args.append(BArg._arg_to_ctypes(arg)) - if has_varargs: - for i, arg in enumerate(extraargs): - if arg is None: - ctypes_args.append(ctypes.c_void_p(0)) # NULL - continue - if 
not isinstance(arg, CTypesData): - raise TypeError( - "argument %d passed in the variadic part " - "needs to be a cdata object (got %s)" % - (1 + len(BArgs) + i, type(arg).__name__)) - ctypes_args.append(arg._arg_to_ctypes(arg)) - result = self._as_ctype_ptr(*ctypes_args) - return BResult._from_ctypes(result) - # - CTypesFunctionPtr._fix_class() - return CTypesFunctionPtr - - def new_enum_type(self, name, enumerators, enumvalues, CTypesInt): - assert isinstance(name, str) - reverse_mapping = dict(zip(reversed(enumvalues), - reversed(enumerators))) - # - class CTypesEnum(CTypesInt): - __slots__ = [] - _reftypename = '%s &' % name - - def _get_own_repr(self): - value = self._value - try: - return '%d: %s' % (value, reverse_mapping[value]) - except KeyError: - return str(value) - - def _to_string(self, maxlen): - value = self._value - try: - return reverse_mapping[value] - except KeyError: - return str(value) - # - CTypesEnum._fix_class() - return CTypesEnum - - def get_errno(self): - return ctypes.get_errno() - - def set_errno(self, value): - ctypes.set_errno(value) - - def string(self, b, maxlen=-1): - return b._to_string(maxlen) - - def buffer(self, bptr, size=-1): - raise NotImplementedError("buffer() with ctypes backend") - - def sizeof(self, cdata_or_BType): - if isinstance(cdata_or_BType, CTypesData): - return cdata_or_BType._get_size_of_instance() - else: - assert issubclass(cdata_or_BType, CTypesData) - return cdata_or_BType._get_size() - - def alignof(self, BType): - assert issubclass(BType, CTypesData) - return BType._alignment() - - def newp(self, BType, source): - if not issubclass(BType, CTypesData): - raise TypeError - return BType._newp(source) - - def cast(self, BType, source): - return BType._cast_from(source) - - def callback(self, BType, source, error, onerror): - assert onerror is None # XXX not implemented - return BType(source, error) - - _weakref_cache_ref = None - - def gcp(self, cdata, destructor, size=0): - if self._weakref_cache_ref is 
None: - import weakref - class MyRef(weakref.ref): - def __eq__(self, other): - myref = self() - return self is other or ( - myref is not None and myref is other()) - def __ne__(self, other): - return not (self == other) - def __hash__(self): - try: - return self._hash - except AttributeError: - self._hash = hash(self()) - return self._hash - self._weakref_cache_ref = {}, MyRef - weak_cache, MyRef = self._weakref_cache_ref - - if destructor is None: - try: - del weak_cache[MyRef(cdata)] - except KeyError: - raise TypeError("Can remove destructor only on a object " - "previously returned by ffi.gc()") - return None - - def remove(k): - cdata, destructor = weak_cache.pop(k, (None, None)) - if destructor is not None: - destructor(cdata) - - new_cdata = self.cast(self.typeof(cdata), cdata) - assert new_cdata is not cdata - weak_cache[MyRef(new_cdata, remove)] = (cdata, destructor) - return new_cdata - - typeof = type - - def getcname(self, BType, replace_with): - return BType._get_c_name(replace_with) - - def typeoffsetof(self, BType, fieldname, num=0): - if isinstance(fieldname, str): - if num == 0 and issubclass(BType, CTypesGenericPtr): - BType = BType._BItem - if not issubclass(BType, CTypesBaseStructOrUnion): - raise TypeError("expected a struct or union ctype") - BField = BType._bfield_types[fieldname] - if BField is Ellipsis: - raise TypeError("not supported for bitfields") - return (BField, BType._offsetof(fieldname)) - elif isinstance(fieldname, (int, long)): - if issubclass(BType, CTypesGenericArray): - BType = BType._CTPtr - if not issubclass(BType, CTypesGenericPtr): - raise TypeError("expected an array or ptr ctype") - BItem = BType._BItem - offset = BItem._get_size() * fieldname - if offset > sys.maxsize: - raise OverflowError - return (BItem, offset) - else: - raise TypeError(type(fieldname)) - - def rawaddressof(self, BTypePtr, cdata, offset=None): - if isinstance(cdata, CTypesBaseStructOrUnion): - ptr = ctypes.pointer(type(cdata)._to_ctypes(cdata)) - 
elif isinstance(cdata, CTypesGenericPtr): - if offset is None or not issubclass(type(cdata)._BItem, - CTypesBaseStructOrUnion): - raise TypeError("unexpected cdata type") - ptr = type(cdata)._to_ctypes(cdata) - elif isinstance(cdata, CTypesGenericArray): - ptr = type(cdata)._to_ctypes(cdata) - else: - raise TypeError("expected a ") - if offset: - ptr = ctypes.cast( - ctypes.c_void_p( - ctypes.cast(ptr, ctypes.c_void_p).value + offset), - type(ptr)) - return BTypePtr._from_ctypes(ptr) - - -class CTypesLibrary(object): - - def __init__(self, backend, cdll): - self.backend = backend - self.cdll = cdll - - def load_function(self, BType, name): - c_func = getattr(self.cdll, name) - funcobj = BType._from_ctypes(c_func) - funcobj._name = name - return funcobj - - def read_variable(self, BType, name): - try: - ctypes_obj = BType._ctype.in_dll(self.cdll, name) - except AttributeError as e: - raise NotImplementedError(e) - return BType._from_ctypes(ctypes_obj) - - def write_variable(self, BType, name, value): - new_ctypes_obj = BType._to_ctypes(value) - ctypes_obj = BType._ctype.in_dll(self.cdll, name) - ctypes.memmove(ctypes.addressof(ctypes_obj), - ctypes.addressof(new_ctypes_obj), - ctypes.sizeof(BType._ctype)) diff --git a/pptx-env/lib/python3.12/site-packages/cffi/cffi_opcode.py b/pptx-env/lib/python3.12/site-packages/cffi/cffi_opcode.py deleted file mode 100644 index 6421df62..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/cffi_opcode.py +++ /dev/null @@ -1,187 +0,0 @@ -from .error import VerificationError - -class CffiOp(object): - def __init__(self, op, arg): - self.op = op - self.arg = arg - - def as_c_expr(self): - if self.op is None: - assert isinstance(self.arg, str) - return '(_cffi_opcode_t)(%s)' % (self.arg,) - classname = CLASS_NAME[self.op] - return '_CFFI_OP(_CFFI_OP_%s, %s)' % (classname, self.arg) - - def as_python_bytes(self): - if self.op is None and self.arg.isdigit(): - value = int(self.arg) # non-negative: '-' not in self.arg - if value >= 
2**31: - raise OverflowError("cannot emit %r: limited to 2**31-1" - % (self.arg,)) - return format_four_bytes(value) - if isinstance(self.arg, str): - raise VerificationError("cannot emit to Python: %r" % (self.arg,)) - return format_four_bytes((self.arg << 8) | self.op) - - def __str__(self): - classname = CLASS_NAME.get(self.op, self.op) - return '(%s %s)' % (classname, self.arg) - -def format_four_bytes(num): - return '\\x%02X\\x%02X\\x%02X\\x%02X' % ( - (num >> 24) & 0xFF, - (num >> 16) & 0xFF, - (num >> 8) & 0xFF, - (num ) & 0xFF) - -OP_PRIMITIVE = 1 -OP_POINTER = 3 -OP_ARRAY = 5 -OP_OPEN_ARRAY = 7 -OP_STRUCT_UNION = 9 -OP_ENUM = 11 -OP_FUNCTION = 13 -OP_FUNCTION_END = 15 -OP_NOOP = 17 -OP_BITFIELD = 19 -OP_TYPENAME = 21 -OP_CPYTHON_BLTN_V = 23 # varargs -OP_CPYTHON_BLTN_N = 25 # noargs -OP_CPYTHON_BLTN_O = 27 # O (i.e. a single arg) -OP_CONSTANT = 29 -OP_CONSTANT_INT = 31 -OP_GLOBAL_VAR = 33 -OP_DLOPEN_FUNC = 35 -OP_DLOPEN_CONST = 37 -OP_GLOBAL_VAR_F = 39 -OP_EXTERN_PYTHON = 41 - -PRIM_VOID = 0 -PRIM_BOOL = 1 -PRIM_CHAR = 2 -PRIM_SCHAR = 3 -PRIM_UCHAR = 4 -PRIM_SHORT = 5 -PRIM_USHORT = 6 -PRIM_INT = 7 -PRIM_UINT = 8 -PRIM_LONG = 9 -PRIM_ULONG = 10 -PRIM_LONGLONG = 11 -PRIM_ULONGLONG = 12 -PRIM_FLOAT = 13 -PRIM_DOUBLE = 14 -PRIM_LONGDOUBLE = 15 - -PRIM_WCHAR = 16 -PRIM_INT8 = 17 -PRIM_UINT8 = 18 -PRIM_INT16 = 19 -PRIM_UINT16 = 20 -PRIM_INT32 = 21 -PRIM_UINT32 = 22 -PRIM_INT64 = 23 -PRIM_UINT64 = 24 -PRIM_INTPTR = 25 -PRIM_UINTPTR = 26 -PRIM_PTRDIFF = 27 -PRIM_SIZE = 28 -PRIM_SSIZE = 29 -PRIM_INT_LEAST8 = 30 -PRIM_UINT_LEAST8 = 31 -PRIM_INT_LEAST16 = 32 -PRIM_UINT_LEAST16 = 33 -PRIM_INT_LEAST32 = 34 -PRIM_UINT_LEAST32 = 35 -PRIM_INT_LEAST64 = 36 -PRIM_UINT_LEAST64 = 37 -PRIM_INT_FAST8 = 38 -PRIM_UINT_FAST8 = 39 -PRIM_INT_FAST16 = 40 -PRIM_UINT_FAST16 = 41 -PRIM_INT_FAST32 = 42 -PRIM_UINT_FAST32 = 43 -PRIM_INT_FAST64 = 44 -PRIM_UINT_FAST64 = 45 -PRIM_INTMAX = 46 -PRIM_UINTMAX = 47 -PRIM_FLOATCOMPLEX = 48 -PRIM_DOUBLECOMPLEX = 49 -PRIM_CHAR16 = 50 -PRIM_CHAR32 = 
51 - -_NUM_PRIM = 52 -_UNKNOWN_PRIM = -1 -_UNKNOWN_FLOAT_PRIM = -2 -_UNKNOWN_LONG_DOUBLE = -3 - -_IO_FILE_STRUCT = -1 - -PRIMITIVE_TO_INDEX = { - 'char': PRIM_CHAR, - 'short': PRIM_SHORT, - 'int': PRIM_INT, - 'long': PRIM_LONG, - 'long long': PRIM_LONGLONG, - 'signed char': PRIM_SCHAR, - 'unsigned char': PRIM_UCHAR, - 'unsigned short': PRIM_USHORT, - 'unsigned int': PRIM_UINT, - 'unsigned long': PRIM_ULONG, - 'unsigned long long': PRIM_ULONGLONG, - 'float': PRIM_FLOAT, - 'double': PRIM_DOUBLE, - 'long double': PRIM_LONGDOUBLE, - '_cffi_float_complex_t': PRIM_FLOATCOMPLEX, - '_cffi_double_complex_t': PRIM_DOUBLECOMPLEX, - '_Bool': PRIM_BOOL, - 'wchar_t': PRIM_WCHAR, - 'char16_t': PRIM_CHAR16, - 'char32_t': PRIM_CHAR32, - 'int8_t': PRIM_INT8, - 'uint8_t': PRIM_UINT8, - 'int16_t': PRIM_INT16, - 'uint16_t': PRIM_UINT16, - 'int32_t': PRIM_INT32, - 'uint32_t': PRIM_UINT32, - 'int64_t': PRIM_INT64, - 'uint64_t': PRIM_UINT64, - 'intptr_t': PRIM_INTPTR, - 'uintptr_t': PRIM_UINTPTR, - 'ptrdiff_t': PRIM_PTRDIFF, - 'size_t': PRIM_SIZE, - 'ssize_t': PRIM_SSIZE, - 'int_least8_t': PRIM_INT_LEAST8, - 'uint_least8_t': PRIM_UINT_LEAST8, - 'int_least16_t': PRIM_INT_LEAST16, - 'uint_least16_t': PRIM_UINT_LEAST16, - 'int_least32_t': PRIM_INT_LEAST32, - 'uint_least32_t': PRIM_UINT_LEAST32, - 'int_least64_t': PRIM_INT_LEAST64, - 'uint_least64_t': PRIM_UINT_LEAST64, - 'int_fast8_t': PRIM_INT_FAST8, - 'uint_fast8_t': PRIM_UINT_FAST8, - 'int_fast16_t': PRIM_INT_FAST16, - 'uint_fast16_t': PRIM_UINT_FAST16, - 'int_fast32_t': PRIM_INT_FAST32, - 'uint_fast32_t': PRIM_UINT_FAST32, - 'int_fast64_t': PRIM_INT_FAST64, - 'uint_fast64_t': PRIM_UINT_FAST64, - 'intmax_t': PRIM_INTMAX, - 'uintmax_t': PRIM_UINTMAX, - } - -F_UNION = 0x01 -F_CHECK_FIELDS = 0x02 -F_PACKED = 0x04 -F_EXTERNAL = 0x08 -F_OPAQUE = 0x10 - -G_FLAGS = dict([('_CFFI_' + _key, globals()[_key]) - for _key in ['F_UNION', 'F_CHECK_FIELDS', 'F_PACKED', - 'F_EXTERNAL', 'F_OPAQUE']]) - -CLASS_NAME = {} -for _name, _value in 
list(globals().items()): - if _name.startswith('OP_') and isinstance(_value, int): - CLASS_NAME[_value] = _name[3:] diff --git a/pptx-env/lib/python3.12/site-packages/cffi/commontypes.py b/pptx-env/lib/python3.12/site-packages/cffi/commontypes.py deleted file mode 100644 index d4dae351..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/commontypes.py +++ /dev/null @@ -1,82 +0,0 @@ -import sys -from . import model -from .error import FFIError - - -COMMON_TYPES = {} - -try: - # fetch "bool" and all simple Windows types - from _cffi_backend import _get_common_types - _get_common_types(COMMON_TYPES) -except ImportError: - pass - -COMMON_TYPES['FILE'] = model.unknown_type('FILE', '_IO_FILE') -COMMON_TYPES['bool'] = '_Bool' # in case we got ImportError above -COMMON_TYPES['float _Complex'] = '_cffi_float_complex_t' -COMMON_TYPES['double _Complex'] = '_cffi_double_complex_t' - -for _type in model.PrimitiveType.ALL_PRIMITIVE_TYPES: - if _type.endswith('_t'): - COMMON_TYPES[_type] = _type -del _type - -_CACHE = {} - -def resolve_common_type(parser, commontype): - try: - return _CACHE[commontype] - except KeyError: - cdecl = COMMON_TYPES.get(commontype, commontype) - if not isinstance(cdecl, str): - result, quals = cdecl, 0 # cdecl is already a BaseType - elif cdecl in model.PrimitiveType.ALL_PRIMITIVE_TYPES: - result, quals = model.PrimitiveType(cdecl), 0 - elif cdecl == 'set-unicode-needed': - raise FFIError("The Windows type %r is only available after " - "you call ffi.set_unicode()" % (commontype,)) - else: - if commontype == cdecl: - raise FFIError( - "Unsupported type: %r. Please look at " - "http://cffi.readthedocs.io/en/latest/cdef.html#ffi-cdef-limitations " - "and file an issue if you think this type should really " - "be supported." 
% (commontype,)) - result, quals = parser.parse_type_and_quals(cdecl) # recursive - - assert isinstance(result, model.BaseTypeByIdentity) - _CACHE[commontype] = result, quals - return result, quals - - -# ____________________________________________________________ -# extra types for Windows (most of them are in commontypes.c) - - -def win_common_types(): - return { - "UNICODE_STRING": model.StructType( - "_UNICODE_STRING", - ["Length", - "MaximumLength", - "Buffer"], - [model.PrimitiveType("unsigned short"), - model.PrimitiveType("unsigned short"), - model.PointerType(model.PrimitiveType("wchar_t"))], - [-1, -1, -1]), - "PUNICODE_STRING": "UNICODE_STRING *", - "PCUNICODE_STRING": "const UNICODE_STRING *", - - "TBYTE": "set-unicode-needed", - "TCHAR": "set-unicode-needed", - "LPCTSTR": "set-unicode-needed", - "PCTSTR": "set-unicode-needed", - "LPTSTR": "set-unicode-needed", - "PTSTR": "set-unicode-needed", - "PTBYTE": "set-unicode-needed", - "PTCHAR": "set-unicode-needed", - } - -if sys.platform == 'win32': - COMMON_TYPES.update(win_common_types()) diff --git a/pptx-env/lib/python3.12/site-packages/cffi/cparser.py b/pptx-env/lib/python3.12/site-packages/cffi/cparser.py deleted file mode 100644 index dd590d87..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/cparser.py +++ /dev/null @@ -1,1015 +0,0 @@ -from . import model -from .commontypes import COMMON_TYPES, resolve_common_type -from .error import FFIError, CDefError -try: - from . import _pycparser as pycparser -except ImportError: - import pycparser -import weakref, re, sys - -try: - if sys.version_info < (3,): - import thread as _thread - else: - import _thread - lock = _thread.allocate_lock() -except ImportError: - lock = None - -def _workaround_for_static_import_finders(): - # Issue #392: packaging tools like cx_Freeze can not find these - # because pycparser uses exec dynamic import. This is an obscure - # workaround. This function is never called. 
- import pycparser.yacctab - import pycparser.lextab - -CDEF_SOURCE_STRING = "" -_r_comment = re.compile(r"/\*.*?\*/|//([^\n\\]|\\.)*?$", - re.DOTALL | re.MULTILINE) -_r_define = re.compile(r"^\s*#\s*define\s+([A-Za-z_][A-Za-z_0-9]*)" - r"\b((?:[^\n\\]|\\.)*?)$", - re.DOTALL | re.MULTILINE) -_r_line_directive = re.compile(r"^[ \t]*#[ \t]*(?:line|\d+)\b.*$", re.MULTILINE) -_r_partial_enum = re.compile(r"=\s*\.\.\.\s*[,}]|\.\.\.\s*\}") -_r_enum_dotdotdot = re.compile(r"__dotdotdot\d+__$") -_r_partial_array = re.compile(r"\[\s*\.\.\.\s*\]") -_r_words = re.compile(r"\w+|\S") -_parser_cache = None -_r_int_literal = re.compile(r"-?0?x?[0-9a-f]+[lu]*$", re.IGNORECASE) -_r_stdcall1 = re.compile(r"\b(__stdcall|WINAPI)\b") -_r_stdcall2 = re.compile(r"[(]\s*(__stdcall|WINAPI)\b") -_r_cdecl = re.compile(r"\b__cdecl\b") -_r_extern_python = re.compile(r'\bextern\s*"' - r'(Python|Python\s*\+\s*C|C\s*\+\s*Python)"\s*.') -_r_star_const_space = re.compile( # matches "* const " - r"[*]\s*((const|volatile|restrict)\b\s*)+") -_r_int_dotdotdot = re.compile(r"(\b(int|long|short|signed|unsigned|char)\s*)+" - r"\.\.\.") -_r_float_dotdotdot = re.compile(r"\b(double|float)\s*\.\.\.") - -def _get_parser(): - global _parser_cache - if _parser_cache is None: - _parser_cache = pycparser.CParser() - return _parser_cache - -def _workaround_for_old_pycparser(csource): - # Workaround for a pycparser issue (fixed between pycparser 2.10 and - # 2.14): "char*const***" gives us a wrong syntax tree, the same as - # for "char***(*const)". This means we can't tell the difference - # afterwards. But "char(*const(***))" gives us the right syntax - # tree. The issue only occurs if there are several stars in - # sequence with no parenthesis in between, just possibly qualifiers. - # Attempt to fix it by adding some parentheses in the source: each - # time we see "* const" or "* const *", we add an opening - # parenthesis before each star---the hard part is figuring out where - # to close them. 
- parts = [] - while True: - match = _r_star_const_space.search(csource) - if not match: - break - #print repr(''.join(parts)+csource), '=>', - parts.append(csource[:match.start()]) - parts.append('('); closing = ')' - parts.append(match.group()) # e.g. "* const " - endpos = match.end() - if csource.startswith('*', endpos): - parts.append('('); closing += ')' - level = 0 - i = endpos - while i < len(csource): - c = csource[i] - if c == '(': - level += 1 - elif c == ')': - if level == 0: - break - level -= 1 - elif c in ',;=': - if level == 0: - break - i += 1 - csource = csource[endpos:i] + closing + csource[i:] - #print repr(''.join(parts)+csource) - parts.append(csource) - return ''.join(parts) - -def _preprocess_extern_python(csource): - # input: `extern "Python" int foo(int);` or - # `extern "Python" { int foo(int); }` - # output: - # void __cffi_extern_python_start; - # int foo(int); - # void __cffi_extern_python_stop; - # - # input: `extern "Python+C" int foo(int);` - # output: - # void __cffi_extern_python_plus_c_start; - # int foo(int); - # void __cffi_extern_python_stop; - parts = [] - while True: - match = _r_extern_python.search(csource) - if not match: - break - endpos = match.end() - 1 - #print - #print ''.join(parts)+csource - #print '=>' - parts.append(csource[:match.start()]) - if 'C' in match.group(1): - parts.append('void __cffi_extern_python_plus_c_start; ') - else: - parts.append('void __cffi_extern_python_start; ') - if csource[endpos] == '{': - # grouping variant - closing = csource.find('}', endpos) - if closing < 0: - raise CDefError("'extern \"Python\" {': no '}' found") - if csource.find('{', endpos + 1, closing) >= 0: - raise NotImplementedError("cannot use { } inside a block " - "'extern \"Python\" { ... 
}'") - parts.append(csource[endpos+1:closing]) - csource = csource[closing+1:] - else: - # non-grouping variant - semicolon = csource.find(';', endpos) - if semicolon < 0: - raise CDefError("'extern \"Python\": no ';' found") - parts.append(csource[endpos:semicolon+1]) - csource = csource[semicolon+1:] - parts.append(' void __cffi_extern_python_stop;') - #print ''.join(parts)+csource - #print - parts.append(csource) - return ''.join(parts) - -def _warn_for_string_literal(csource): - if '"' not in csource: - return - for line in csource.splitlines(): - if '"' in line and not line.lstrip().startswith('#'): - import warnings - warnings.warn("String literal found in cdef() or type source. " - "String literals are ignored here, but you should " - "remove them anyway because some character sequences " - "confuse pre-parsing.") - break - -def _warn_for_non_extern_non_static_global_variable(decl): - if not decl.storage: - import warnings - warnings.warn("Global variable '%s' in cdef(): for consistency " - "with C it should have a storage class specifier " - "(usually 'extern')" % (decl.name,)) - -def _remove_line_directives(csource): - # _r_line_directive matches whole lines, without the final \n, if they - # start with '#line' with some spacing allowed, or '#NUMBER'. This - # function stores them away and replaces them with exactly the string - # '#line@N', where N is the index in the list 'line_directives'. 
- line_directives = [] - def replace(m): - i = len(line_directives) - line_directives.append(m.group()) - return '#line@%d' % i - csource = _r_line_directive.sub(replace, csource) - return csource, line_directives - -def _put_back_line_directives(csource, line_directives): - def replace(m): - s = m.group() - if not s.startswith('#line@'): - raise AssertionError("unexpected #line directive " - "(should have been processed and removed") - return line_directives[int(s[6:])] - return _r_line_directive.sub(replace, csource) - -def _preprocess(csource): - # First, remove the lines of the form '#line N "filename"' because - # the "filename" part could confuse the rest - csource, line_directives = _remove_line_directives(csource) - # Remove comments. NOTE: this only work because the cdef() section - # should not contain any string literals (except in line directives)! - def replace_keeping_newlines(m): - return ' ' + m.group().count('\n') * '\n' - csource = _r_comment.sub(replace_keeping_newlines, csource) - # Remove the "#define FOO x" lines - macros = {} - for match in _r_define.finditer(csource): - macroname, macrovalue = match.groups() - macrovalue = macrovalue.replace('\\\n', '').strip() - macros[macroname] = macrovalue - csource = _r_define.sub('', csource) - # - if pycparser.__version__ < '2.14': - csource = _workaround_for_old_pycparser(csource) - # - # BIG HACK: replace WINAPI or __stdcall with "volatile const". - # It doesn't make sense for the return type of a function to be - # "volatile volatile const", so we abuse it to detect __stdcall... - # Hack number 2 is that "int(volatile *fptr)();" is not valid C - # syntax, so we place the "volatile" before the opening parenthesis. 
- csource = _r_stdcall2.sub(' volatile volatile const(', csource) - csource = _r_stdcall1.sub(' volatile volatile const ', csource) - csource = _r_cdecl.sub(' ', csource) - # - # Replace `extern "Python"` with start/end markers - csource = _preprocess_extern_python(csource) - # - # Now there should not be any string literal left; warn if we get one - _warn_for_string_literal(csource) - # - # Replace "[...]" with "[__dotdotdotarray__]" - csource = _r_partial_array.sub('[__dotdotdotarray__]', csource) - # - # Replace "...}" with "__dotdotdotNUM__}". This construction should - # occur only at the end of enums; at the end of structs we have "...;}" - # and at the end of vararg functions "...);". Also replace "=...[,}]" - # with ",__dotdotdotNUM__[,}]": this occurs in the enums too, when - # giving an unknown value. - matches = list(_r_partial_enum.finditer(csource)) - for number, match in enumerate(reversed(matches)): - p = match.start() - if csource[p] == '=': - p2 = csource.find('...', p, match.end()) - assert p2 > p - csource = '%s,__dotdotdot%d__ %s' % (csource[:p], number, - csource[p2+3:]) - else: - assert csource[p:p+3] == '...' - csource = '%s __dotdotdot%d__ %s' % (csource[:p], number, - csource[p+3:]) - # Replace "int ..." or "unsigned long int..." with "__dotdotdotint__" - csource = _r_int_dotdotdot.sub(' __dotdotdotint__ ', csource) - # Replace "float ..." or "double..." with "__dotdotdotfloat__" - csource = _r_float_dotdotdot.sub(' __dotdotdotfloat__ ', csource) - # Replace all remaining "..." with the same name, "__dotdotdot__", - # which is declared with a typedef for the purpose of C parsing. - csource = csource.replace('...', ' __dotdotdot__ ') - # Finally, put back the line directives - csource = _put_back_line_directives(csource, line_directives) - return csource, macros - -def _common_type_names(csource): - # Look in the source for what looks like usages of types from the - # list of common types. 
A "usage" is approximated here as the - # appearance of the word, minus a "definition" of the type, which - # is the last word in a "typedef" statement. Approximative only - # but should be fine for all the common types. - look_for_words = set(COMMON_TYPES) - look_for_words.add(';') - look_for_words.add(',') - look_for_words.add('(') - look_for_words.add(')') - look_for_words.add('typedef') - words_used = set() - is_typedef = False - paren = 0 - previous_word = '' - for word in _r_words.findall(csource): - if word in look_for_words: - if word == ';': - if is_typedef: - words_used.discard(previous_word) - look_for_words.discard(previous_word) - is_typedef = False - elif word == 'typedef': - is_typedef = True - paren = 0 - elif word == '(': - paren += 1 - elif word == ')': - paren -= 1 - elif word == ',': - if is_typedef and paren == 0: - words_used.discard(previous_word) - look_for_words.discard(previous_word) - else: # word in COMMON_TYPES - words_used.add(word) - previous_word = word - return words_used - - -class Parser(object): - - def __init__(self): - self._declarations = {} - self._included_declarations = set() - self._anonymous_counter = 0 - self._structnode2type = weakref.WeakKeyDictionary() - self._options = {} - self._int_constants = {} - self._recomplete = [] - self._uses_new_feature = None - - def _parse(self, csource): - csource, macros = _preprocess(csource) - # XXX: for more efficiency we would need to poke into the - # internals of CParser... 
the following registers the - # typedefs, because their presence or absence influences the - # parsing itself (but what they are typedef'ed to plays no role) - ctn = _common_type_names(csource) - typenames = [] - for name in sorted(self._declarations): - if name.startswith('typedef '): - name = name[8:] - typenames.append(name) - ctn.discard(name) - typenames += sorted(ctn) - # - csourcelines = [] - csourcelines.append('# 1 ""') - for typename in typenames: - csourcelines.append('typedef int %s;' % typename) - csourcelines.append('typedef int __dotdotdotint__, __dotdotdotfloat__,' - ' __dotdotdot__;') - # this forces pycparser to consider the following in the file - # called from line 1 - csourcelines.append('# 1 "%s"' % (CDEF_SOURCE_STRING,)) - csourcelines.append(csource) - csourcelines.append('') # see test_missing_newline_bug - fullcsource = '\n'.join(csourcelines) - if lock is not None: - lock.acquire() # pycparser is not thread-safe... - try: - ast = _get_parser().parse(fullcsource) - except pycparser.c_parser.ParseError as e: - self.convert_pycparser_error(e, csource) - finally: - if lock is not None: - lock.release() - # csource will be used to find buggy source text - return ast, macros, csource - - def _convert_pycparser_error(self, e, csource): - # xxx look for ":NUM:" at the start of str(e) - # and interpret that as a line number. This will not work if - # the user gives explicit ``# NUM "FILE"`` directives. 
- line = None - msg = str(e) - match = re.match(r"%s:(\d+):" % (CDEF_SOURCE_STRING,), msg) - if match: - linenum = int(match.group(1), 10) - csourcelines = csource.splitlines() - if 1 <= linenum <= len(csourcelines): - line = csourcelines[linenum-1] - return line - - def convert_pycparser_error(self, e, csource): - line = self._convert_pycparser_error(e, csource) - - msg = str(e) - if line: - msg = 'cannot parse "%s"\n%s' % (line.strip(), msg) - else: - msg = 'parse error\n%s' % (msg,) - raise CDefError(msg) - - def parse(self, csource, override=False, packed=False, pack=None, - dllexport=False): - if packed: - if packed != True: - raise ValueError("'packed' should be False or True; use " - "'pack' to give another value") - if pack: - raise ValueError("cannot give both 'pack' and 'packed'") - pack = 1 - elif pack: - if pack & (pack - 1): - raise ValueError("'pack' must be a power of two, not %r" % - (pack,)) - else: - pack = 0 - prev_options = self._options - try: - self._options = {'override': override, - 'packed': pack, - 'dllexport': dllexport} - self._internal_parse(csource) - finally: - self._options = prev_options - - def _internal_parse(self, csource): - ast, macros, csource = self._parse(csource) - # add the macros - self._process_macros(macros) - # find the first "__dotdotdot__" and use that as a separator - # between the repeated typedefs and the real csource - iterator = iter(ast.ext) - for decl in iterator: - if decl.name == '__dotdotdot__': - break - else: - assert 0 - current_decl = None - # - try: - self._inside_extern_python = '__cffi_extern_python_stop' - for decl in iterator: - current_decl = decl - if isinstance(decl, pycparser.c_ast.Decl): - self._parse_decl(decl) - elif isinstance(decl, pycparser.c_ast.Typedef): - if not decl.name: - raise CDefError("typedef does not declare any name", - decl) - quals = 0 - if (isinstance(decl.type.type, pycparser.c_ast.IdentifierType) and - decl.type.type.names[-1].startswith('__dotdotdot')): - realtype = 
self._get_unknown_type(decl) - elif (isinstance(decl.type, pycparser.c_ast.PtrDecl) and - isinstance(decl.type.type, pycparser.c_ast.TypeDecl) and - isinstance(decl.type.type.type, - pycparser.c_ast.IdentifierType) and - decl.type.type.type.names[-1].startswith('__dotdotdot')): - realtype = self._get_unknown_ptr_type(decl) - else: - realtype, quals = self._get_type_and_quals( - decl.type, name=decl.name, partial_length_ok=True, - typedef_example="*(%s *)0" % (decl.name,)) - self._declare('typedef ' + decl.name, realtype, quals=quals) - elif decl.__class__.__name__ == 'Pragma': - # skip pragma, only in pycparser 2.15 - import warnings - warnings.warn( - "#pragma in cdef() are entirely ignored. " - "They should be removed for now, otherwise your " - "code might behave differently in a future version " - "of CFFI if #pragma support gets added. Note that " - "'#pragma pack' needs to be replaced with the " - "'packed' keyword argument to cdef().") - else: - raise CDefError("unexpected <%s>: this construct is valid " - "C but not valid in cdef()" % - decl.__class__.__name__, decl) - except CDefError as e: - if len(e.args) == 1: - e.args = e.args + (current_decl,) - raise - except FFIError as e: - msg = self._convert_pycparser_error(e, csource) - if msg: - e.args = (e.args[0] + "\n *** Err: %s" % msg,) - raise - - def _add_constants(self, key, val): - if key in self._int_constants: - if self._int_constants[key] == val: - return # ignore identical double declarations - raise FFIError( - "multiple declarations of constant: %s" % (key,)) - self._int_constants[key] = val - - def _add_integer_constant(self, name, int_str): - int_str = int_str.lower().rstrip("ul") - neg = int_str.startswith('-') - if neg: - int_str = int_str[1:] - # "010" is not valid oct in py3 - if (int_str.startswith("0") and int_str != '0' - and not int_str.startswith("0x")): - int_str = "0o" + int_str[1:] - pyvalue = int(int_str, 0) - if neg: - pyvalue = -pyvalue - self._add_constants(name, pyvalue) - 
self._declare('macro ' + name, pyvalue) - - def _process_macros(self, macros): - for key, value in macros.items(): - value = value.strip() - if _r_int_literal.match(value): - self._add_integer_constant(key, value) - elif value == '...': - self._declare('macro ' + key, value) - else: - raise CDefError( - 'only supports one of the following syntax:\n' - ' #define %s ... (literally dot-dot-dot)\n' - ' #define %s NUMBER (with NUMBER an integer' - ' constant, decimal/hex/octal)\n' - 'got:\n' - ' #define %s %s' - % (key, key, key, value)) - - def _declare_function(self, tp, quals, decl): - tp = self._get_type_pointer(tp, quals) - if self._options.get('dllexport'): - tag = 'dllexport_python ' - elif self._inside_extern_python == '__cffi_extern_python_start': - tag = 'extern_python ' - elif self._inside_extern_python == '__cffi_extern_python_plus_c_start': - tag = 'extern_python_plus_c ' - else: - tag = 'function ' - self._declare(tag + decl.name, tp) - - def _parse_decl(self, decl): - node = decl.type - if isinstance(node, pycparser.c_ast.FuncDecl): - tp, quals = self._get_type_and_quals(node, name=decl.name) - assert isinstance(tp, model.RawFunctionType) - self._declare_function(tp, quals, decl) - else: - if isinstance(node, pycparser.c_ast.Struct): - self._get_struct_union_enum_type('struct', node) - elif isinstance(node, pycparser.c_ast.Union): - self._get_struct_union_enum_type('union', node) - elif isinstance(node, pycparser.c_ast.Enum): - self._get_struct_union_enum_type('enum', node) - elif not decl.name: - raise CDefError("construct does not declare any variable", - decl) - # - if decl.name: - tp, quals = self._get_type_and_quals(node, - partial_length_ok=True) - if tp.is_raw_function: - self._declare_function(tp, quals, decl) - elif (tp.is_integer_type() and - hasattr(decl, 'init') and - hasattr(decl.init, 'value') and - _r_int_literal.match(decl.init.value)): - self._add_integer_constant(decl.name, decl.init.value) - elif (tp.is_integer_type() and - 
isinstance(decl.init, pycparser.c_ast.UnaryOp) and - decl.init.op == '-' and - hasattr(decl.init.expr, 'value') and - _r_int_literal.match(decl.init.expr.value)): - self._add_integer_constant(decl.name, - '-' + decl.init.expr.value) - elif (tp is model.void_type and - decl.name.startswith('__cffi_extern_python_')): - # hack: `extern "Python"` in the C source is replaced - # with "void __cffi_extern_python_start;" and - # "void __cffi_extern_python_stop;" - self._inside_extern_python = decl.name - else: - if self._inside_extern_python !='__cffi_extern_python_stop': - raise CDefError( - "cannot declare constants or " - "variables with 'extern \"Python\"'") - if (quals & model.Q_CONST) and not tp.is_array_type: - self._declare('constant ' + decl.name, tp, quals=quals) - else: - _warn_for_non_extern_non_static_global_variable(decl) - self._declare('variable ' + decl.name, tp, quals=quals) - - def parse_type(self, cdecl): - return self.parse_type_and_quals(cdecl)[0] - - def parse_type_and_quals(self, cdecl): - ast, macros = self._parse('void __dummy(\n%s\n);' % cdecl)[:2] - assert not macros - exprnode = ast.ext[-1].type.args.params[0] - if isinstance(exprnode, pycparser.c_ast.ID): - raise CDefError("unknown identifier '%s'" % (exprnode.name,)) - return self._get_type_and_quals(exprnode.type) - - def _declare(self, name, obj, included=False, quals=0): - if name in self._declarations: - prevobj, prevquals = self._declarations[name] - if prevobj is obj and prevquals == quals: - return - if not self._options.get('override'): - raise FFIError( - "multiple declarations of %s (for interactive usage, " - "try cdef(xx, override=True))" % (name,)) - assert '__dotdotdot__' not in name.split() - self._declarations[name] = (obj, quals) - if included: - self._included_declarations.add(obj) - - def _extract_quals(self, type): - quals = 0 - if isinstance(type, (pycparser.c_ast.TypeDecl, - pycparser.c_ast.PtrDecl)): - if 'const' in type.quals: - quals |= model.Q_CONST - if 'volatile' 
in type.quals: - quals |= model.Q_VOLATILE - if 'restrict' in type.quals: - quals |= model.Q_RESTRICT - return quals - - def _get_type_pointer(self, type, quals, declname=None): - if isinstance(type, model.RawFunctionType): - return type.as_function_pointer() - if (isinstance(type, model.StructOrUnionOrEnum) and - type.name.startswith('$') and type.name[1:].isdigit() and - type.forcename is None and declname is not None): - return model.NamedPointerType(type, declname, quals) - return model.PointerType(type, quals) - - def _get_type_and_quals(self, typenode, name=None, partial_length_ok=False, - typedef_example=None): - # first, dereference typedefs, if we have it already parsed, we're good - if (isinstance(typenode, pycparser.c_ast.TypeDecl) and - isinstance(typenode.type, pycparser.c_ast.IdentifierType) and - len(typenode.type.names) == 1 and - ('typedef ' + typenode.type.names[0]) in self._declarations): - tp, quals = self._declarations['typedef ' + typenode.type.names[0]] - quals |= self._extract_quals(typenode) - return tp, quals - # - if isinstance(typenode, pycparser.c_ast.ArrayDecl): - # array type - if typenode.dim is None: - length = None - else: - length = self._parse_constant( - typenode.dim, partial_length_ok=partial_length_ok) - # a hack: in 'typedef int foo_t[...][...];', don't use '...' as - # the length but use directly the C expression that would be - # generated by recompiler.py. 
This lets the typedef be used in - # many more places within recompiler.py - if typedef_example is not None: - if length == '...': - length = '_cffi_array_len(%s)' % (typedef_example,) - typedef_example = "*" + typedef_example - # - tp, quals = self._get_type_and_quals(typenode.type, - partial_length_ok=partial_length_ok, - typedef_example=typedef_example) - return model.ArrayType(tp, length), quals - # - if isinstance(typenode, pycparser.c_ast.PtrDecl): - # pointer type - itemtype, itemquals = self._get_type_and_quals(typenode.type) - tp = self._get_type_pointer(itemtype, itemquals, declname=name) - quals = self._extract_quals(typenode) - return tp, quals - # - if isinstance(typenode, pycparser.c_ast.TypeDecl): - quals = self._extract_quals(typenode) - type = typenode.type - if isinstance(type, pycparser.c_ast.IdentifierType): - # assume a primitive type. get it from .names, but reduce - # synonyms to a single chosen combination - names = list(type.names) - if names != ['signed', 'char']: # keep this unmodified - prefixes = {} - while names: - name = names[0] - if name in ('short', 'long', 'signed', 'unsigned'): - prefixes[name] = prefixes.get(name, 0) + 1 - del names[0] - else: - break - # ignore the 'signed' prefix below, and reorder the others - newnames = [] - for prefix in ('unsigned', 'short', 'long'): - for i in range(prefixes.get(prefix, 0)): - newnames.append(prefix) - if not names: - names = ['int'] # implicitly - if names == ['int']: # but kill it if 'short' or 'long' - if 'short' in prefixes or 'long' in prefixes: - names = [] - names = newnames + names - ident = ' '.join(names) - if ident == 'void': - return model.void_type, quals - if ident == '__dotdotdot__': - raise FFIError(':%d: bad usage of "..."' % - typenode.coord.line) - tp0, quals0 = resolve_common_type(self, ident) - return tp0, (quals | quals0) - # - if isinstance(type, pycparser.c_ast.Struct): - # 'struct foobar' - tp = self._get_struct_union_enum_type('struct', type, name) - return tp, 
quals - # - if isinstance(type, pycparser.c_ast.Union): - # 'union foobar' - tp = self._get_struct_union_enum_type('union', type, name) - return tp, quals - # - if isinstance(type, pycparser.c_ast.Enum): - # 'enum foobar' - tp = self._get_struct_union_enum_type('enum', type, name) - return tp, quals - # - if isinstance(typenode, pycparser.c_ast.FuncDecl): - # a function type - return self._parse_function_type(typenode, name), 0 - # - # nested anonymous structs or unions end up here - if isinstance(typenode, pycparser.c_ast.Struct): - return self._get_struct_union_enum_type('struct', typenode, name, - nested=True), 0 - if isinstance(typenode, pycparser.c_ast.Union): - return self._get_struct_union_enum_type('union', typenode, name, - nested=True), 0 - # - raise FFIError(":%d: bad or unsupported type declaration" % - typenode.coord.line) - - def _parse_function_type(self, typenode, funcname=None): - params = list(getattr(typenode.args, 'params', [])) - for i, arg in enumerate(params): - if not hasattr(arg, 'type'): - raise CDefError("%s arg %d: unknown type '%s'" - " (if you meant to use the old C syntax of giving" - " untyped arguments, it is not supported)" - % (funcname or 'in expression', i + 1, - getattr(arg, 'name', '?'))) - ellipsis = ( - len(params) > 0 and - isinstance(params[-1].type, pycparser.c_ast.TypeDecl) and - isinstance(params[-1].type.type, - pycparser.c_ast.IdentifierType) and - params[-1].type.type.names == ['__dotdotdot__']) - if ellipsis: - params.pop() - if not params: - raise CDefError( - "%s: a function with only '(...)' as argument" - " is not correct C" % (funcname or 'in expression')) - args = [self._as_func_arg(*self._get_type_and_quals(argdeclnode.type)) - for argdeclnode in params] - if not ellipsis and args == [model.void_type]: - args = [] - result, quals = self._get_type_and_quals(typenode.type) - # the 'quals' on the result type are ignored. 
HACK: we absure them - # to detect __stdcall functions: we textually replace "__stdcall" - # with "volatile volatile const" above. - abi = None - if hasattr(typenode.type, 'quals'): # else, probable syntax error anyway - if typenode.type.quals[-3:] == ['volatile', 'volatile', 'const']: - abi = '__stdcall' - return model.RawFunctionType(tuple(args), result, ellipsis, abi) - - def _as_func_arg(self, type, quals): - if isinstance(type, model.ArrayType): - return model.PointerType(type.item, quals) - elif isinstance(type, model.RawFunctionType): - return type.as_function_pointer() - else: - return type - - def _get_struct_union_enum_type(self, kind, type, name=None, nested=False): - # First, a level of caching on the exact 'type' node of the AST. - # This is obscure, but needed because pycparser "unrolls" declarations - # such as "typedef struct { } foo_t, *foo_p" and we end up with - # an AST that is not a tree, but a DAG, with the "type" node of the - # two branches foo_t and foo_p of the trees being the same node. - # It's a bit silly but detecting "DAG-ness" in the AST tree seems - # to be the only way to distinguish this case from two independent - # structs. See test_struct_with_two_usages. - try: - return self._structnode2type[type] - except KeyError: - pass - # - # Note that this must handle parsing "struct foo" any number of - # times and always return the same StructType object. Additionally, - # one of these times (not necessarily the first), the fields of - # the struct can be specified with "struct foo { ...fields... }". - # If no name is given, then we have to create a new anonymous struct - # with no caching; in this case, the fields are either specified - # right now or never. - # - force_name = name - name = type.name - # - # get the type or create it if needed - if name is None: - # 'force_name' is used to guess a more readable name for - # anonymous structs, for the common case "typedef struct { } foo". 
- if force_name is not None: - explicit_name = '$%s' % force_name - else: - self._anonymous_counter += 1 - explicit_name = '$%d' % self._anonymous_counter - tp = None - else: - explicit_name = name - key = '%s %s' % (kind, name) - tp, _ = self._declarations.get(key, (None, None)) - # - if tp is None: - if kind == 'struct': - tp = model.StructType(explicit_name, None, None, None) - elif kind == 'union': - tp = model.UnionType(explicit_name, None, None, None) - elif kind == 'enum': - if explicit_name == '__dotdotdot__': - raise CDefError("Enums cannot be declared with ...") - tp = self._build_enum_type(explicit_name, type.values) - else: - raise AssertionError("kind = %r" % (kind,)) - if name is not None: - self._declare(key, tp) - else: - if kind == 'enum' and type.values is not None: - raise NotImplementedError( - "enum %s: the '{}' declaration should appear on the first " - "time the enum is mentioned, not later" % explicit_name) - if not tp.forcename: - tp.force_the_name(force_name) - if tp.forcename and '$' in tp.name: - self._declare('anonymous %s' % tp.forcename, tp) - # - self._structnode2type[type] = tp - # - # enums: done here - if kind == 'enum': - return tp - # - # is there a 'type.decls'? If yes, then this is the place in the - # C sources that declare the fields. If no, then just return the - # existing type, possibly still incomplete. - if type.decls is None: - return tp - # - if tp.fldnames is not None: - raise CDefError("duplicate declaration of struct %s" % name) - fldnames = [] - fldtypes = [] - fldbitsize = [] - fldquals = [] - for decl in type.decls: - if (isinstance(decl.type, pycparser.c_ast.IdentifierType) and - ''.join(decl.type.names) == '__dotdotdot__'): - # XXX pycparser is inconsistent: 'names' should be a list - # of strings, but is sometimes just one string. Use - # str.join() as a way to cope with both. 
- self._make_partial(tp, nested) - continue - if decl.bitsize is None: - bitsize = -1 - else: - bitsize = self._parse_constant(decl.bitsize) - self._partial_length = False - type, fqual = self._get_type_and_quals(decl.type, - partial_length_ok=True) - if self._partial_length: - self._make_partial(tp, nested) - if isinstance(type, model.StructType) and type.partial: - self._make_partial(tp, nested) - fldnames.append(decl.name or '') - fldtypes.append(type) - fldbitsize.append(bitsize) - fldquals.append(fqual) - tp.fldnames = tuple(fldnames) - tp.fldtypes = tuple(fldtypes) - tp.fldbitsize = tuple(fldbitsize) - tp.fldquals = tuple(fldquals) - if fldbitsize != [-1] * len(fldbitsize): - if isinstance(tp, model.StructType) and tp.partial: - raise NotImplementedError("%s: using both bitfields and '...;'" - % (tp,)) - tp.packed = self._options.get('packed') - if tp.completed: # must be re-completed: it is not opaque any more - tp.completed = 0 - self._recomplete.append(tp) - return tp - - def _make_partial(self, tp, nested): - if not isinstance(tp, model.StructOrUnion): - raise CDefError("%s cannot be partial" % (tp,)) - if not tp.has_c_name() and not nested: - raise NotImplementedError("%s is partial but has no C name" %(tp,)) - tp.partial = True - - def _parse_constant(self, exprnode, partial_length_ok=False): - # for now, limited to expressions that are an immediate number - # or positive/negative number - if isinstance(exprnode, pycparser.c_ast.Constant): - s = exprnode.value - if '0' <= s[0] <= '9': - s = s.rstrip('uUlL') - try: - if s.startswith('0'): - return int(s, 8) - else: - return int(s, 10) - except ValueError: - if len(s) > 1: - if s.lower()[0:2] == '0x': - return int(s, 16) - elif s.lower()[0:2] == '0b': - return int(s, 2) - raise CDefError("invalid constant %r" % (s,)) - elif s[0] == "'" and s[-1] == "'" and ( - len(s) == 3 or (len(s) == 4 and s[1] == "\\")): - return ord(s[-2]) - else: - raise CDefError("invalid constant %r" % (s,)) - # - if 
(isinstance(exprnode, pycparser.c_ast.UnaryOp) and - exprnode.op == '+'): - return self._parse_constant(exprnode.expr) - # - if (isinstance(exprnode, pycparser.c_ast.UnaryOp) and - exprnode.op == '-'): - return -self._parse_constant(exprnode.expr) - # load previously defined int constant - if (isinstance(exprnode, pycparser.c_ast.ID) and - exprnode.name in self._int_constants): - return self._int_constants[exprnode.name] - # - if (isinstance(exprnode, pycparser.c_ast.ID) and - exprnode.name == '__dotdotdotarray__'): - if partial_length_ok: - self._partial_length = True - return '...' - raise FFIError(":%d: unsupported '[...]' here, cannot derive " - "the actual array length in this context" - % exprnode.coord.line) - # - if isinstance(exprnode, pycparser.c_ast.BinaryOp): - left = self._parse_constant(exprnode.left) - right = self._parse_constant(exprnode.right) - if exprnode.op == '+': - return left + right - elif exprnode.op == '-': - return left - right - elif exprnode.op == '*': - return left * right - elif exprnode.op == '/': - return self._c_div(left, right) - elif exprnode.op == '%': - return left - self._c_div(left, right) * right - elif exprnode.op == '<<': - return left << right - elif exprnode.op == '>>': - return left >> right - elif exprnode.op == '&': - return left & right - elif exprnode.op == '|': - return left | right - elif exprnode.op == '^': - return left ^ right - # - raise FFIError(":%d: unsupported expression: expected a " - "simple numeric constant" % exprnode.coord.line) - - def _c_div(self, a, b): - result = a // b - if ((a < 0) ^ (b < 0)) and (a % b) != 0: - result += 1 - return result - - def _build_enum_type(self, explicit_name, decls): - if decls is not None: - partial = False - enumerators = [] - enumvalues = [] - nextenumvalue = 0 - for enum in decls.enumerators: - if _r_enum_dotdotdot.match(enum.name): - partial = True - continue - if enum.value is not None: - nextenumvalue = self._parse_constant(enum.value) - 
enumerators.append(enum.name) - enumvalues.append(nextenumvalue) - self._add_constants(enum.name, nextenumvalue) - nextenumvalue += 1 - enumerators = tuple(enumerators) - enumvalues = tuple(enumvalues) - tp = model.EnumType(explicit_name, enumerators, enumvalues) - tp.partial = partial - else: # opaque enum - tp = model.EnumType(explicit_name, (), ()) - return tp - - def include(self, other): - for name, (tp, quals) in other._declarations.items(): - if name.startswith('anonymous $enum_$'): - continue # fix for test_anonymous_enum_include - kind = name.split(' ', 1)[0] - if kind in ('struct', 'union', 'enum', 'anonymous', 'typedef'): - self._declare(name, tp, included=True, quals=quals) - for k, v in other._int_constants.items(): - self._add_constants(k, v) - - def _get_unknown_type(self, decl): - typenames = decl.type.type.names - if typenames == ['__dotdotdot__']: - return model.unknown_type(decl.name) - - if typenames == ['__dotdotdotint__']: - if self._uses_new_feature is None: - self._uses_new_feature = "'typedef int... %s'" % decl.name - return model.UnknownIntegerType(decl.name) - - if typenames == ['__dotdotdotfloat__']: - # note: not for 'long double' so far - if self._uses_new_feature is None: - self._uses_new_feature = "'typedef float... %s'" % decl.name - return model.UnknownFloatType(decl.name) - - raise FFIError(':%d: unsupported usage of "..." in typedef' - % decl.coord.line) - - def _get_unknown_ptr_type(self, decl): - if decl.type.type.type.names == ['__dotdotdot__']: - return model.unknown_ptr_type(decl.name) - raise FFIError(':%d: unsupported usage of "..." 
in typedef' - % decl.coord.line) diff --git a/pptx-env/lib/python3.12/site-packages/cffi/error.py b/pptx-env/lib/python3.12/site-packages/cffi/error.py deleted file mode 100644 index 0a27247c..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/error.py +++ /dev/null @@ -1,31 +0,0 @@ - -class FFIError(Exception): - __module__ = 'cffi' - -class CDefError(Exception): - __module__ = 'cffi' - def __str__(self): - try: - current_decl = self.args[1] - filename = current_decl.coord.file - linenum = current_decl.coord.line - prefix = '%s:%d: ' % (filename, linenum) - except (AttributeError, TypeError, IndexError): - prefix = '' - return '%s%s' % (prefix, self.args[0]) - -class VerificationError(Exception): - """ An error raised when verification fails - """ - __module__ = 'cffi' - -class VerificationMissing(Exception): - """ An error raised when incomplete structures are passed into - cdef, but no verification has been done - """ - __module__ = 'cffi' - -class PkgConfigError(Exception): - """ An error raised for missing modules in pkg-config - """ - __module__ = 'cffi' diff --git a/pptx-env/lib/python3.12/site-packages/cffi/ffiplatform.py b/pptx-env/lib/python3.12/site-packages/cffi/ffiplatform.py deleted file mode 100644 index adca28f1..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/ffiplatform.py +++ /dev/null @@ -1,113 +0,0 @@ -import sys, os -from .error import VerificationError - - -LIST_OF_FILE_NAMES = ['sources', 'include_dirs', 'library_dirs', - 'extra_objects', 'depends'] - -def get_extension(srcfilename, modname, sources=(), **kwds): - from cffi._shimmed_dist_utils import Extension - allsources = [srcfilename] - for src in sources: - allsources.append(os.path.normpath(src)) - return Extension(name=modname, sources=allsources, **kwds) - -def compile(tmpdir, ext, compiler_verbose=0, debug=None): - """Compile a C extension module using distutils.""" - - saved_environ = os.environ.copy() - try: - outputfilename = _build(tmpdir, ext, compiler_verbose, 
debug) - outputfilename = os.path.abspath(outputfilename) - finally: - # workaround for a distutils bugs where some env vars can - # become longer and longer every time it is used - for key, value in saved_environ.items(): - if os.environ.get(key) != value: - os.environ[key] = value - return outputfilename - -def _build(tmpdir, ext, compiler_verbose=0, debug=None): - # XXX compact but horrible :-( - from cffi._shimmed_dist_utils import Distribution, CompileError, LinkError, set_threshold, set_verbosity - - dist = Distribution({'ext_modules': [ext]}) - dist.parse_config_files() - options = dist.get_option_dict('build_ext') - if debug is None: - debug = sys.flags.debug - options['debug'] = ('ffiplatform', debug) - options['force'] = ('ffiplatform', True) - options['build_lib'] = ('ffiplatform', tmpdir) - options['build_temp'] = ('ffiplatform', tmpdir) - # - try: - old_level = set_threshold(0) or 0 - try: - set_verbosity(compiler_verbose) - dist.run_command('build_ext') - cmd_obj = dist.get_command_obj('build_ext') - [soname] = cmd_obj.get_outputs() - finally: - set_threshold(old_level) - except (CompileError, LinkError) as e: - raise VerificationError('%s: %s' % (e.__class__.__name__, e)) - # - return soname - -try: - from os.path import samefile -except ImportError: - def samefile(f1, f2): - return os.path.abspath(f1) == os.path.abspath(f2) - -def maybe_relative_path(path): - if not os.path.isabs(path): - return path # already relative - dir = path - names = [] - while True: - prevdir = dir - dir, name = os.path.split(prevdir) - if dir == prevdir or not dir: - return path # failed to make it relative - names.append(name) - try: - if samefile(dir, os.curdir): - names.reverse() - return os.path.join(*names) - except OSError: - pass - -# ____________________________________________________________ - -try: - int_or_long = (int, long) - import cStringIO -except NameError: - int_or_long = int # Python 3 - import io as cStringIO - -def _flatten(x, f): - if isinstance(x, 
str): - f.write('%ds%s' % (len(x), x)) - elif isinstance(x, dict): - keys = sorted(x.keys()) - f.write('%dd' % len(keys)) - for key in keys: - _flatten(key, f) - _flatten(x[key], f) - elif isinstance(x, (list, tuple)): - f.write('%dl' % len(x)) - for value in x: - _flatten(value, f) - elif isinstance(x, int_or_long): - f.write('%di' % (x,)) - else: - raise TypeError( - "the keywords to verify() contains unsupported object %r" % (x,)) - -def flatten(x): - f = cStringIO.StringIO() - _flatten(x, f) - return f.getvalue() diff --git a/pptx-env/lib/python3.12/site-packages/cffi/lock.py b/pptx-env/lib/python3.12/site-packages/cffi/lock.py deleted file mode 100644 index db91b715..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/lock.py +++ /dev/null @@ -1,30 +0,0 @@ -import sys - -if sys.version_info < (3,): - try: - from thread import allocate_lock - except ImportError: - from dummy_thread import allocate_lock -else: - try: - from _thread import allocate_lock - except ImportError: - from _dummy_thread import allocate_lock - - -##import sys -##l1 = allocate_lock - -##class allocate_lock(object): -## def __init__(self): -## self._real = l1() -## def __enter__(self): -## for i in range(4, 0, -1): -## print sys._getframe(i).f_code -## print -## return self._real.__enter__() -## def __exit__(self, *args): -## return self._real.__exit__(*args) -## def acquire(self, f): -## assert f is False -## return self._real.acquire(f) diff --git a/pptx-env/lib/python3.12/site-packages/cffi/model.py b/pptx-env/lib/python3.12/site-packages/cffi/model.py deleted file mode 100644 index e5f4cae3..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/model.py +++ /dev/null @@ -1,618 +0,0 @@ -import types -import weakref - -from .lock import allocate_lock -from .error import CDefError, VerificationError, VerificationMissing - -# type qualifiers -Q_CONST = 0x01 -Q_RESTRICT = 0x02 -Q_VOLATILE = 0x04 - -def qualify(quals, replace_with): - if quals & Q_CONST: - replace_with = ' const ' + 
replace_with.lstrip() - if quals & Q_VOLATILE: - replace_with = ' volatile ' + replace_with.lstrip() - if quals & Q_RESTRICT: - # It seems that __restrict is supported by gcc and msvc. - # If you hit some different compiler, add a #define in - # _cffi_include.h for it (and in its copies, documented there) - replace_with = ' __restrict ' + replace_with.lstrip() - return replace_with - - -class BaseTypeByIdentity(object): - is_array_type = False - is_raw_function = False - - def get_c_name(self, replace_with='', context='a C file', quals=0): - result = self.c_name_with_marker - assert result.count('&') == 1 - # some logic duplication with ffi.getctype()... :-( - replace_with = replace_with.strip() - if replace_with: - if replace_with.startswith('*') and '&[' in result: - replace_with = '(%s)' % replace_with - elif not replace_with[0] in '[(': - replace_with = ' ' + replace_with - replace_with = qualify(quals, replace_with) - result = result.replace('&', replace_with) - if '$' in result: - raise VerificationError( - "cannot generate '%s' in %s: unknown type name" - % (self._get_c_name(), context)) - return result - - def _get_c_name(self): - return self.c_name_with_marker.replace('&', '') - - def has_c_name(self): - return '$' not in self._get_c_name() - - def is_integer_type(self): - return False - - def get_cached_btype(self, ffi, finishlist, can_delay=False): - try: - BType = ffi._cached_btypes[self] - except KeyError: - BType = self.build_backend_type(ffi, finishlist) - BType2 = ffi._cached_btypes.setdefault(self, BType) - assert BType2 is BType - return BType - - def __repr__(self): - return '<%s>' % (self._get_c_name(),) - - def _get_items(self): - return [(name, getattr(self, name)) for name in self._attrs_] - - -class BaseType(BaseTypeByIdentity): - - def __eq__(self, other): - return (self.__class__ == other.__class__ and - self._get_items() == other._get_items()) - - def __ne__(self, other): - return not self == other - - def __hash__(self): - return 
hash((self.__class__, tuple(self._get_items()))) - - -class VoidType(BaseType): - _attrs_ = () - - def __init__(self): - self.c_name_with_marker = 'void&' - - def build_backend_type(self, ffi, finishlist): - return global_cache(self, ffi, 'new_void_type') - -void_type = VoidType() - - -class BasePrimitiveType(BaseType): - def is_complex_type(self): - return False - - -class PrimitiveType(BasePrimitiveType): - _attrs_ = ('name',) - - ALL_PRIMITIVE_TYPES = { - 'char': 'c', - 'short': 'i', - 'int': 'i', - 'long': 'i', - 'long long': 'i', - 'signed char': 'i', - 'unsigned char': 'i', - 'unsigned short': 'i', - 'unsigned int': 'i', - 'unsigned long': 'i', - 'unsigned long long': 'i', - 'float': 'f', - 'double': 'f', - 'long double': 'f', - '_cffi_float_complex_t': 'j', - '_cffi_double_complex_t': 'j', - '_Bool': 'i', - # the following types are not primitive in the C sense - 'wchar_t': 'c', - 'char16_t': 'c', - 'char32_t': 'c', - 'int8_t': 'i', - 'uint8_t': 'i', - 'int16_t': 'i', - 'uint16_t': 'i', - 'int32_t': 'i', - 'uint32_t': 'i', - 'int64_t': 'i', - 'uint64_t': 'i', - 'int_least8_t': 'i', - 'uint_least8_t': 'i', - 'int_least16_t': 'i', - 'uint_least16_t': 'i', - 'int_least32_t': 'i', - 'uint_least32_t': 'i', - 'int_least64_t': 'i', - 'uint_least64_t': 'i', - 'int_fast8_t': 'i', - 'uint_fast8_t': 'i', - 'int_fast16_t': 'i', - 'uint_fast16_t': 'i', - 'int_fast32_t': 'i', - 'uint_fast32_t': 'i', - 'int_fast64_t': 'i', - 'uint_fast64_t': 'i', - 'intptr_t': 'i', - 'uintptr_t': 'i', - 'intmax_t': 'i', - 'uintmax_t': 'i', - 'ptrdiff_t': 'i', - 'size_t': 'i', - 'ssize_t': 'i', - } - - def __init__(self, name): - assert name in self.ALL_PRIMITIVE_TYPES - self.name = name - self.c_name_with_marker = name + '&' - - def is_char_type(self): - return self.ALL_PRIMITIVE_TYPES[self.name] == 'c' - def is_integer_type(self): - return self.ALL_PRIMITIVE_TYPES[self.name] == 'i' - def is_float_type(self): - return self.ALL_PRIMITIVE_TYPES[self.name] == 'f' - def is_complex_type(self): 
- return self.ALL_PRIMITIVE_TYPES[self.name] == 'j' - - def build_backend_type(self, ffi, finishlist): - return global_cache(self, ffi, 'new_primitive_type', self.name) - - -class UnknownIntegerType(BasePrimitiveType): - _attrs_ = ('name',) - - def __init__(self, name): - self.name = name - self.c_name_with_marker = name + '&' - - def is_integer_type(self): - return True - - def build_backend_type(self, ffi, finishlist): - raise NotImplementedError("integer type '%s' can only be used after " - "compilation" % self.name) - -class UnknownFloatType(BasePrimitiveType): - _attrs_ = ('name', ) - - def __init__(self, name): - self.name = name - self.c_name_with_marker = name + '&' - - def build_backend_type(self, ffi, finishlist): - raise NotImplementedError("float type '%s' can only be used after " - "compilation" % self.name) - - -class BaseFunctionType(BaseType): - _attrs_ = ('args', 'result', 'ellipsis', 'abi') - - def __init__(self, args, result, ellipsis, abi=None): - self.args = args - self.result = result - self.ellipsis = ellipsis - self.abi = abi - # - reprargs = [arg._get_c_name() for arg in self.args] - if self.ellipsis: - reprargs.append('...') - reprargs = reprargs or ['void'] - replace_with = self._base_pattern % (', '.join(reprargs),) - if abi is not None: - replace_with = replace_with[:1] + abi + ' ' + replace_with[1:] - self.c_name_with_marker = ( - self.result.c_name_with_marker.replace('&', replace_with)) - - -class RawFunctionType(BaseFunctionType): - # Corresponds to a C type like 'int(int)', which is the C type of - # a function, but not a pointer-to-function. The backend has no - # notion of such a type; it's used temporarily by parsing. 
- _base_pattern = '(&)(%s)' - is_raw_function = True - - def build_backend_type(self, ffi, finishlist): - raise CDefError("cannot render the type %r: it is a function " - "type, not a pointer-to-function type" % (self,)) - - def as_function_pointer(self): - return FunctionPtrType(self.args, self.result, self.ellipsis, self.abi) - - -class FunctionPtrType(BaseFunctionType): - _base_pattern = '(*&)(%s)' - - def build_backend_type(self, ffi, finishlist): - result = self.result.get_cached_btype(ffi, finishlist) - args = [] - for tp in self.args: - args.append(tp.get_cached_btype(ffi, finishlist)) - abi_args = () - if self.abi == "__stdcall": - if not self.ellipsis: # __stdcall ignored for variadic funcs - try: - abi_args = (ffi._backend.FFI_STDCALL,) - except AttributeError: - pass - return global_cache(self, ffi, 'new_function_type', - tuple(args), result, self.ellipsis, *abi_args) - - def as_raw_function(self): - return RawFunctionType(self.args, self.result, self.ellipsis, self.abi) - - -class PointerType(BaseType): - _attrs_ = ('totype', 'quals') - - def __init__(self, totype, quals=0): - self.totype = totype - self.quals = quals - extra = " *&" - if totype.is_array_type: - extra = "(%s)" % (extra.lstrip(),) - extra = qualify(quals, extra) - self.c_name_with_marker = totype.c_name_with_marker.replace('&', extra) - - def build_backend_type(self, ffi, finishlist): - BItem = self.totype.get_cached_btype(ffi, finishlist, can_delay=True) - return global_cache(self, ffi, 'new_pointer_type', BItem) - -voidp_type = PointerType(void_type) - -def ConstPointerType(totype): - return PointerType(totype, Q_CONST) - -const_voidp_type = ConstPointerType(void_type) - - -class NamedPointerType(PointerType): - _attrs_ = ('totype', 'name') - - def __init__(self, totype, name, quals=0): - PointerType.__init__(self, totype, quals) - self.name = name - self.c_name_with_marker = name + '&' - - -class ArrayType(BaseType): - _attrs_ = ('item', 'length') - is_array_type = True - - def 
__init__(self, item, length): - self.item = item - self.length = length - # - if length is None: - brackets = '&[]' - elif length == '...': - brackets = '&[/*...*/]' - else: - brackets = '&[%s]' % length - self.c_name_with_marker = ( - self.item.c_name_with_marker.replace('&', brackets)) - - def length_is_unknown(self): - return isinstance(self.length, str) - - def resolve_length(self, newlength): - return ArrayType(self.item, newlength) - - def build_backend_type(self, ffi, finishlist): - if self.length_is_unknown(): - raise CDefError("cannot render the type %r: unknown length" % - (self,)) - self.item.get_cached_btype(ffi, finishlist) # force the item BType - BPtrItem = PointerType(self.item).get_cached_btype(ffi, finishlist) - return global_cache(self, ffi, 'new_array_type', BPtrItem, self.length) - -char_array_type = ArrayType(PrimitiveType('char'), None) - - -class StructOrUnionOrEnum(BaseTypeByIdentity): - _attrs_ = ('name',) - forcename = None - - def build_c_name_with_marker(self): - name = self.forcename or '%s %s' % (self.kind, self.name) - self.c_name_with_marker = name + '&' - - def force_the_name(self, forcename): - self.forcename = forcename - self.build_c_name_with_marker() - - def get_official_name(self): - assert self.c_name_with_marker.endswith('&') - return self.c_name_with_marker[:-1] - - -class StructOrUnion(StructOrUnionOrEnum): - fixedlayout = None - completed = 0 - partial = False - packed = 0 - - def __init__(self, name, fldnames, fldtypes, fldbitsize, fldquals=None): - self.name = name - self.fldnames = fldnames - self.fldtypes = fldtypes - self.fldbitsize = fldbitsize - self.fldquals = fldquals - self.build_c_name_with_marker() - - def anonymous_struct_fields(self): - if self.fldtypes is not None: - for name, type in zip(self.fldnames, self.fldtypes): - if name == '' and isinstance(type, StructOrUnion): - yield type - - def enumfields(self, expand_anonymous_struct_union=True): - fldquals = self.fldquals - if fldquals is None: - fldquals = 
(0,) * len(self.fldnames) - for name, type, bitsize, quals in zip(self.fldnames, self.fldtypes, - self.fldbitsize, fldquals): - if (name == '' and isinstance(type, StructOrUnion) - and expand_anonymous_struct_union): - # nested anonymous struct/union - for result in type.enumfields(): - yield result - else: - yield (name, type, bitsize, quals) - - def force_flatten(self): - # force the struct or union to have a declaration that lists - # directly all fields returned by enumfields(), flattening - # nested anonymous structs/unions. - names = [] - types = [] - bitsizes = [] - fldquals = [] - for name, type, bitsize, quals in self.enumfields(): - names.append(name) - types.append(type) - bitsizes.append(bitsize) - fldquals.append(quals) - self.fldnames = tuple(names) - self.fldtypes = tuple(types) - self.fldbitsize = tuple(bitsizes) - self.fldquals = tuple(fldquals) - - def get_cached_btype(self, ffi, finishlist, can_delay=False): - BType = StructOrUnionOrEnum.get_cached_btype(self, ffi, finishlist, - can_delay) - if not can_delay: - self.finish_backend_type(ffi, finishlist) - return BType - - def finish_backend_type(self, ffi, finishlist): - if self.completed: - if self.completed != 2: - raise NotImplementedError("recursive structure declaration " - "for '%s'" % (self.name,)) - return - BType = ffi._cached_btypes[self] - # - self.completed = 1 - # - if self.fldtypes is None: - pass # not completing it: it's an opaque struct - # - elif self.fixedlayout is None: - fldtypes = [tp.get_cached_btype(ffi, finishlist) - for tp in self.fldtypes] - lst = list(zip(self.fldnames, fldtypes, self.fldbitsize)) - extra_flags = () - if self.packed: - if self.packed == 1: - extra_flags = (8,) # SF_PACKED - else: - extra_flags = (0, self.packed) - ffi._backend.complete_struct_or_union(BType, lst, self, - -1, -1, *extra_flags) - # - else: - fldtypes = [] - fieldofs, fieldsize, totalsize, totalalignment = self.fixedlayout - for i in range(len(self.fldnames)): - fsize = fieldsize[i] - 
ftype = self.fldtypes[i] - # - if isinstance(ftype, ArrayType) and ftype.length_is_unknown(): - # fix the length to match the total size - BItemType = ftype.item.get_cached_btype(ffi, finishlist) - nlen, nrest = divmod(fsize, ffi.sizeof(BItemType)) - if nrest != 0: - self._verification_error( - "field '%s.%s' has a bogus size?" % ( - self.name, self.fldnames[i] or '{}')) - ftype = ftype.resolve_length(nlen) - self.fldtypes = (self.fldtypes[:i] + (ftype,) + - self.fldtypes[i+1:]) - # - BFieldType = ftype.get_cached_btype(ffi, finishlist) - if isinstance(ftype, ArrayType) and ftype.length is None: - assert fsize == 0 - else: - bitemsize = ffi.sizeof(BFieldType) - if bitemsize != fsize: - self._verification_error( - "field '%s.%s' is declared as %d bytes, but is " - "really %d bytes" % (self.name, - self.fldnames[i] or '{}', - bitemsize, fsize)) - fldtypes.append(BFieldType) - # - lst = list(zip(self.fldnames, fldtypes, self.fldbitsize, fieldofs)) - ffi._backend.complete_struct_or_union(BType, lst, self, - totalsize, totalalignment) - self.completed = 2 - - def _verification_error(self, msg): - raise VerificationError(msg) - - def check_not_partial(self): - if self.partial and self.fixedlayout is None: - raise VerificationMissing(self._get_c_name()) - - def build_backend_type(self, ffi, finishlist): - self.check_not_partial() - finishlist.append(self) - # - return global_cache(self, ffi, 'new_%s_type' % self.kind, - self.get_official_name(), key=self) - - -class StructType(StructOrUnion): - kind = 'struct' - - -class UnionType(StructOrUnion): - kind = 'union' - - -class EnumType(StructOrUnionOrEnum): - kind = 'enum' - partial = False - partial_resolved = False - - def __init__(self, name, enumerators, enumvalues, baseinttype=None): - self.name = name - self.enumerators = enumerators - self.enumvalues = enumvalues - self.baseinttype = baseinttype - self.build_c_name_with_marker() - - def force_the_name(self, forcename): - StructOrUnionOrEnum.force_the_name(self, 
diff --git a/pptx-env/lib/python3.12/site-packages/cffi/parse_c_type.h b/pptx-env/lib/python3.12/site-packages/cffi/parse_c_type.h
deleted file mode 100644
index 84e4ef85..00000000
diff --git a/pptx-env/lib/python3.12/site-packages/cffi/pkgconfig.py b/pptx-env/lib/python3.12/site-packages/cffi/pkgconfig.py
deleted file mode 100644
index 5c93f15a..00000000
diff --git a/pptx-env/lib/python3.12/site-packages/cffi/recompiler.py b/pptx-env/lib/python3.12/site-packages/cffi/recompiler.py
deleted file mode 100644
index 7734a348..00000000
self.target_is_python: - self._generate_cpy_constant_ctx(tp, name) - return - type_index = self._typesdict[tp.as_raw_function()] - numargs = len(tp.args) - if self.target_is_python: - meth_kind = OP_DLOPEN_FUNC - elif numargs == 0: - meth_kind = OP_CPYTHON_BLTN_N # 'METH_NOARGS' - elif numargs == 1: - meth_kind = OP_CPYTHON_BLTN_O # 'METH_O' - else: - meth_kind = OP_CPYTHON_BLTN_V # 'METH_VARARGS' - self._lsts["global"].append( - GlobalExpr(name, '_cffi_f_%s' % name, - CffiOp(meth_kind, type_index), - size='_cffi_d_%s' % name)) - - # ---------- - # named structs or unions - - def _field_type(self, tp_struct, field_name, tp_field): - if isinstance(tp_field, model.ArrayType): - actual_length = tp_field.length - if actual_length == '...': - ptr_struct_name = tp_struct.get_c_name('*') - actual_length = '_cffi_array_len(((%s)0)->%s)' % ( - ptr_struct_name, field_name) - tp_item = self._field_type(tp_struct, '%s[0]' % field_name, - tp_field.item) - tp_field = model.ArrayType(tp_item, actual_length) - return tp_field - - def _struct_collecttype(self, tp): - self._do_collect_type(tp) - if self.target_is_python: - # also requires nested anon struct/unions in ABI mode, recursively - for fldtype in tp.anonymous_struct_fields(): - self._struct_collecttype(fldtype) - - def _struct_decl(self, tp, cname, approxname): - if tp.fldtypes is None: - return - prnt = self._prnt - checkfuncname = '_cffi_checkfld_%s' % (approxname,) - prnt('_CFFI_UNUSED_FN') - prnt('static void %s(%s *p)' % (checkfuncname, cname)) - prnt('{') - prnt(' /* only to generate compile-time warnings or errors */') - prnt(' (void)p;') - for fname, ftype, fbitsize, fqual in self._enum_fields(tp): - try: - if ftype.is_integer_type() or fbitsize >= 0: - # accept all integers, but complain on float or double - if fname != '': - prnt(" (void)((p->%s) | 0); /* check that '%s.%s' is " - "an integer */" % (fname, cname, fname)) - continue - # only accept exactly the type declared, except that '[]' - # is interpreted as a 
'*' and so will match any array length. - # (It would also match '*', but that's harder to detect...) - while (isinstance(ftype, model.ArrayType) - and (ftype.length is None or ftype.length == '...')): - ftype = ftype.item - fname = fname + '[0]' - prnt(' { %s = &p->%s; (void)tmp; }' % ( - ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), - fname)) - except VerificationError as e: - prnt(' /* %s */' % str(e)) # cannot verify it, ignore - prnt('}') - prnt('struct _cffi_align_%s { char x; %s y; };' % (approxname, cname)) - prnt() - - def _struct_ctx(self, tp, cname, approxname, named_ptr=None): - type_index = self._typesdict[tp] - reason_for_not_expanding = None - flags = [] - if isinstance(tp, model.UnionType): - flags.append("_CFFI_F_UNION") - if tp.fldtypes is None: - flags.append("_CFFI_F_OPAQUE") - reason_for_not_expanding = "opaque" - if (tp not in self.ffi._parser._included_declarations and - (named_ptr is None or - named_ptr not in self.ffi._parser._included_declarations)): - if tp.fldtypes is None: - pass # opaque - elif tp.partial or any(tp.anonymous_struct_fields()): - pass # field layout obtained silently from the C compiler - else: - flags.append("_CFFI_F_CHECK_FIELDS") - if tp.packed: - if tp.packed > 1: - raise NotImplementedError( - "%r is declared with 'pack=%r'; only 0 or 1 are " - "supported in API mode (try to use \"...;\", which " - "does not require a 'pack' declaration)" % - (tp, tp.packed)) - flags.append("_CFFI_F_PACKED") - else: - flags.append("_CFFI_F_EXTERNAL") - reason_for_not_expanding = "external" - flags = '|'.join(flags) or '0' - c_fields = [] - if reason_for_not_expanding is None: - enumfields = list(self._enum_fields(tp)) - for fldname, fldtype, fbitsize, fqual in enumfields: - fldtype = self._field_type(tp, fldname, fldtype) - self._check_not_opaque(fldtype, - "field '%s.%s'" % (tp.name, fldname)) - # cname is None for _add_missing_struct_unions() only - op = OP_NOOP - if fbitsize >= 0: - op = OP_BITFIELD - size = '%d /* 
bits */' % fbitsize - elif cname is None or ( - isinstance(fldtype, model.ArrayType) and - fldtype.length is None): - size = '(size_t)-1' - else: - size = 'sizeof(((%s)0)->%s)' % ( - tp.get_c_name('*') if named_ptr is None - else named_ptr.name, - fldname) - if cname is None or fbitsize >= 0: - offset = '(size_t)-1' - elif named_ptr is not None: - offset = '(size_t)(((char *)&((%s)4096)->%s) - (char *)4096)' % ( - named_ptr.name, fldname) - else: - offset = 'offsetof(%s, %s)' % (tp.get_c_name(''), fldname) - c_fields.append( - FieldExpr(fldname, offset, size, fbitsize, - CffiOp(op, self._typesdict[fldtype]))) - first_field_index = len(self._lsts["field"]) - self._lsts["field"].extend(c_fields) - # - if cname is None: # unknown name, for _add_missing_struct_unions - size = '(size_t)-2' - align = -2 - comment = "unnamed" - else: - if named_ptr is not None: - size = 'sizeof(*(%s)0)' % (named_ptr.name,) - align = '-1 /* unknown alignment */' - else: - size = 'sizeof(%s)' % (cname,) - align = 'offsetof(struct _cffi_align_%s, y)' % (approxname,) - comment = None - else: - size = '(size_t)-1' - align = -1 - first_field_index = -1 - comment = reason_for_not_expanding - self._lsts["struct_union"].append( - StructUnionExpr(tp.name, type_index, flags, size, align, comment, - first_field_index, c_fields)) - self._seen_struct_unions.add(tp) - - def _check_not_opaque(self, tp, location): - while isinstance(tp, model.ArrayType): - tp = tp.item - if isinstance(tp, model.StructOrUnion) and tp.fldtypes is None: - raise TypeError( - "%s is of an opaque type (not declared in cdef())" % location) - - def _add_missing_struct_unions(self): - # not very nice, but some struct declarations might be missing - # because they don't have any known C name. Check that they are - # not partial (we can't complete or verify them!) and emit them - # anonymously. 
- lst = list(self._struct_unions.items()) - lst.sort(key=lambda tp_order: tp_order[1]) - for tp, order in lst: - if tp not in self._seen_struct_unions: - if tp.partial: - raise NotImplementedError("internal inconsistency: %r is " - "partial but was not seen at " - "this point" % (tp,)) - if tp.name.startswith('$') and tp.name[1:].isdigit(): - approxname = tp.name[1:] - elif tp.name == '_IO_FILE' and tp.forcename == 'FILE': - approxname = 'FILE' - self._typedef_ctx(tp, 'FILE') - else: - raise NotImplementedError("internal inconsistency: %r" % - (tp,)) - self._struct_ctx(tp, None, approxname) - - def _generate_cpy_struct_collecttype(self, tp, name): - self._struct_collecttype(tp) - _generate_cpy_union_collecttype = _generate_cpy_struct_collecttype - - def _struct_names(self, tp): - cname = tp.get_c_name('') - if ' ' in cname: - return cname, cname.replace(' ', '_') - else: - return cname, '_' + cname - - def _generate_cpy_struct_decl(self, tp, name): - self._struct_decl(tp, *self._struct_names(tp)) - _generate_cpy_union_decl = _generate_cpy_struct_decl - - def _generate_cpy_struct_ctx(self, tp, name): - self._struct_ctx(tp, *self._struct_names(tp)) - _generate_cpy_union_ctx = _generate_cpy_struct_ctx - - # ---------- - # 'anonymous' declarations. These are produced for anonymous structs - # or unions; the 'name' is obtained by a typedef. - - def _generate_cpy_anonymous_collecttype(self, tp, name): - if isinstance(tp, model.EnumType): - self._generate_cpy_enum_collecttype(tp, name) - else: - self._struct_collecttype(tp) - - def _generate_cpy_anonymous_decl(self, tp, name): - if isinstance(tp, model.EnumType): - self._generate_cpy_enum_decl(tp) - else: - self._struct_decl(tp, name, 'typedef_' + name) - - def _generate_cpy_anonymous_ctx(self, tp, name): - if isinstance(tp, model.EnumType): - self._enum_ctx(tp, name) - else: - self._struct_ctx(tp, name, 'typedef_' + name) - - # ---------- - # constants, declared with "static const ..." 
- - def _generate_cpy_const(self, is_int, name, tp=None, category='const', - check_value=None): - if (category, name) in self._seen_constants: - raise VerificationError( - "duplicate declaration of %s '%s'" % (category, name)) - self._seen_constants.add((category, name)) - # - prnt = self._prnt - funcname = '_cffi_%s_%s' % (category, name) - if is_int: - prnt('static int %s(unsigned long long *o)' % funcname) - prnt('{') - prnt(' int n = (%s) <= 0;' % (name,)) - prnt(' *o = (unsigned long long)((%s) | 0);' - ' /* check that %s is an integer */' % (name, name)) - if check_value is not None: - if check_value > 0: - check_value = '%dU' % (check_value,) - prnt(' if (!_cffi_check_int(*o, n, %s))' % (check_value,)) - prnt(' n |= 2;') - prnt(' return n;') - prnt('}') - else: - assert check_value is None - prnt('static void %s(char *o)' % funcname) - prnt('{') - prnt(' *(%s)o = %s;' % (tp.get_c_name('*'), name)) - prnt('}') - prnt() - - def _generate_cpy_constant_collecttype(self, tp, name): - is_int = tp.is_integer_type() - if not is_int or self.target_is_python: - self._do_collect_type(tp) - - def _generate_cpy_constant_decl(self, tp, name): - is_int = tp.is_integer_type() - self._generate_cpy_const(is_int, name, tp) - - def _generate_cpy_constant_ctx(self, tp, name): - if not self.target_is_python and tp.is_integer_type(): - type_op = CffiOp(OP_CONSTANT_INT, -1) - else: - if self.target_is_python: - const_kind = OP_DLOPEN_CONST - else: - const_kind = OP_CONSTANT - type_index = self._typesdict[tp] - type_op = CffiOp(const_kind, type_index) - self._lsts["global"].append( - GlobalExpr(name, '_cffi_const_%s' % name, type_op)) - - # ---------- - # enums - - def _generate_cpy_enum_collecttype(self, tp, name): - self._do_collect_type(tp) - - def _generate_cpy_enum_decl(self, tp, name=None): - for enumerator in tp.enumerators: - self._generate_cpy_const(True, enumerator) - - def _enum_ctx(self, tp, cname): - type_index = self._typesdict[tp] - type_op = CffiOp(OP_ENUM, -1) - if 
self.target_is_python: - tp.check_not_partial() - for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): - self._lsts["global"].append( - GlobalExpr(enumerator, '_cffi_const_%s' % enumerator, type_op, - check_value=enumvalue)) - # - if cname is not None and '$' not in cname and not self.target_is_python: - size = "sizeof(%s)" % cname - signed = "((%s)-1) <= 0" % cname - else: - basetp = tp.build_baseinttype(self.ffi, []) - size = self.ffi.sizeof(basetp) - signed = int(int(self.ffi.cast(basetp, -1)) < 0) - allenums = ",".join(tp.enumerators) - self._lsts["enum"].append( - EnumExpr(tp.name, type_index, size, signed, allenums)) - - def _generate_cpy_enum_ctx(self, tp, name): - self._enum_ctx(tp, tp._get_c_name()) - - # ---------- - # macros: for now only for integers - - def _generate_cpy_macro_collecttype(self, tp, name): - pass - - def _generate_cpy_macro_decl(self, tp, name): - if tp == '...': - check_value = None - else: - check_value = tp # an integer - self._generate_cpy_const(True, name, check_value=check_value) - - def _generate_cpy_macro_ctx(self, tp, name): - if tp == '...': - if self.target_is_python: - raise VerificationError( - "cannot use the syntax '...' in '#define %s ...' 
when " - "using the ABI mode" % (name,)) - check_value = None - else: - check_value = tp # an integer - type_op = CffiOp(OP_CONSTANT_INT, -1) - self._lsts["global"].append( - GlobalExpr(name, '_cffi_const_%s' % name, type_op, - check_value=check_value)) - - # ---------- - # global variables - - def _global_type(self, tp, global_name): - if isinstance(tp, model.ArrayType): - actual_length = tp.length - if actual_length == '...': - actual_length = '_cffi_array_len(%s)' % (global_name,) - tp_item = self._global_type(tp.item, '%s[0]' % global_name) - tp = model.ArrayType(tp_item, actual_length) - return tp - - def _generate_cpy_variable_collecttype(self, tp, name): - self._do_collect_type(self._global_type(tp, name)) - - def _generate_cpy_variable_decl(self, tp, name): - prnt = self._prnt - tp = self._global_type(tp, name) - if isinstance(tp, model.ArrayType) and tp.length is None: - tp = tp.item - ampersand = '' - else: - ampersand = '&' - # This code assumes that casts from "tp *" to "void *" is a - # no-op, i.e. a function that returns a "tp *" can be called - # as if it returned a "void *". This should be generally true - # on any modern machine. The only exception to that rule (on - # uncommon architectures, and as far as I can tell) might be - # if 'tp' were a function type, but that is not possible here. - # (If 'tp' is a function _pointer_ type, then casts from "fn_t - # **" to "void *" are again no-ops, as far as I can tell.) 
- decl = '*_cffi_var_%s(void)' % (name,) - prnt('static ' + tp.get_c_name(decl, quals=self._current_quals)) - prnt('{') - prnt(' return %s(%s);' % (ampersand, name)) - prnt('}') - prnt() - - def _generate_cpy_variable_ctx(self, tp, name): - tp = self._global_type(tp, name) - type_index = self._typesdict[tp] - if self.target_is_python: - op = OP_GLOBAL_VAR - else: - op = OP_GLOBAL_VAR_F - self._lsts["global"].append( - GlobalExpr(name, '_cffi_var_%s' % name, CffiOp(op, type_index))) - - # ---------- - # extern "Python" - - def _generate_cpy_extern_python_collecttype(self, tp, name): - assert isinstance(tp, model.FunctionPtrType) - self._do_collect_type(tp) - _generate_cpy_dllexport_python_collecttype = \ - _generate_cpy_extern_python_plus_c_collecttype = \ - _generate_cpy_extern_python_collecttype - - def _extern_python_decl(self, tp, name, tag_and_space): - prnt = self._prnt - if isinstance(tp.result, model.VoidType): - size_of_result = '0' - else: - context = 'result of %s' % name - size_of_result = '(int)sizeof(%s)' % ( - tp.result.get_c_name('', context),) - prnt('static struct _cffi_externpy_s _cffi_externpy__%s =' % name) - prnt(' { "%s.%s", %s, 0, 0 };' % ( - self.module_name, name, size_of_result)) - prnt() - # - arguments = [] - context = 'argument of %s' % name - for i, type in enumerate(tp.args): - arg = type.get_c_name(' a%d' % i, context) - arguments.append(arg) - # - repr_arguments = ', '.join(arguments) - repr_arguments = repr_arguments or 'void' - name_and_arguments = '%s(%s)' % (name, repr_arguments) - if tp.abi == "__stdcall": - name_and_arguments = '_cffi_stdcall ' + name_and_arguments - # - def may_need_128_bits(tp): - return (isinstance(tp, model.PrimitiveType) and - tp.name == 'long double') - # - size_of_a = max(len(tp.args)*8, 8) - if may_need_128_bits(tp.result): - size_of_a = max(size_of_a, 16) - if isinstance(tp.result, model.StructOrUnion): - size_of_a = 'sizeof(%s) > %d ? 
sizeof(%s) : %d' % ( - tp.result.get_c_name(''), size_of_a, - tp.result.get_c_name(''), size_of_a) - prnt('%s%s' % (tag_and_space, tp.result.get_c_name(name_and_arguments))) - prnt('{') - prnt(' char a[%s];' % size_of_a) - prnt(' char *p = a;') - for i, type in enumerate(tp.args): - arg = 'a%d' % i - if (isinstance(type, model.StructOrUnion) or - may_need_128_bits(type)): - arg = '&' + arg - type = model.PointerType(type) - prnt(' *(%s)(p + %d) = %s;' % (type.get_c_name('*'), i*8, arg)) - prnt(' _cffi_call_python(&_cffi_externpy__%s, p);' % name) - if not isinstance(tp.result, model.VoidType): - prnt(' return *(%s)p;' % (tp.result.get_c_name('*'),)) - prnt('}') - prnt() - self._num_externpy += 1 - - def _generate_cpy_extern_python_decl(self, tp, name): - self._extern_python_decl(tp, name, 'static ') - - def _generate_cpy_dllexport_python_decl(self, tp, name): - self._extern_python_decl(tp, name, 'CFFI_DLLEXPORT ') - - def _generate_cpy_extern_python_plus_c_decl(self, tp, name): - self._extern_python_decl(tp, name, '') - - def _generate_cpy_extern_python_ctx(self, tp, name): - if self.target_is_python: - raise VerificationError( - "cannot use 'extern \"Python\"' in the ABI mode") - if tp.ellipsis: - raise NotImplementedError("a vararg function is extern \"Python\"") - type_index = self._typesdict[tp] - type_op = CffiOp(OP_EXTERN_PYTHON, type_index) - self._lsts["global"].append( - GlobalExpr(name, '&_cffi_externpy__%s' % name, type_op, name)) - - _generate_cpy_dllexport_python_ctx = \ - _generate_cpy_extern_python_plus_c_ctx = \ - _generate_cpy_extern_python_ctx - - def _print_string_literal_in_array(self, s): - prnt = self._prnt - prnt('// # NB. 
this is not a string because of a size limit in MSVC') - if not isinstance(s, bytes): # unicode - s = s.encode('utf-8') # -> bytes - else: - s.decode('utf-8') # got bytes, check for valid utf-8 - try: - s.decode('ascii') - except UnicodeDecodeError: - s = b'# -*- encoding: utf8 -*-\n' + s - for line in s.splitlines(True): - comment = line - if type('//') is bytes: # python2 - line = map(ord, line) # make a list of integers - else: # python3 - # type(line) is bytes, which enumerates like a list of integers - comment = ascii(comment)[1:-1] - prnt(('// ' + comment).rstrip()) - printed_line = '' - for c in line: - if len(printed_line) >= 76: - prnt(printed_line) - printed_line = '' - printed_line += '%d,' % (c,) - prnt(printed_line) - - # ---------- - # emitting the opcodes for individual types - - def _emit_bytecode_VoidType(self, tp, index): - self.cffi_types[index] = CffiOp(OP_PRIMITIVE, PRIM_VOID) - - def _emit_bytecode_PrimitiveType(self, tp, index): - prim_index = PRIMITIVE_TO_INDEX[tp.name] - self.cffi_types[index] = CffiOp(OP_PRIMITIVE, prim_index) - - def _emit_bytecode_UnknownIntegerType(self, tp, index): - s = ('_cffi_prim_int(sizeof(%s), (\n' - ' ((%s)-1) | 0 /* check that %s is an integer type */\n' - ' ) <= 0)' % (tp.name, tp.name, tp.name)) - self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) - - def _emit_bytecode_UnknownFloatType(self, tp, index): - s = ('_cffi_prim_float(sizeof(%s) *\n' - ' (((%s)1) / 2) * 2 /* integer => 0, float => 1 */\n' - ' )' % (tp.name, tp.name)) - self.cffi_types[index] = CffiOp(OP_PRIMITIVE, s) - - def _emit_bytecode_RawFunctionType(self, tp, index): - self.cffi_types[index] = CffiOp(OP_FUNCTION, self._typesdict[tp.result]) - index += 1 - for tp1 in tp.args: - realindex = self._typesdict[tp1] - if index != realindex: - if isinstance(tp1, model.PrimitiveType): - self._emit_bytecode_PrimitiveType(tp1, index) - else: - self.cffi_types[index] = CffiOp(OP_NOOP, realindex) - index += 1 - flags = int(tp.ellipsis) - if tp.abi is not 
None: - if tp.abi == '__stdcall': - flags |= 2 - else: - raise NotImplementedError("abi=%r" % (tp.abi,)) - self.cffi_types[index] = CffiOp(OP_FUNCTION_END, flags) - - def _emit_bytecode_PointerType(self, tp, index): - self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[tp.totype]) - - _emit_bytecode_ConstPointerType = _emit_bytecode_PointerType - _emit_bytecode_NamedPointerType = _emit_bytecode_PointerType - - def _emit_bytecode_FunctionPtrType(self, tp, index): - raw = tp.as_raw_function() - self.cffi_types[index] = CffiOp(OP_POINTER, self._typesdict[raw]) - - def _emit_bytecode_ArrayType(self, tp, index): - item_index = self._typesdict[tp.item] - if tp.length is None: - self.cffi_types[index] = CffiOp(OP_OPEN_ARRAY, item_index) - elif tp.length == '...': - raise VerificationError( - "type %s badly placed: the '...' array length can only be " - "used on global arrays or on fields of structures" % ( - str(tp).replace('/*...*/', '...'),)) - else: - assert self.cffi_types[index + 1] == 'LEN' - self.cffi_types[index] = CffiOp(OP_ARRAY, item_index) - self.cffi_types[index + 1] = CffiOp(None, str(tp.length)) - - def _emit_bytecode_StructType(self, tp, index): - struct_index = self._struct_unions[tp] - self.cffi_types[index] = CffiOp(OP_STRUCT_UNION, struct_index) - _emit_bytecode_UnionType = _emit_bytecode_StructType - - def _emit_bytecode_EnumType(self, tp, index): - enum_index = self._enums[tp] - self.cffi_types[index] = CffiOp(OP_ENUM, enum_index) - - -if sys.version_info >= (3,): - NativeIO = io.StringIO -else: - class NativeIO(io.BytesIO): - def write(self, s): - if isinstance(s, unicode): - s = s.encode('ascii') - super(NativeIO, self).write(s) - -def _is_file_like(maybefile): - # compare to xml.etree.ElementTree._get_writer - return hasattr(maybefile, 'write') - -def _make_c_or_py_source(ffi, module_name, preamble, target_file, verbose): - if verbose: - print("generating %s" % (target_file,)) - recompiler = Recompiler(ffi, module_name, - 
target_is_python=(preamble is None)) - recompiler.collect_type_table() - recompiler.collect_step_tables() - if _is_file_like(target_file): - recompiler.write_source_to_f(target_file, preamble) - return True - f = NativeIO() - recompiler.write_source_to_f(f, preamble) - output = f.getvalue() - try: - with open(target_file, 'r') as f1: - if f1.read(len(output) + 1) != output: - raise IOError - if verbose: - print("(already up-to-date)") - return False # already up-to-date - except IOError: - tmp_file = '%s.~%d' % (target_file, os.getpid()) - with open(tmp_file, 'w') as f1: - f1.write(output) - try: - os.rename(tmp_file, target_file) - except OSError: - os.unlink(target_file) - os.rename(tmp_file, target_file) - return True - -def make_c_source(ffi, module_name, preamble, target_c_file, verbose=False): - assert preamble is not None - return _make_c_or_py_source(ffi, module_name, preamble, target_c_file, - verbose) - -def make_py_source(ffi, module_name, target_py_file, verbose=False): - return _make_c_or_py_source(ffi, module_name, None, target_py_file, - verbose) - -def _modname_to_file(outputdir, modname, extension): - parts = modname.split('.') - try: - os.makedirs(os.path.join(outputdir, *parts[:-1])) - except OSError: - pass - parts[-1] += extension - return os.path.join(outputdir, *parts), parts - - -# Aaargh. Distutils is not tested at all for the purpose of compiling -# DLLs that are not extension modules. Here are some hacks to work -# around that, in the _patch_for_*() functions... - -def _patch_meth(patchlist, cls, name, new_meth): - old = getattr(cls, name) - patchlist.append((cls, name, old)) - setattr(cls, name, new_meth) - return old - -def _unpatch_meths(patchlist): - for cls, name, old_meth in reversed(patchlist): - setattr(cls, name, old_meth) - -def _patch_for_embedding(patchlist): - if sys.platform == 'win32': - # we must not remove the manifest when building for embedding! 
- # FUTURE: this module was removed in setuptools 74; this is likely dead code and should be removed, - # since the toolchain it supports (VS2005-2008) is also long dead. - from cffi._shimmed_dist_utils import MSVCCompiler - if MSVCCompiler is not None: - _patch_meth(patchlist, MSVCCompiler, '_remove_visual_c_ref', - lambda self, manifest_file: manifest_file) - - if sys.platform == 'darwin': - # we must not make a '-bundle', but a '-dynamiclib' instead - from cffi._shimmed_dist_utils import CCompiler - def my_link_shared_object(self, *args, **kwds): - if '-bundle' in self.linker_so: - self.linker_so = list(self.linker_so) - i = self.linker_so.index('-bundle') - self.linker_so[i] = '-dynamiclib' - return old_link_shared_object(self, *args, **kwds) - old_link_shared_object = _patch_meth(patchlist, CCompiler, - 'link_shared_object', - my_link_shared_object) - -def _patch_for_target(patchlist, target): - from cffi._shimmed_dist_utils import build_ext - # if 'target' is different from '*', we need to patch some internal - # method to just return this 'target' value, instead of having it - # built from module_name - if target.endswith('.*'): - target = target[:-2] - if sys.platform == 'win32': - target += '.dll' - elif sys.platform == 'darwin': - target += '.dylib' - else: - target += '.so' - _patch_meth(patchlist, build_ext, 'get_ext_filename', - lambda self, ext_name: target) - - -def recompile(ffi, module_name, preamble, tmpdir='.', call_c_compiler=True, - c_file=None, source_extension='.c', extradir=None, - compiler_verbose=1, target=None, debug=None, - uses_ffiplatform=True, **kwds): - if not isinstance(module_name, str): - module_name = module_name.encode('ascii') - if ffi._windows_unicode: - ffi._apply_windows_unicode(kwds) - if preamble is not None: - if call_c_compiler and _is_file_like(c_file): - raise TypeError("Writing to file-like objects is not supported " - "with call_c_compiler=True") - embedding = (ffi._embedding is not None) - if embedding: - 
ffi._apply_embedding_fix(kwds) - if c_file is None: - c_file, parts = _modname_to_file(tmpdir, module_name, - source_extension) - if extradir: - parts = [extradir] + parts - ext_c_file = os.path.join(*parts) - else: - ext_c_file = c_file - # - if target is None: - if embedding: - target = '%s.*' % module_name - else: - target = '*' - # - if uses_ffiplatform: - ext = ffiplatform.get_extension(ext_c_file, module_name, **kwds) - else: - ext = None - updated = make_c_source(ffi, module_name, preamble, c_file, - verbose=compiler_verbose) - if call_c_compiler: - patchlist = [] - cwd = os.getcwd() - try: - if embedding: - _patch_for_embedding(patchlist) - if target != '*': - _patch_for_target(patchlist, target) - if compiler_verbose: - if tmpdir == '.': - msg = 'the current directory is' - else: - msg = 'setting the current directory to' - print('%s %r' % (msg, os.path.abspath(tmpdir))) - os.chdir(tmpdir) - outputfilename = ffiplatform.compile('.', ext, - compiler_verbose, debug) - finally: - os.chdir(cwd) - _unpatch_meths(patchlist) - return outputfilename - else: - return ext, updated - else: - if c_file is None: - c_file, _ = _modname_to_file(tmpdir, module_name, '.py') - updated = make_py_source(ffi, module_name, c_file, - verbose=compiler_verbose) - if call_c_compiler: - return c_file - else: - return None, updated - diff --git a/pptx-env/lib/python3.12/site-packages/cffi/setuptools_ext.py b/pptx-env/lib/python3.12/site-packages/cffi/setuptools_ext.py deleted file mode 100644 index 5cdd246f..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/setuptools_ext.py +++ /dev/null @@ -1,229 +0,0 @@ -import os -import sys -import sysconfig - -try: - basestring -except NameError: - # Python 3.x - basestring = str - -def error(msg): - from cffi._shimmed_dist_utils import DistutilsSetupError - raise DistutilsSetupError(msg) - - -def execfile(filename, glob): - # We use execfile() (here rewritten for Python 3) instead of - # __import__() to load the build script. 
The problem with - # a normal import is that in some packages, the intermediate - # __init__.py files may already try to import the file that - # we are generating. - with open(filename) as f: - src = f.read() - src += '\n' # Python 2.6 compatibility - code = compile(src, filename, 'exec') - exec(code, glob, glob) - - -def add_cffi_module(dist, mod_spec): - from cffi.api import FFI - - if not isinstance(mod_spec, basestring): - error("argument to 'cffi_modules=...' must be a str or a list of str," - " not %r" % (type(mod_spec).__name__,)) - mod_spec = str(mod_spec) - try: - build_file_name, ffi_var_name = mod_spec.split(':') - except ValueError: - error("%r must be of the form 'path/build.py:ffi_variable'" % - (mod_spec,)) - if not os.path.exists(build_file_name): - ext = '' - rewritten = build_file_name.replace('.', '/') + '.py' - if os.path.exists(rewritten): - ext = ' (rewrite cffi_modules to [%r])' % ( - rewritten + ':' + ffi_var_name,) - error("%r does not name an existing file%s" % (build_file_name, ext)) - - mod_vars = {'__name__': '__cffi__', '__file__': build_file_name} - execfile(build_file_name, mod_vars) - - try: - ffi = mod_vars[ffi_var_name] - except KeyError: - error("%r: object %r not found in module" % (mod_spec, - ffi_var_name)) - if not isinstance(ffi, FFI): - ffi = ffi() # maybe it's a function instead of directly an ffi - if not isinstance(ffi, FFI): - error("%r is not an FFI instance (got %r)" % (mod_spec, - type(ffi).__name__)) - if not hasattr(ffi, '_assigned_source'): - error("%r: the set_source() method was not called" % (mod_spec,)) - module_name, source, source_extension, kwds = ffi._assigned_source - if ffi._windows_unicode: - kwds = kwds.copy() - ffi._apply_windows_unicode(kwds) - - if source is None: - _add_py_module(dist, ffi, module_name) - else: - _add_c_module(dist, ffi, module_name, source, source_extension, kwds) - -def _set_py_limited_api(Extension, kwds): - """ - Add py_limited_api to kwds if setuptools >= 26 is in use. 
- Do not alter the setting if it already exists. - Setuptools takes care of ignoring the flag on Python 2 and PyPy. - - CPython itself should ignore the flag in a debugging version - (by not listing .abi3.so in the extensions it supports), but - it doesn't so far, creating troubles. That's why we check - for "not hasattr(sys, 'gettotalrefcount')" (the 2.7 compatible equivalent - of 'd' not in sys.abiflags). (http://bugs.python.org/issue28401) - - On Windows, with CPython <= 3.4, it's better not to use py_limited_api - because virtualenv *still* doesn't copy PYTHON3.DLL on these versions. - Recently (2020) we started shipping only >= 3.5 wheels, though. So - we'll give it another try and set py_limited_api on Windows >= 3.5. - """ - from cffi._shimmed_dist_utils import log - from cffi import recompiler - - if ('py_limited_api' not in kwds and not hasattr(sys, 'gettotalrefcount') - and recompiler.USE_LIMITED_API): - import setuptools - try: - setuptools_major_version = int(setuptools.__version__.partition('.')[0]) - if setuptools_major_version >= 26: - kwds['py_limited_api'] = True - except ValueError: # certain development versions of setuptools - # If we don't know the version number of setuptools, we - # try to set 'py_limited_api' anyway. At worst, we get a - # warning. - kwds['py_limited_api'] = True - - if sysconfig.get_config_var("Py_GIL_DISABLED"): - if kwds.get('py_limited_api'): - log.info("Ignoring py_limited_api=True for free-threaded build.") - - kwds['py_limited_api'] = False - - if kwds.get('py_limited_api') is False: - # avoid setting Py_LIMITED_API if py_limited_api=False - # which _cffi_include.h does unless _CFFI_NO_LIMITED_API is defined - kwds.setdefault("define_macros", []).append(("_CFFI_NO_LIMITED_API", None)) - return kwds - -def _add_c_module(dist, ffi, module_name, source, source_extension, kwds): - # We are a setuptools extension. Need this build_ext for py_limited_api. 
- from setuptools.command.build_ext import build_ext - from cffi._shimmed_dist_utils import Extension, log, mkpath - from cffi import recompiler - - allsources = ['$PLACEHOLDER'] - allsources.extend(kwds.pop('sources', [])) - kwds = _set_py_limited_api(Extension, kwds) - ext = Extension(name=module_name, sources=allsources, **kwds) - - def make_mod(tmpdir, pre_run=None): - c_file = os.path.join(tmpdir, module_name + source_extension) - log.info("generating cffi module %r" % c_file) - mkpath(tmpdir) - # a setuptools-only, API-only hook: called with the "ext" and "ffi" - # arguments just before we turn the ffi into C code. To use it, - # subclass the 'distutils.command.build_ext.build_ext' class and - # add a method 'def pre_run(self, ext, ffi)'. - if pre_run is not None: - pre_run(ext, ffi) - updated = recompiler.make_c_source(ffi, module_name, source, c_file) - if not updated: - log.info("already up-to-date") - return c_file - - if dist.ext_modules is None: - dist.ext_modules = [] - dist.ext_modules.append(ext) - - base_class = dist.cmdclass.get('build_ext', build_ext) - class build_ext_make_mod(base_class): - def run(self): - if ext.sources[0] == '$PLACEHOLDER': - pre_run = getattr(self, 'pre_run', None) - ext.sources[0] = make_mod(self.build_temp, pre_run) - base_class.run(self) - dist.cmdclass['build_ext'] = build_ext_make_mod - # NB. multiple runs here will create multiple 'build_ext_make_mod' - # classes. Even in this case the 'build_ext' command should be - # run once; but just in case, the logic above does nothing if - # called again. 
- - -def _add_py_module(dist, ffi, module_name): - from setuptools.command.build_py import build_py - from setuptools.command.build_ext import build_ext - from cffi._shimmed_dist_utils import log, mkpath - from cffi import recompiler - - def generate_mod(py_file): - log.info("generating cffi module %r" % py_file) - mkpath(os.path.dirname(py_file)) - updated = recompiler.make_py_source(ffi, module_name, py_file) - if not updated: - log.info("already up-to-date") - - base_class = dist.cmdclass.get('build_py', build_py) - class build_py_make_mod(base_class): - def run(self): - base_class.run(self) - module_path = module_name.split('.') - module_path[-1] += '.py' - generate_mod(os.path.join(self.build_lib, *module_path)) - def get_source_files(self): - # This is called from 'setup.py sdist' only. Exclude - # the generate .py module in this case. - saved_py_modules = self.py_modules - try: - if saved_py_modules: - self.py_modules = [m for m in saved_py_modules - if m != module_name] - return base_class.get_source_files(self) - finally: - self.py_modules = saved_py_modules - dist.cmdclass['build_py'] = build_py_make_mod - - # distutils and setuptools have no notion I could find of a - # generated python module. If we don't add module_name to - # dist.py_modules, then things mostly work but there are some - # combination of options (--root and --record) that will miss - # the module. So we add it here, which gives a few apparently - # harmless warnings about not finding the file outside the - # build directory. - # Then we need to hack more in get_source_files(); see above. 
- if dist.py_modules is None: - dist.py_modules = [] - dist.py_modules.append(module_name) - - # the following is only for "build_ext -i" - base_class_2 = dist.cmdclass.get('build_ext', build_ext) - class build_ext_make_mod(base_class_2): - def run(self): - base_class_2.run(self) - if self.inplace: - # from get_ext_fullpath() in distutils/command/build_ext.py - module_path = module_name.split('.') - package = '.'.join(module_path[:-1]) - build_py = self.get_finalized_command('build_py') - package_dir = build_py.get_package_dir(package) - file_name = module_path[-1] + '.py' - generate_mod(os.path.join(package_dir, file_name)) - dist.cmdclass['build_ext'] = build_ext_make_mod - -def cffi_modules(dist, attr, value): - assert attr == 'cffi_modules' - if isinstance(value, basestring): - value = [value] - - for cffi_module in value: - add_cffi_module(dist, cffi_module) diff --git a/pptx-env/lib/python3.12/site-packages/cffi/vengine_cpy.py b/pptx-env/lib/python3.12/site-packages/cffi/vengine_cpy.py deleted file mode 100644 index 02e6a471..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/vengine_cpy.py +++ /dev/null @@ -1,1087 +0,0 @@ -# -# DEPRECATED: implementation for ffi.verify() -# -import sys -from . import model -from .error import VerificationError -from . import _imp_emulation as imp - - -class VCPythonEngine(object): - _class_key = 'x' - _gen_python_module = True - - def __init__(self, verifier): - self.verifier = verifier - self.ffi = verifier.ffi - self._struct_pending_verification = {} - self._types_of_builtin_functions = {} - - def patch_extension_kwds(self, kwds): - pass - - def find_module(self, module_name, path, so_suffixes): - try: - f, filename, descr = imp.find_module(module_name, path) - except ImportError: - return None - if f is not None: - f.close() - # Note that after a setuptools installation, there are both .py - # and .so files with the same basename. The code here relies on - # imp.find_module() locating the .so in priority. 
if descr[0] not in so_suffixes: - return None - return filename - - def collect_types(self): - self._typesdict = {} - self._generate("collecttype") - - def _prnt(self, what=''): - self._f.write(what + '\n') - - def _gettypenum(self, type): - # a KeyError here is a bug. please report it! :-) - return self._typesdict[type] - - def _do_collect_type(self, tp): - if ((not isinstance(tp, model.PrimitiveType) - or tp.name == 'long double') - and tp not in self._typesdict): - num = len(self._typesdict) - self._typesdict[tp] = num - - def write_source_to_f(self): - self.collect_types() - # - # The new module will have a _cffi_setup() function that receives - # objects from the ffi world, and that calls some setup code in - # the module. This setup code is split in several independent - # functions, e.g. one per constant. The functions are "chained" - # by ending in a tail call to each other. - # - # This is further split in two chained lists, depending on if we - # can do it at import-time or if we must wait for _cffi_setup() to - # provide us with the <ctype> objects. This is needed because we - # need the values of the enum constants in order to build the - # <ctype 'enum'> that we may have to pass to _cffi_setup(). - # - # The following two 'chained_list_constants' items contains - # the head of these two chained lists, as a string that gives the - # call to do, if any. - self._chained_list_constants = ['((void)lib,0)', '((void)lib,0)'] - # - prnt = self._prnt - # first paste some standard set of lines that are mostly '#define' - prnt(cffimod_header) - prnt() - # then paste the C source given by the user, verbatim. - prnt(self.verifier.preamble) - prnt() - # - # call generate_cpy_xxx_decl(), for every xxx found from - # ffi._parser._declarations. This generates all the functions. - self._generate("decl") - # - # implement the function _cffi_setup_custom() as calling the - # head of the chained list. 
- self._generate_setup_custom() - prnt() - # - # produce the method table, including the entries for the - # generated Python->C function wrappers, which are done - # by generate_cpy_function_method(). - prnt('static PyMethodDef _cffi_methods[] = {') - self._generate("method") - prnt(' {"_cffi_setup", _cffi_setup, METH_VARARGS, NULL},') - prnt(' {NULL, NULL, 0, NULL} /* Sentinel */') - prnt('};') - prnt() - # - # standard init. - modname = self.verifier.get_module_name() - constants = self._chained_list_constants[False] - prnt('#if PY_MAJOR_VERSION >= 3') - prnt() - prnt('static struct PyModuleDef _cffi_module_def = {') - prnt(' PyModuleDef_HEAD_INIT,') - prnt(' "%s",' % modname) - prnt(' NULL,') - prnt(' -1,') - prnt(' _cffi_methods,') - prnt(' NULL, NULL, NULL, NULL') - prnt('};') - prnt() - prnt('PyMODINIT_FUNC') - prnt('PyInit_%s(void)' % modname) - prnt('{') - prnt(' PyObject *lib;') - prnt(' lib = PyModule_Create(&_cffi_module_def);') - prnt(' if (lib == NULL)') - prnt(' return NULL;') - prnt(' if (%s < 0 || _cffi_init() < 0) {' % (constants,)) - prnt(' Py_DECREF(lib);') - prnt(' return NULL;') - prnt(' }') - prnt('#if Py_GIL_DISABLED') - prnt(' PyUnstable_Module_SetGIL(lib, Py_MOD_GIL_NOT_USED);') - prnt('#endif') - prnt(' return lib;') - prnt('}') - prnt() - prnt('#else') - prnt() - prnt('PyMODINIT_FUNC') - prnt('init%s(void)' % modname) - prnt('{') - prnt(' PyObject *lib;') - prnt(' lib = Py_InitModule("%s", _cffi_methods);' % modname) - prnt(' if (lib == NULL)') - prnt(' return;') - prnt(' if (%s < 0 || _cffi_init() < 0)' % (constants,)) - prnt(' return;') - prnt(' return;') - prnt('}') - prnt() - prnt('#endif') - - def load_library(self, flags=None): - # XXX review all usages of 'self' here! 
# import it as a new extension module - imp.acquire_lock() - try: - if hasattr(sys, "getdlopenflags"): - previous_flags = sys.getdlopenflags() - try: - if hasattr(sys, "setdlopenflags") and flags is not None: - sys.setdlopenflags(flags) - module = imp.load_dynamic(self.verifier.get_module_name(), - self.verifier.modulefilename) - except ImportError as e: - error = "importing %r: %s" % (self.verifier.modulefilename, e) - raise VerificationError(error) - finally: - if hasattr(sys, "setdlopenflags"): - sys.setdlopenflags(previous_flags) - finally: - imp.release_lock() - # - # call loading_cpy_struct() to get the struct layout inferred by - # the C compiler - self._load(module, 'loading') - # - # the C code will need the <ctype> objects. Collect them in - # order in a list. - revmapping = dict([(value, key) - for (key, value) in self._typesdict.items()]) - lst = [revmapping[i] for i in range(len(revmapping))] - lst = list(map(self.ffi._get_cached_btype, lst)) - # - # build the FFILibrary class and instance and call _cffi_setup(). - # this will set up some fields like '_cffi_types', and only then - # it will invoke the chained list of functions that will really - # build (notably) the constant objects, as <cdata> if they are - # pointers, and store them as attributes on the 'library' object. - class FFILibrary(object): - _cffi_python_module = module - _cffi_ffi = self.ffi - _cffi_dir = [] - def __dir__(self): - return FFILibrary._cffi_dir + list(self.__dict__) - library = FFILibrary() - if module._cffi_setup(lst, VerificationError, library): - import warnings - warnings.warn("reimporting %r might overwrite older definitions" - % (self.verifier.get_module_name())) - # - # finally, call the loaded_cpy_xxx() functions. This will perform - # the final adjustments, like copying the Python->C wrapper - # functions from the module to the 'library' object, and setting - # up the FFILibrary class with properties for the global C variables. 
- self._load(module, 'loaded', library=library) - module._cffi_original_ffi = self.ffi - module._cffi_types_of_builtin_funcs = self._types_of_builtin_functions - return library - - def _get_declarations(self): - lst = [(key, tp) for (key, (tp, qual)) in - self.ffi._parser._declarations.items()] - lst.sort() - return lst - - def _generate(self, step_name): - for name, tp in self._get_declarations(): - kind, realname = name.split(' ', 1) - try: - method = getattr(self, '_generate_cpy_%s_%s' % (kind, - step_name)) - except AttributeError: - raise VerificationError( - "not implemented in verify(): %r" % name) - try: - method(tp, realname) - except Exception as e: - model.attach_exception_info(e, name) - raise - - def _load(self, module, step_name, **kwds): - for name, tp in self._get_declarations(): - kind, realname = name.split(' ', 1) - method = getattr(self, '_%s_cpy_%s' % (step_name, kind)) - try: - method(tp, realname, module, **kwds) - except Exception as e: - model.attach_exception_info(e, name) - raise - - def _generate_nothing(self, tp, name): - pass - - def _loaded_noop(self, tp, name, module, **kwds): - pass - - # ---------- - - def _convert_funcarg_to_c(self, tp, fromvar, tovar, errcode): - extraarg = '' - if isinstance(tp, model.PrimitiveType): - if tp.is_integer_type() and tp.name != '_Bool': - converter = '_cffi_to_c_int' - extraarg = ', %s' % tp.name - elif tp.is_complex_type(): - raise VerificationError( - "not implemented in verify(): complex types") - else: - converter = '(%s)_cffi_to_c_%s' % (tp.get_c_name(''), - tp.name.replace(' ', '_')) - errvalue = '-1' - # - elif isinstance(tp, model.PointerType): - self._convert_funcarg_to_c_ptr_or_array(tp, fromvar, - tovar, errcode) - return - # - elif isinstance(tp, (model.StructOrUnion, model.EnumType)): - # a struct (not a struct pointer) as a function argument - self._prnt(' if (_cffi_to_c((char *)&%s, _cffi_type(%d), %s) < 0)' - % (tovar, self._gettypenum(tp), fromvar)) - self._prnt(' %s;' % errcode) - 
return - # - elif isinstance(tp, model.FunctionPtrType): - converter = '(%s)_cffi_to_c_pointer' % tp.get_c_name('') - extraarg = ', _cffi_type(%d)' % self._gettypenum(tp) - errvalue = 'NULL' - # - else: - raise NotImplementedError(tp) - # - self._prnt(' %s = %s(%s%s);' % (tovar, converter, fromvar, extraarg)) - self._prnt(' if (%s == (%s)%s && PyErr_Occurred())' % ( - tovar, tp.get_c_name(''), errvalue)) - self._prnt(' %s;' % errcode) - - def _extra_local_variables(self, tp, localvars, freelines): - if isinstance(tp, model.PointerType): - localvars.add('Py_ssize_t datasize') - localvars.add('struct _cffi_freeme_s *large_args_free = NULL') - freelines.add('if (large_args_free != NULL)' - ' _cffi_free_array_arguments(large_args_free);') - - def _convert_funcarg_to_c_ptr_or_array(self, tp, fromvar, tovar, errcode): - self._prnt(' datasize = _cffi_prepare_pointer_call_argument(') - self._prnt(' _cffi_type(%d), %s, (char **)&%s);' % ( - self._gettypenum(tp), fromvar, tovar)) - self._prnt(' if (datasize != 0) {') - self._prnt(' %s = ((size_t)datasize) <= 640 ? 
' - 'alloca((size_t)datasize) : NULL;' % (tovar,)) - self._prnt(' if (_cffi_convert_array_argument(_cffi_type(%d), %s, ' - '(char **)&%s,' % (self._gettypenum(tp), fromvar, tovar)) - self._prnt(' datasize, &large_args_free) < 0)') - self._prnt(' %s;' % errcode) - self._prnt(' }') - - def _convert_expr_from_c(self, tp, var, context): - if isinstance(tp, model.PrimitiveType): - if tp.is_integer_type() and tp.name != '_Bool': - return '_cffi_from_c_int(%s, %s)' % (var, tp.name) - elif tp.name != 'long double': - return '_cffi_from_c_%s(%s)' % (tp.name.replace(' ', '_'), var) - else: - return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, (model.PointerType, model.FunctionPtrType)): - return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, model.ArrayType): - return '_cffi_from_c_pointer((char *)%s, _cffi_type(%d))' % ( - var, self._gettypenum(model.PointerType(tp.item))) - elif isinstance(tp, model.StructOrUnion): - if tp.fldnames is None: - raise TypeError("'%s' is used as %s, but is opaque" % ( - tp._get_c_name(), context)) - return '_cffi_from_c_struct((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - elif isinstance(tp, model.EnumType): - return '_cffi_from_c_deref((char *)&%s, _cffi_type(%d))' % ( - var, self._gettypenum(tp)) - else: - raise NotImplementedError(tp) - - # ---------- - # typedefs: generates no code so far - - _generate_cpy_typedef_collecttype = _generate_nothing - _generate_cpy_typedef_decl = _generate_nothing - _generate_cpy_typedef_method = _generate_nothing - _loading_cpy_typedef = _loaded_noop - _loaded_cpy_typedef = _loaded_noop - - # ---------- - # function declarations - - def _generate_cpy_function_collecttype(self, tp, name): - assert isinstance(tp, model.FunctionPtrType) - if tp.ellipsis: - self._do_collect_type(tp) - else: - # don't call _do_collect_type(tp) in this common case, - # otherwise 
test_autofilled_struct_as_argument fails - for type in tp.args: - self._do_collect_type(type) - self._do_collect_type(tp.result) - - def _generate_cpy_function_decl(self, tp, name): - assert isinstance(tp, model.FunctionPtrType) - if tp.ellipsis: - # cannot support vararg functions better than this: check for its - # exact type (including the fixed arguments), and build it as a - # constant function pointer (no CPython wrapper) - self._generate_cpy_const(False, name, tp) - return - prnt = self._prnt - numargs = len(tp.args) - if numargs == 0: - argname = 'noarg' - elif numargs == 1: - argname = 'arg0' - else: - argname = 'args' - prnt('static PyObject *') - prnt('_cffi_f_%s(PyObject *self, PyObject *%s)' % (name, argname)) - prnt('{') - # - context = 'argument of %s' % name - for i, type in enumerate(tp.args): - prnt(' %s;' % type.get_c_name(' x%d' % i, context)) - # - localvars = set() - freelines = set() - for type in tp.args: - self._extra_local_variables(type, localvars, freelines) - for decl in sorted(localvars): - prnt(' %s;' % (decl,)) - # - if not isinstance(tp.result, model.VoidType): - result_code = 'result = ' - context = 'result of %s' % name - prnt(' %s;' % tp.result.get_c_name(' result', context)) - prnt(' PyObject *pyresult;') - else: - result_code = '' - # - if len(tp.args) > 1: - rng = range(len(tp.args)) - for i in rng: - prnt(' PyObject *arg%d;' % i) - prnt() - prnt(' if (!PyArg_ParseTuple(args, "%s:%s", %s))' % ( - 'O' * numargs, name, ', '.join(['&arg%d' % i for i in rng]))) - prnt(' return NULL;') - prnt() - # - for i, type in enumerate(tp.args): - self._convert_funcarg_to_c(type, 'arg%d' % i, 'x%d' % i, - 'return NULL') - prnt() - # - prnt(' Py_BEGIN_ALLOW_THREADS') - prnt(' _cffi_restore_errno();') - prnt(' { %s%s(%s); }' % ( - result_code, name, - ', '.join(['x%d' % i for i in range(len(tp.args))]))) - prnt(' _cffi_save_errno();') - prnt(' Py_END_ALLOW_THREADS') - prnt() - # - prnt(' (void)self; /* unused */') - if numargs == 0: - prnt(' 
(void)noarg; /* unused */') - if result_code: - prnt(' pyresult = %s;' % - self._convert_expr_from_c(tp.result, 'result', 'result type')) - for freeline in freelines: - prnt(' ' + freeline) - prnt(' return pyresult;') - else: - for freeline in freelines: - prnt(' ' + freeline) - prnt(' Py_INCREF(Py_None);') - prnt(' return Py_None;') - prnt('}') - prnt() - - def _generate_cpy_function_method(self, tp, name): - if tp.ellipsis: - return - numargs = len(tp.args) - if numargs == 0: - meth = 'METH_NOARGS' - elif numargs == 1: - meth = 'METH_O' - else: - meth = 'METH_VARARGS' - self._prnt(' {"%s", _cffi_f_%s, %s, NULL},' % (name, name, meth)) - - _loading_cpy_function = _loaded_noop - - def _loaded_cpy_function(self, tp, name, module, library): - if tp.ellipsis: - return - func = getattr(module, name) - setattr(library, name, func) - self._types_of_builtin_functions[func] = tp - - # ---------- - # named structs - - _generate_cpy_struct_collecttype = _generate_nothing - def _generate_cpy_struct_decl(self, tp, name): - assert name == tp.name - self._generate_struct_or_union_decl(tp, 'struct', name) - def _generate_cpy_struct_method(self, tp, name): - self._generate_struct_or_union_method(tp, 'struct', name) - def _loading_cpy_struct(self, tp, name, module): - self._loading_struct_or_union(tp, 'struct', name, module) - def _loaded_cpy_struct(self, tp, name, module, **kwds): - self._loaded_struct_or_union(tp) - - _generate_cpy_union_collecttype = _generate_nothing - def _generate_cpy_union_decl(self, tp, name): - assert name == tp.name - self._generate_struct_or_union_decl(tp, 'union', name) - def _generate_cpy_union_method(self, tp, name): - self._generate_struct_or_union_method(tp, 'union', name) - def _loading_cpy_union(self, tp, name, module): - self._loading_struct_or_union(tp, 'union', name, module) - def _loaded_cpy_union(self, tp, name, module, **kwds): - self._loaded_struct_or_union(tp) - - def _generate_struct_or_union_decl(self, tp, prefix, name): - if tp.fldnames 
is None: - return # nothing to do with opaque structs - checkfuncname = '_cffi_check_%s_%s' % (prefix, name) - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - cname = ('%s %s' % (prefix, name)).strip() - # - prnt = self._prnt - prnt('static void %s(%s *p)' % (checkfuncname, cname)) - prnt('{') - prnt(' /* only to generate compile-time warnings or errors */') - prnt(' (void)p;') - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if (isinstance(ftype, model.PrimitiveType) - and ftype.is_integer_type()) or fbitsize >= 0: - # accept all integers, but complain on float or double - prnt(' (void)((p->%s) << 1);' % fname) - else: - # only accept exactly the type declared. - try: - prnt(' { %s = &p->%s; (void)tmp; }' % ( - ftype.get_c_name('*tmp', 'field %r'%fname, quals=fqual), - fname)) - except VerificationError as e: - prnt(' /* %s */' % str(e)) # cannot verify it, ignore - prnt('}') - prnt('static PyObject *') - prnt('%s(PyObject *self, PyObject *noarg)' % (layoutfuncname,)) - prnt('{') - prnt(' struct _cffi_aligncheck { char x; %s y; };' % cname) - prnt(' static Py_ssize_t nums[] = {') - prnt(' sizeof(%s),' % cname) - prnt(' offsetof(struct _cffi_aligncheck, y),') - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if fbitsize >= 0: - continue # xxx ignore fbitsize for now - prnt(' offsetof(%s, %s),' % (cname, fname)) - if isinstance(ftype, model.ArrayType) and ftype.length is None: - prnt(' 0, /* %s */' % ftype._get_c_name()) - else: - prnt(' sizeof(((%s *)0)->%s),' % (cname, fname)) - prnt(' -1') - prnt(' };') - prnt(' (void)self; /* unused */') - prnt(' (void)noarg; /* unused */') - prnt(' return _cffi_get_struct_layout(nums);') - prnt(' /* the next line is not executed, but compiled */') - prnt(' %s(0);' % (checkfuncname,)) - prnt('}') - prnt() - - def _generate_struct_or_union_method(self, tp, prefix, name): - if tp.fldnames is None: - return # nothing to do with opaque structs - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - 
self._prnt(' {"%s", %s, METH_NOARGS, NULL},' % (layoutfuncname, - layoutfuncname)) - - def _loading_struct_or_union(self, tp, prefix, name, module): - if tp.fldnames is None: - return # nothing to do with opaque structs - layoutfuncname = '_cffi_layout_%s_%s' % (prefix, name) - # - function = getattr(module, layoutfuncname) - layout = function() - if isinstance(tp, model.StructOrUnion) and tp.partial: - # use the function()'s sizes and offsets to guide the - # layout of the struct - totalsize = layout[0] - totalalignment = layout[1] - fieldofs = layout[2::2] - fieldsize = layout[3::2] - tp.force_flatten() - assert len(fieldofs) == len(fieldsize) == len(tp.fldnames) - tp.fixedlayout = fieldofs, fieldsize, totalsize, totalalignment - else: - cname = ('%s %s' % (prefix, name)).strip() - self._struct_pending_verification[tp] = layout, cname - - def _loaded_struct_or_union(self, tp): - if tp.fldnames is None: - return # nothing to do with opaque structs - self.ffi._get_cached_btype(tp) # force 'fixedlayout' to be considered - - if tp in self._struct_pending_verification: - # check that the layout sizes and offsets match the real ones - def check(realvalue, expectedvalue, msg): - if realvalue != expectedvalue: - raise VerificationError( - "%s (we have %d, but C compiler says %d)" - % (msg, expectedvalue, realvalue)) - ffi = self.ffi - BStruct = ffi._get_cached_btype(tp) - layout, cname = self._struct_pending_verification.pop(tp) - check(layout[0], ffi.sizeof(BStruct), "wrong total size") - check(layout[1], ffi.alignof(BStruct), "wrong total alignment") - i = 2 - for fname, ftype, fbitsize, fqual in tp.enumfields(): - if fbitsize >= 0: - continue # xxx ignore fbitsize for now - check(layout[i], ffi.offsetof(BStruct, fname), - "wrong offset for field %r" % (fname,)) - if layout[i+1] != 0: - BField = ffi._get_cached_btype(ftype) - check(layout[i+1], ffi.sizeof(BField), - "wrong size for field %r" % (fname,)) - i += 2 - assert i == len(layout) - - # ---------- - # 
'anonymous' declarations. These are produced for anonymous structs - # or unions; the 'name' is obtained by a typedef. - - _generate_cpy_anonymous_collecttype = _generate_nothing - - def _generate_cpy_anonymous_decl(self, tp, name): - if isinstance(tp, model.EnumType): - self._generate_cpy_enum_decl(tp, name, '') - else: - self._generate_struct_or_union_decl(tp, '', name) - - def _generate_cpy_anonymous_method(self, tp, name): - if not isinstance(tp, model.EnumType): - self._generate_struct_or_union_method(tp, '', name) - - def _loading_cpy_anonymous(self, tp, name, module): - if isinstance(tp, model.EnumType): - self._loading_cpy_enum(tp, name, module) - else: - self._loading_struct_or_union(tp, '', name, module) - - def _loaded_cpy_anonymous(self, tp, name, module, **kwds): - if isinstance(tp, model.EnumType): - self._loaded_cpy_enum(tp, name, module, **kwds) - else: - self._loaded_struct_or_union(tp) - - # ---------- - # constants, likely declared with '#define' - - def _generate_cpy_const(self, is_int, name, tp=None, category='const', - vartp=None, delayed=True, size_too=False, - check_value=None): - prnt = self._prnt - funcname = '_cffi_%s_%s' % (category, name) - prnt('static int %s(PyObject *lib)' % funcname) - prnt('{') - prnt(' PyObject *o;') - prnt(' int res;') - if not is_int: - prnt(' %s;' % (vartp or tp).get_c_name(' i', name)) - else: - assert category == 'const' - # - if check_value is not None: - self._check_int_constant_value(name, check_value) - # - if not is_int: - if category == 'var': - realexpr = '&' + name - else: - realexpr = name - prnt(' i = (%s);' % (realexpr,)) - prnt(' o = %s;' % (self._convert_expr_from_c(tp, 'i', - 'variable type'),)) - assert delayed - else: - prnt(' o = _cffi_from_c_int_const(%s);' % name) - prnt(' if (o == NULL)') - prnt(' return -1;') - if size_too: - prnt(' {') - prnt(' PyObject *o1 = o;') - prnt(' o = Py_BuildValue("On", o1, (Py_ssize_t)sizeof(%s));' - % (name,)) - prnt(' Py_DECREF(o1);') - prnt(' if (o == 
NULL)') - prnt(' return -1;') - prnt(' }') - prnt(' res = PyObject_SetAttrString(lib, "%s", o);' % name) - prnt(' Py_DECREF(o);') - prnt(' if (res < 0)') - prnt(' return -1;') - prnt(' return %s;' % self._chained_list_constants[delayed]) - self._chained_list_constants[delayed] = funcname + '(lib)' - prnt('}') - prnt() - - def _generate_cpy_constant_collecttype(self, tp, name): - is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() - if not is_int: - self._do_collect_type(tp) - - def _generate_cpy_constant_decl(self, tp, name): - is_int = isinstance(tp, model.PrimitiveType) and tp.is_integer_type() - self._generate_cpy_const(is_int, name, tp) - - _generate_cpy_constant_method = _generate_nothing - _loading_cpy_constant = _loaded_noop - _loaded_cpy_constant = _loaded_noop - - # ---------- - # enums - - def _check_int_constant_value(self, name, value, err_prefix=''): - prnt = self._prnt - if value <= 0: - prnt(' if ((%s) > 0 || (long)(%s) != %dL) {' % ( - name, name, value)) - else: - prnt(' if ((%s) <= 0 || (unsigned long)(%s) != %dUL) {' % ( - name, name, value)) - prnt(' char buf[64];') - prnt(' if ((%s) <= 0)' % name) - prnt(' snprintf(buf, 63, "%%ld", (long)(%s));' % name) - prnt(' else') - prnt(' snprintf(buf, 63, "%%lu", (unsigned long)(%s));' % - name) - prnt(' PyErr_Format(_cffi_VerificationError,') - prnt(' "%s%s has the real value %s, not %s",') - prnt(' "%s", "%s", buf, "%d");' % ( - err_prefix, name, value)) - prnt(' return -1;') - prnt(' }') - - def _enum_funcname(self, prefix, name): - # "$enum_$1" => "___D_enum____D_1" - name = name.replace('$', '___D_') - return '_cffi_e_%s_%s' % (prefix, name) - - def _generate_cpy_enum_decl(self, tp, name, prefix='enum'): - if tp.partial: - for enumerator in tp.enumerators: - self._generate_cpy_const(True, enumerator, delayed=False) - return - # - funcname = self._enum_funcname(prefix, name) - prnt = self._prnt - prnt('static int %s(PyObject *lib)' % funcname) - prnt('{') - for enumerator, enumvalue 
in zip(tp.enumerators, tp.enumvalues): - self._check_int_constant_value(enumerator, enumvalue, - "enum %s: " % name) - prnt(' return %s;' % self._chained_list_constants[True]) - self._chained_list_constants[True] = funcname + '(lib)' - prnt('}') - prnt() - - _generate_cpy_enum_collecttype = _generate_nothing - _generate_cpy_enum_method = _generate_nothing - - def _loading_cpy_enum(self, tp, name, module): - if tp.partial: - enumvalues = [getattr(module, enumerator) - for enumerator in tp.enumerators] - tp.enumvalues = tuple(enumvalues) - tp.partial_resolved = True - - def _loaded_cpy_enum(self, tp, name, module, library): - for enumerator, enumvalue in zip(tp.enumerators, tp.enumvalues): - setattr(library, enumerator, enumvalue) - - # ---------- - # macros: for now only for integers - - def _generate_cpy_macro_decl(self, tp, name): - if tp == '...': - check_value = None - else: - check_value = tp # an integer - self._generate_cpy_const(True, name, check_value=check_value) - - _generate_cpy_macro_collecttype = _generate_nothing - _generate_cpy_macro_method = _generate_nothing - _loading_cpy_macro = _loaded_noop - _loaded_cpy_macro = _loaded_noop - - # ---------- - # global variables - - def _generate_cpy_variable_collecttype(self, tp, name): - if isinstance(tp, model.ArrayType): - tp_ptr = model.PointerType(tp.item) - else: - tp_ptr = model.PointerType(tp) - self._do_collect_type(tp_ptr) - - def _generate_cpy_variable_decl(self, tp, name): - if isinstance(tp, model.ArrayType): - tp_ptr = model.PointerType(tp.item) - self._generate_cpy_const(False, name, tp, vartp=tp_ptr, - size_too = tp.length_is_unknown()) - else: - tp_ptr = model.PointerType(tp) - self._generate_cpy_const(False, name, tp_ptr, category='var') - - _generate_cpy_variable_method = _generate_nothing - _loading_cpy_variable = _loaded_noop - - def _loaded_cpy_variable(self, tp, name, module, library): - value = getattr(library, name) - if isinstance(tp, model.ArrayType): # int a[5] is "constant" in the - 
# sense that "a=..." is forbidden - if tp.length_is_unknown(): - assert isinstance(value, tuple) - (value, size) = value - BItemType = self.ffi._get_cached_btype(tp.item) - length, rest = divmod(size, self.ffi.sizeof(BItemType)) - if rest != 0: - raise VerificationError( - "bad size: %r does not seem to be an array of %s" % - (name, tp.item)) - tp = tp.resolve_length(length) - # 'value' is a <cdata 'type (*)[N]'> which we have to replace with - # a <cdata 'type[N]'> if the N is actually known - if tp.length is not None: - BArray = self.ffi._get_cached_btype(tp) - value = self.ffi.cast(BArray, value) - setattr(library, name, value) - return - # remove ptr=<cdata 'int *'> from the library instance, and replace - # it by a property on the class, which reads/writes into ptr[0]. - ptr = value - delattr(library, name) - def getter(library): - return ptr[0] - def setter(library, value): - ptr[0] = value - setattr(type(library), name, property(getter, setter)) - type(library)._cffi_dir.append(name) - - # ---------- - - def _generate_setup_custom(self): - prnt = self._prnt - prnt('static int _cffi_setup_custom(PyObject *lib)') - prnt('{') - prnt(' return %s;' % self._chained_list_constants[True]) - prnt('}') - -cffimod_header = r''' -#include <Python.h> -#include <stddef.h> - -/* this block of #ifs should be kept exactly identical between - c/_cffi_backend.c, cffi/vengine_cpy.py, cffi/vengine_gen.py - and cffi/_cffi_include.h */ -#if defined(_MSC_VER) -# include <malloc.h> /* for alloca() */ -# if _MSC_VER < 1600 /* MSVC < 2010 */ - typedef __int8 int8_t; - typedef __int16 int16_t; - typedef __int32 int32_t; - typedef __int64 int64_t; - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - typedef unsigned __int64 uint64_t; - typedef __int8 int_least8_t; - typedef __int16 int_least16_t; - typedef __int32 int_least32_t; - typedef __int64 int_least64_t; - typedef unsigned __int8 uint_least8_t; - typedef unsigned __int16 uint_least16_t; - typedef unsigned __int32 uint_least32_t; - typedef unsigned __int64 
uint_least64_t; - typedef __int8 int_fast8_t; - typedef __int16 int_fast16_t; - typedef __int32 int_fast32_t; - typedef __int64 int_fast64_t; - typedef unsigned __int8 uint_fast8_t; - typedef unsigned __int16 uint_fast16_t; - typedef unsigned __int32 uint_fast32_t; - typedef unsigned __int64 uint_fast64_t; - typedef __int64 intmax_t; - typedef unsigned __int64 uintmax_t; -# else -# include <stdint.h> -# endif -# if _MSC_VER < 1800 /* MSVC < 2013 */ -# ifndef __cplusplus - typedef unsigned char _Bool; -# endif -# endif -# define _cffi_float_complex_t _Fcomplex /* include <complex.h> for it */ -# define _cffi_double_complex_t _Dcomplex /* include <complex.h> for it */ -#else -# include <stdint.h> -# if (defined (__SVR4) && defined (__sun)) || defined(_AIX) || defined(__hpux) -# include <alloca.h> -# endif -# define _cffi_float_complex_t float _Complex -# define _cffi_double_complex_t double _Complex -#endif - -#if PY_MAJOR_VERSION < 3 -# undef PyCapsule_CheckExact -# undef PyCapsule_GetPointer -# define PyCapsule_CheckExact(capsule) (PyCObject_Check(capsule)) -# define PyCapsule_GetPointer(capsule, name) \ - (PyCObject_AsVoidPtr(capsule)) -#endif - -#if PY_MAJOR_VERSION >= 3 -# define PyInt_FromLong PyLong_FromLong -#endif - -#define _cffi_from_c_double PyFloat_FromDouble -#define _cffi_from_c_float PyFloat_FromDouble -#define _cffi_from_c_long PyInt_FromLong -#define _cffi_from_c_ulong PyLong_FromUnsignedLong -#define _cffi_from_c_longlong PyLong_FromLongLong -#define _cffi_from_c_ulonglong PyLong_FromUnsignedLongLong -#define _cffi_from_c__Bool PyBool_FromLong - -#define _cffi_to_c_double PyFloat_AsDouble -#define _cffi_to_c_float PyFloat_AsDouble - -#define _cffi_from_c_int_const(x) \ - (((x) > 0) ? \ - ((unsigned long long)(x) <= (unsigned long long)LONG_MAX) ? \ - PyInt_FromLong((long)(x)) : \ - PyLong_FromUnsignedLongLong((unsigned long long)(x)) : \ - ((long long)(x) >= (long long)LONG_MIN) ? 
\ - PyInt_FromLong((long)(x)) : \ - PyLong_FromLongLong((long long)(x))) - -#define _cffi_from_c_int(x, type) \ - (((type)-1) > 0 ? /* unsigned */ \ - (sizeof(type) < sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - sizeof(type) == sizeof(long) ? \ - PyLong_FromUnsignedLong((unsigned long)x) : \ - PyLong_FromUnsignedLongLong((unsigned long long)x)) : \ - (sizeof(type) <= sizeof(long) ? \ - PyInt_FromLong((long)x) : \ - PyLong_FromLongLong((long long)x))) - -#define _cffi_to_c_int(o, type) \ - ((type)( \ - sizeof(type) == 1 ? (((type)-1) > 0 ? (type)_cffi_to_c_u8(o) \ - : (type)_cffi_to_c_i8(o)) : \ - sizeof(type) == 2 ? (((type)-1) > 0 ? (type)_cffi_to_c_u16(o) \ - : (type)_cffi_to_c_i16(o)) : \ - sizeof(type) == 4 ? (((type)-1) > 0 ? (type)_cffi_to_c_u32(o) \ - : (type)_cffi_to_c_i32(o)) : \ - sizeof(type) == 8 ? (((type)-1) > 0 ? (type)_cffi_to_c_u64(o) \ - : (type)_cffi_to_c_i64(o)) : \ - (Py_FatalError("unsupported size for type " #type), (type)0))) - -#define _cffi_to_c_i8 \ - ((int(*)(PyObject *))_cffi_exports[1]) -#define _cffi_to_c_u8 \ - ((int(*)(PyObject *))_cffi_exports[2]) -#define _cffi_to_c_i16 \ - ((int(*)(PyObject *))_cffi_exports[3]) -#define _cffi_to_c_u16 \ - ((int(*)(PyObject *))_cffi_exports[4]) -#define _cffi_to_c_i32 \ - ((int(*)(PyObject *))_cffi_exports[5]) -#define _cffi_to_c_u32 \ - ((unsigned int(*)(PyObject *))_cffi_exports[6]) -#define _cffi_to_c_i64 \ - ((long long(*)(PyObject *))_cffi_exports[7]) -#define _cffi_to_c_u64 \ - ((unsigned long long(*)(PyObject *))_cffi_exports[8]) -#define _cffi_to_c_char \ - ((int(*)(PyObject *))_cffi_exports[9]) -#define _cffi_from_c_pointer \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[10]) -#define _cffi_to_c_pointer \ - ((char *(*)(PyObject *, CTypeDescrObject *))_cffi_exports[11]) -#define _cffi_get_struct_layout \ - ((PyObject *(*)(Py_ssize_t[]))_cffi_exports[12]) -#define _cffi_restore_errno \ - ((void(*)(void))_cffi_exports[13]) -#define _cffi_save_errno \ - 
((void(*)(void))_cffi_exports[14]) -#define _cffi_from_c_char \ - ((PyObject *(*)(char))_cffi_exports[15]) -#define _cffi_from_c_deref \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[16]) -#define _cffi_to_c \ - ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[17]) -#define _cffi_from_c_struct \ - ((PyObject *(*)(char *, CTypeDescrObject *))_cffi_exports[18]) -#define _cffi_to_c_wchar_t \ - ((wchar_t(*)(PyObject *))_cffi_exports[19]) -#define _cffi_from_c_wchar_t \ - ((PyObject *(*)(wchar_t))_cffi_exports[20]) -#define _cffi_to_c_long_double \ - ((long double(*)(PyObject *))_cffi_exports[21]) -#define _cffi_to_c__Bool \ - ((_Bool(*)(PyObject *))_cffi_exports[22]) -#define _cffi_prepare_pointer_call_argument \ - ((Py_ssize_t(*)(CTypeDescrObject *, PyObject *, char **))_cffi_exports[23]) -#define _cffi_convert_array_from_object \ - ((int(*)(char *, CTypeDescrObject *, PyObject *))_cffi_exports[24]) -#define _CFFI_NUM_EXPORTS 25 - -typedef struct _ctypedescr CTypeDescrObject; - -static void *_cffi_exports[_CFFI_NUM_EXPORTS]; -static PyObject *_cffi_types, *_cffi_VerificationError; - -static int _cffi_setup_custom(PyObject *lib); /* forward */ - -static PyObject *_cffi_setup(PyObject *self, PyObject *args) -{ - PyObject *library; - int was_alive = (_cffi_types != NULL); - (void)self; /* unused */ - if (!PyArg_ParseTuple(args, "OOO", &_cffi_types, &_cffi_VerificationError, - &library)) - return NULL; - Py_INCREF(_cffi_types); - Py_INCREF(_cffi_VerificationError); - if (_cffi_setup_custom(library) < 0) - return NULL; - return PyBool_FromLong(was_alive); -} - -union _cffi_union_alignment_u { - unsigned char m_char; - unsigned short m_short; - unsigned int m_int; - unsigned long m_long; - unsigned long long m_longlong; - float m_float; - double m_double; - long double m_longdouble; -}; - -struct _cffi_freeme_s { - struct _cffi_freeme_s *next; - union _cffi_union_alignment_u alignment; -}; - -#ifdef __GNUC__ - __attribute__((unused)) -#endif 
-static int _cffi_convert_array_argument(CTypeDescrObject *ctptr, PyObject *arg, - char **output_data, Py_ssize_t datasize, - struct _cffi_freeme_s **freeme) -{ - char *p; - if (datasize < 0) - return -1; - - p = *output_data; - if (p == NULL) { - struct _cffi_freeme_s *fp = (struct _cffi_freeme_s *)PyObject_Malloc( - offsetof(struct _cffi_freeme_s, alignment) + (size_t)datasize); - if (fp == NULL) - return -1; - fp->next = *freeme; - *freeme = fp; - p = *output_data = (char *)&fp->alignment; - } - memset((void *)p, 0, (size_t)datasize); - return _cffi_convert_array_from_object(p, ctptr, arg); -} - -#ifdef __GNUC__ - __attribute__((unused)) -#endif -static void _cffi_free_array_arguments(struct _cffi_freeme_s *freeme) -{ - do { - void *p = (void *)freeme; - freeme = freeme->next; - PyObject_Free(p); - } while (freeme != NULL); -} - -static int _cffi_init(void) -{ - PyObject *module, *c_api_object = NULL; - - module = PyImport_ImportModule("_cffi_backend"); - if (module == NULL) - goto failure; - - c_api_object = PyObject_GetAttrString(module, "_C_API"); - if (c_api_object == NULL) - goto failure; - if (!PyCapsule_CheckExact(c_api_object)) { - PyErr_SetNone(PyExc_ImportError); - goto failure; - } - memcpy(_cffi_exports, PyCapsule_GetPointer(c_api_object, "cffi"), - _CFFI_NUM_EXPORTS * sizeof(void *)); - - Py_DECREF(module); - Py_DECREF(c_api_object); - return 0; - - failure: - Py_XDECREF(module); - Py_XDECREF(c_api_object); - return -1; -} - -#define _cffi_type(num) ((CTypeDescrObject *)PyList_GET_ITEM(_cffi_types, num)) - -/**********/ -''' diff --git a/pptx-env/lib/python3.12/site-packages/cffi/vengine_gen.py b/pptx-env/lib/python3.12/site-packages/cffi/vengine_gen.py deleted file mode 100644 index bffc8212..00000000 --- a/pptx-env/lib/python3.12/site-packages/cffi/vengine_gen.py +++ /dev/null @@ -1,679 +0,0 @@ -# -# DEPRECATED: implementation for ffi.verify() -# -import sys, os -import types - -from . 
diff --git a/pptx-env/lib/python3.12/site-packages/cffi/verifier.py b/pptx-env/lib/python3.12/site-packages/cffi/verifier.py
deleted file mode 100644
index e392a2b7..00000000
--- a/pptx-env/lib/python3.12/site-packages/cffi/verifier.py
+++ /dev/null
@@ -1,306 +0,0 @@
- force_generic_engine = True - else: - try: - import _cffi_backend - except ImportError: - _cffi_backend = '?' - if ffi._backend is not _cffi_backend: - force_generic_engine = True - if force_generic_engine: - from . import vengine_gen - return vengine_gen.VGenericEngine - else: - from . import vengine_cpy - return vengine_cpy.VCPythonEngine - -# ____________________________________________________________ - -_TMPDIR = None - -def _caller_dir_pycache(): - if _TMPDIR: - return _TMPDIR - result = os.environ.get('CFFI_TMPDIR') - if result: - return result - filename = sys._getframe(2).f_code.co_filename - return os.path.abspath(os.path.join(os.path.dirname(filename), - '__pycache__')) - -def set_tmpdir(dirname): - """Set the temporary directory to use instead of __pycache__.""" - global _TMPDIR - _TMPDIR = dirname - -def cleanup_tmpdir(tmpdir=None, keep_so=False): - """Clean up the temporary directory by removing all files in it - called `_cffi_*.{c,so}` as well as the `build` subdirectory.""" - tmpdir = tmpdir or _caller_dir_pycache() - try: - filelist = os.listdir(tmpdir) - except OSError: - return - if keep_so: - suffix = '.c' # only remove .c files - else: - suffix = _get_so_suffixes()[0].lower() - for fn in filelist: - if fn.lower().startswith('_cffi_') and ( - fn.lower().endswith(suffix) or fn.lower().endswith('.c')): - try: - os.unlink(os.path.join(tmpdir, fn)) - except OSError: - pass - clean_dir = [os.path.join(tmpdir, 'build')] - for dir in clean_dir: - try: - for fn in os.listdir(dir): - fn = os.path.join(dir, fn) - if os.path.isdir(fn): - clean_dir.append(fn) - else: - os.unlink(fn) - except OSError: - pass - -def _get_so_suffixes(): - suffixes = _extension_suffixes() - if not suffixes: - # bah, no C_EXTENSION available. 
Occurs on pypy without cpyext - if sys.platform == 'win32': - suffixes = [".pyd"] - else: - suffixes = [".so"] - - return suffixes - -def _ensure_dir(filename): - dirname = os.path.dirname(filename) - if dirname and not os.path.isdir(dirname): - os.makedirs(dirname) diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/INSTALLER b/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/INSTALLER deleted file mode 100644 index a1b589e3..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/INSTALLER +++ /dev/null @@ -1 +0,0 @@ -pip diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/METADATA b/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/METADATA deleted file mode 100644 index c00c2e1a..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/METADATA +++ /dev/null @@ -1,63 +0,0 @@ -Metadata-Version: 2.4 -Name: cssselect2 -Version: 0.8.0 -Summary: CSS selectors for Python ElementTree -Keywords: css,elementtree -Author-email: Simon Sapin -Maintainer-email: CourtBouillon -Requires-Python: >=3.9 -Description-Content-Type: text/x-rst -Classifier: Development Status :: 5 - Production/Stable -Classifier: Intended Audience :: Developers -Classifier: License :: OSI Approved :: BSD License -Classifier: Operating System :: OS Independent -Classifier: Programming Language :: Python -Classifier: Programming Language :: Python :: 3 -Classifier: Programming Language :: Python :: 3 :: Only -Classifier: Programming Language :: Python :: 3.9 -Classifier: Programming Language :: Python :: 3.10 -Classifier: Programming Language :: Python :: 3.11 -Classifier: Programming Language :: Python :: 3.12 -Classifier: Programming Language :: Python :: 3.13 -Classifier: Programming Language :: Python :: Implementation :: CPython -Classifier: Programming Language :: Python :: Implementation :: PyPy -Classifier: Topic :: Internet :: WWW/HTTP -License-File: LICENSE 
-Requires-Dist: tinycss2 -Requires-Dist: webencodings -Requires-Dist: sphinx ; extra == "doc" -Requires-Dist: furo ; extra == "doc" -Requires-Dist: pytest ; extra == "test" -Requires-Dist: ruff ; extra == "test" -Project-URL: Changelog, https://github.com/Kozea/cssselect2/releases -Project-URL: Code, https://github.com/Kozea/cssselect2/ -Project-URL: Documentation, https://doc.courtbouillon.org/cssselect2/ -Project-URL: Donation, https://opencollective.com/courtbouillon -Project-URL: Homepage, https://doc.courtbouillon.org/cssselect2/ -Project-URL: Issues, https://github.com/Kozea/cssselect2/issues -Provides-Extra: doc -Provides-Extra: test - -cssselect2 is a straightforward implementation of CSS4 Selectors for markup -documents (HTML, XML, etc.) that can be read by ElementTree-like parsers -(including cElementTree, lxml, html5lib, etc.) - -* Free software: BSD license -* For Python 3.9+, tested on CPython and PyPy -* Documentation: https://doc.courtbouillon.org/cssselect2 -* Changelog: https://github.com/Kozea/cssselect2/releases -* Code, issues, tests: https://github.com/Kozea/cssselect2 -* Code of conduct: https://www.courtbouillon.org/code-of-conduct.html -* Professional support: https://www.courtbouillon.org -* Donation: https://opencollective.com/courtbouillon - -cssselect2 has been created and developed by Kozea (https://kozea.fr/). -Professional support, maintenance and community management is provided by -CourtBouillon (https://www.courtbouillon.org/). - -Copyrights are retained by their contributors, no copyright assignment is -required to contribute to cssselect2. Unless explicitly stated otherwise, any -contribution intentionally submitted for inclusion is licensed under the BSD -3-clause license, without any additional terms or conditions. For full -authorship information, see the version control history. 
- diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/RECORD b/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/RECORD deleted file mode 100644 index a83b9da9..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/RECORD +++ /dev/null @@ -1,13 +0,0 @@ -cssselect2-0.8.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4 -cssselect2-0.8.0.dist-info/METADATA,sha256=kSCZCQ5aCztgfgWZU6A_JEYNNKwMh2oK0atzdgwpNUs,2913 -cssselect2-0.8.0.dist-info/RECORD,, -cssselect2-0.8.0.dist-info/WHEEL,sha256=_2ozNFCLWc93bK4WKHCO-eDUENDlo-dgc9cU3qokYO4,82 -cssselect2-0.8.0.dist-info/licenses/LICENSE,sha256=b9lyKaHRsPaotB4Qn0E0JtvAh0seA3RtZswzKCYBwsI,1548 -cssselect2/__init__.py,sha256=NE8miBh2KOpXtqGKNn5exISqDuWFJ_z3VmnxpAiblDI,4289 -cssselect2/__pycache__/__init__.cpython-312.pyc,, -cssselect2/__pycache__/compiler.cpython-312.pyc,, -cssselect2/__pycache__/parser.cpython-312.pyc,, -cssselect2/__pycache__/tree.cpython-312.pyc,, -cssselect2/compiler.py,sha256=c5jvLm9VEo3XLi8aeeUozZn6XlKBaeXAmOr-P2YFOUs,18899 -cssselect2/parser.py,sha256=Kmh5XY03eF2Bs5x53X2pRXDPbtPyMU5RZ3zrGOHgtJQ,16285 -cssselect2/tree.py,sha256=7ewFwKfLGwCYjdCle6hY0gk8_vAQqw7DvU_lmpqt1eg,13634 diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/WHEEL b/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/WHEEL deleted file mode 100644 index 23d2d7e9..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/WHEEL +++ /dev/null @@ -1,4 +0,0 @@ -Wheel-Version: 1.0 -Generator: flit 3.11.0 -Root-Is-Purelib: true -Tag: py3-none-any diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/licenses/LICENSE b/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/licenses/LICENSE deleted file mode 100644 index 520a431b..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2-0.8.0.dist-info/licenses/LICENSE +++ /dev/null @@ -1,29 +0,0 @@ -BSD 
3-Clause License - -Copyright (c) 2012-2018, Simon Sapin and contributors (see AUTHORS). -All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from - this software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" -AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE -FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL -DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR -SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER -CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, -OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/__init__.py b/pptx-env/lib/python3.12/site-packages/cssselect2/__init__.py deleted file mode 100644 index e354d0f8..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2/__init__.py +++ /dev/null @@ -1,113 +0,0 @@ -"""CSS4 selectors for Python. - -cssselect2 is a straightforward implementation of CSS4 Selectors for markup -documents (HTML, XML, etc.) 
that can be read by ElementTree-like parsers -(including cElementTree, lxml, html5lib, etc.) - -""" - -from webencodings import ascii_lower - -# Classes are imported here to expose them at the top level of the module -from .compiler import compile_selector_list # noqa -from .parser import SelectorError # noqa -from .tree import ElementWrapper # noqa - -VERSION = __version__ = '0.8.0' - - -class Matcher: - """A CSS selectors storage that can match against HTML elements.""" - def __init__(self): - self.id_selectors = {} - self.class_selectors = {} - self.lower_local_name_selectors = {} - self.namespace_selectors = {} - self.lang_attr_selectors = [] - self.other_selectors = [] - self.order = 0 - - def add_selector(self, selector, payload): - """Add a selector and its payload to the matcher. - - :param selector: - A :class:`compiler.CompiledSelector` object. - :param payload: - Some data associated to the selector, - such as :class:`declarations ` - parsed from the :attr:`tinycss2.ast.QualifiedRule.content` - of a style rule. - It can be any Python object, - and will be returned as-is by :meth:`match`. - - """ - self.order += 1 - - if selector.never_matches: - return - - entry = ( - selector.test, selector.specificity, self.order, selector.pseudo_element, - payload) - if selector.id is not None: - self.id_selectors.setdefault(selector.id, []).append(entry) - elif selector.class_name is not None: - self.class_selectors.setdefault(selector.class_name, []).append(entry) - elif selector.local_name is not None: - self.lower_local_name_selectors.setdefault( - selector.lower_local_name, []).append(entry) - elif selector.namespace is not None: - self.namespace_selectors.setdefault(selector.namespace, []).append(entry) - elif selector.requires_lang_attr: - self.lang_attr_selectors.append(entry) - else: - self.other_selectors.append(entry) - - def match(self, element): - """Match selectors against the given element. - - :param element: - An :class:`ElementWrapper`. 
- :returns: - A list of the payload objects associated to selectors that match - element, in order of lowest to highest - :attr:`compiler.CompiledSelector` specificity and in order of - addition with :meth:`add_selector` among selectors of equal - specificity. - - """ - relevant_selectors = [] - - if element.id is not None and element.id in self.id_selectors: - self.add_relevant_selectors( - element, self.id_selectors[element.id], relevant_selectors) - - for class_name in element.classes: - if class_name in self.class_selectors: - self.add_relevant_selectors( - element, self.class_selectors[class_name], relevant_selectors) - - lower_name = ascii_lower(element.local_name) - if lower_name in self.lower_local_name_selectors: - self.add_relevant_selectors( - element, self.lower_local_name_selectors[lower_name], - relevant_selectors) - if element.namespace_url in self.namespace_selectors: - self.add_relevant_selectors( - element, self.namespace_selectors[element.namespace_url], - relevant_selectors) - - if 'lang' in element.etree_element.attrib: - self.add_relevant_selectors( - element, self.lang_attr_selectors, relevant_selectors) - - self.add_relevant_selectors(element, self.other_selectors, relevant_selectors) - - relevant_selectors.sort() - return relevant_selectors - - @staticmethod - def add_relevant_selectors(element, selectors, relevant_selectors): - for test, specificity, order, pseudo, payload in selectors: - if test(element): - relevant_selectors.append((specificity, order, pseudo, payload)) diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 88e84c2c..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/compiler.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/compiler.cpython-312.pyc deleted file mode 100644 index 0dd2fd88..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/compiler.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/parser.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/parser.cpython-312.pyc deleted file mode 100644 index 42d5dce2..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/parser.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/tree.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/tree.cpython-312.pyc deleted file mode 100644 index 0404b973..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/cssselect2/__pycache__/tree.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/compiler.py b/pptx-env/lib/python3.12/site-packages/cssselect2/compiler.py deleted file mode 100644 index 23017662..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2/compiler.py +++ /dev/null @@ -1,426 +0,0 @@ -import re -from urllib.parse import urlparse - -from tinycss2.nth import parse_nth -from webencodings import ascii_lower - -from . import parser -from .parser import SelectorError - -# http://dev.w3.org/csswg/selectors/#whitespace -split_whitespace = re.compile('[^ \t\r\n\f]+').findall - - -def compile_selector_list(input, namespaces=None): - """Compile a (comma-separated) list of selectors. - - :param input: - A string, or an iterable of tinycss2 component values such as - the :attr:`tinycss2.ast.QualifiedRule.prelude` of a style rule. - :param namespaces: - A optional dictionary of all `namespace prefix declarations - `_ in scope for this selector. - Keys are namespace prefixes as strings, or ``None`` for the default - namespace. 
- Values are namespace URLs as strings. - If omitted, assume that no prefix is declared. - :returns: - A list of opaque :class:`compiler.CompiledSelector` objects. - - """ - return [CompiledSelector(selector) for selector in parser.parse(input, namespaces)] - - -class CompiledSelector: - """Abstract representation of a selector.""" - def __init__(self, parsed_selector): - source = _compile_node(parsed_selector.parsed_tree) - self.never_matches = source == '0' - eval_globals = { - 'split_whitespace': split_whitespace, - 'ascii_lower': ascii_lower, - 'urlparse': urlparse, - } - self.test = eval('lambda el: ' + source, eval_globals, {}) - self.specificity = parsed_selector.specificity - self.pseudo_element = parsed_selector.pseudo_element - self.id = None - self.class_name = None - self.local_name = None - self.lower_local_name = None - self.namespace = None - self.requires_lang_attr = False - - node = parsed_selector.parsed_tree - if isinstance(node, parser.CombinedSelector): - node = node.right - for simple_selector in node.simple_selectors: - if isinstance(simple_selector, parser.IDSelector): - self.id = simple_selector.ident - elif isinstance(simple_selector, parser.ClassSelector): - self.class_name = simple_selector.class_name - elif isinstance(simple_selector, parser.LocalNameSelector): - self.local_name = simple_selector.local_name - self.lower_local_name = simple_selector.lower_local_name - elif isinstance(simple_selector, parser.NamespaceSelector): - self.namespace = simple_selector.namespace - elif isinstance(simple_selector, parser.AttributeSelector): - if simple_selector.name == 'lang': - self.requires_lang_attr = True - - -def _compile_node(selector): - """Return a boolean expression, as a Python source string. - - When evaluated in a context where the `el` variable is an - :class:`cssselect2.tree.Element` object, tells whether the element is a - subject of `selector`. 
- - """ - # To avoid precedence-related bugs, any sub-expression that is passed - # around must be "atomic": add parentheses when the top-level would be - # an operator. Bare literals and function calls are fine. - - # 1 and 0 are used for True and False to avoid global lookups. - - if isinstance(selector, parser.CombinedSelector): - left_inside = _compile_node(selector.left) - if left_inside == '0': - return '0' # 0 and x == 0 - elif left_inside == '1': - # 1 and x == x, but the element matching 1 still needs to exist. - if selector.combinator in (' ', '>'): - left = 'el.parent is not None' - elif selector.combinator in ('~', '+'): - left = 'el.previous is not None' - else: - raise SelectorError('Unknown combinator', selector.combinator) - # Rebind the `el` name inside a generator-expressions (in a new scope) - # so that 'left_inside' applies to different elements. - elif selector.combinator == ' ': - left = f'any(({left_inside}) for el in el.ancestors)' - elif selector.combinator == '>': - left = ( - f'next(el is not None and ({left_inside}) ' - 'for el in [el.parent])') - elif selector.combinator == '+': - left = ( - f'next(el is not None and ({left_inside}) ' - 'for el in [el.previous])') - elif selector.combinator == '~': - left = f'any(({left_inside}) for el in el.previous_siblings)' - else: - raise SelectorError('Unknown combinator', selector.combinator) - - right = _compile_node(selector.right) - if right == '0': - return '0' # 0 and x == 0 - elif right == '1': - return left # 1 and x == x - else: - # Evaluate combinators right to left - return f'({right}) and ({left})' - - elif isinstance(selector, parser.CompoundSelector): - sub_expressions = [ - expr for expr in [ - _compile_node(selector) - for selector in selector.simple_selectors] - if expr != '1'] - if len(sub_expressions) == 1: - return sub_expressions[0] - elif '0' in sub_expressions: - return '0' - elif sub_expressions: - return ' and '.join(f'({el})' for el in sub_expressions) - else: - return 
'1' # all([]) == True - - elif isinstance(selector, parser.NegationSelector): - sub_expressions = [ - expr for expr in [ - _compile_node(selector.parsed_tree) - for selector in selector.selector_list] - if expr != '1'] - if not sub_expressions: - return '0' - return f'not ({" or ".join(f"({expr})" for expr in sub_expressions)})' - - elif isinstance(selector, parser.RelationalSelector): - sub_expressions = [] - for relative_selector in selector.selector_list: - expression = _compile_node(relative_selector.selector.parsed_tree) - if expression == '0': - continue - if relative_selector.combinator == ' ': - elements = 'list(el.iter_subtree())[1:]' - elif relative_selector.combinator == '>': - elements = 'el.iter_children()' - elif relative_selector.combinator == '+': - elements = 'list(el.iter_next_siblings())[:1]' - elif relative_selector.combinator == '~': - elements = 'el.iter_next_siblings()' - sub_expressions.append(f'(any({expression} for el in {elements}))') - return ' or '.join(sub_expressions) - - elif isinstance(selector, ( - parser.MatchesAnySelector, parser.SpecificityAdjustmentSelector)): - sub_expressions = [ - expr for expr in [ - _compile_node(selector.parsed_tree) - for selector in selector.selector_list] - if expr != '0'] - if not sub_expressions: - return '0' - return ' or '.join(f'({expr})' for expr in sub_expressions) - - elif isinstance(selector, parser.LocalNameSelector): - if selector.lower_local_name == selector.local_name: - return f'el.local_name == {selector.local_name!r}' - else: - return ( - f'el.local_name == ({selector.lower_local_name!r} ' - f'if el.in_html_document else {selector.local_name!r})') - - elif isinstance(selector, parser.NamespaceSelector): - return f'el.namespace_url == {selector.namespace!r}' - - elif isinstance(selector, parser.ClassSelector): - return f'{selector.class_name!r} in el.classes' - - elif isinstance(selector, parser.IDSelector): - return f'el.id == {selector.ident!r}' - - elif isinstance(selector, 
parser.AttributeSelector): - if selector.namespace is not None: - if selector.namespace: - if selector.name == selector.lower_name: - key = repr(f'{{{selector.namespace}}}{selector.name}') - else: - lower = f'{{{selector.namespace}}}{selector.lower_name}' - name = f'{{{selector.namespace}}}{selector.name}' - key = f'({lower!r} if el.in_html_document else {name!r})' - else: - if selector.name == selector.lower_name: - key = repr(selector.name) - else: - lower, name = selector.lower_name, selector.name - key = f'({lower!r} if el.in_html_document else {name!r})' - value = selector.value - attribute_value = f'el.etree_element.get({key}, "")' - if selector.case_sensitive is False: - value = value.lower() - attribute_value += '.lower()' - if selector.operator is None: - return f'{key} in el.etree_element.attrib' - elif selector.operator == '=': - return ( - f'{key} in el.etree_element.attrib and ' - f'{attribute_value} == {value!r}') - elif selector.operator == '~=': - return ( - '0' if len(value.split()) != 1 or value.strip() != value - else f'{value!r} in split_whitespace({attribute_value})') - elif selector.operator == '|=': - return ( - f'{key} in el.etree_element.attrib and ' - f'{attribute_value} == {value!r} or ' - f'{attribute_value}.startswith({(value + "-")!r})') - elif selector.operator == '^=': - if value: - return f'{attribute_value}.startswith({value!r})' - else: - return '0' - elif selector.operator == '$=': - return ( - f'{attribute_value}.endswith({value!r})' if value else '0') - elif selector.operator == '*=': - return f'{value!r} in {attribute_value}' if value else '0' - else: - raise SelectorError('Unknown attribute operator', selector.operator) - else: # In any namespace - raise NotImplementedError # TODO - - elif isinstance(selector, parser.PseudoClassSelector): - if selector.name in ('link', 'any-link', 'local-link'): - test = html_tag_eq('a', 'area', 'link') - test += ' and el.etree_element.get("href") is not None ' - if selector.name == 
'local-link': - test += 'and not urlparse(el.etree_element.get("href")).scheme' - return test - elif selector.name == 'enabled': - input = html_tag_eq( - 'button', 'input', 'select', 'textarea', 'option') - group = html_tag_eq('optgroup', 'menuitem', 'fieldset') - a = html_tag_eq('a', 'area', 'link') - return ( - f'({input} and el.etree_element.get("disabled") is None' - ' and not el.in_disabled_fieldset) or' - f'({group} and el.etree_element.get("disabled") is None) or ' - f'({a} and el.etree_element.get("href") is not None)') - elif selector.name == 'disabled': - input = html_tag_eq( - 'button', 'input', 'select', 'textarea', 'option') - group = html_tag_eq('optgroup', 'menuitem', 'fieldset') - return ( - f'({input} and (el.etree_element.get("disabled") is not None' - ' or el.in_disabled_fieldset)) or' - f'({group} and el.etree_element.get("disabled") is not None)') - elif selector.name == 'checked': - input = html_tag_eq('input', 'menuitem') - option = html_tag_eq('option') - return ( - f'({input} and el.etree_element.get("checked") is not None and' - ' ascii_lower(el.etree_element.get("type", "")) ' - ' in ("checkbox", "radio")) or (' - f'{option} and el.etree_element.get("selected") is not None)') - elif selector.name in ( - 'visited', 'hover', 'active', 'focus', 'focus-within', - 'focus-visible', 'target', 'target-within', 'current', 'past', - 'future', 'playing', 'paused', 'seeking', 'buffering', - 'stalled', 'muted', 'volume-locked', 'user-valid', - 'user-invalid'): - # Not applicable in a static context: never match. 
- return '0' - elif selector.name in ('root', 'scope'): - return 'el.parent is None' - elif selector.name == 'first-child': - return 'el.index == 0' - elif selector.name == 'last-child': - return 'el.index + 1 == len(el.etree_siblings)' - elif selector.name == 'first-of-type': - return ( - 'all(s.tag != el.etree_element.tag' - ' for s in el.etree_siblings[:el.index])') - elif selector.name == 'last-of-type': - return ( - 'all(s.tag != el.etree_element.tag' - ' for s in el.etree_siblings[el.index + 1:])') - elif selector.name == 'only-child': - return 'len(el.etree_siblings) == 1' - elif selector.name == 'only-of-type': - return ( - 'all(s.tag != el.etree_element.tag or i == el.index' - ' for i, s in enumerate(el.etree_siblings))') - elif selector.name == 'empty': - return 'not (el.etree_children or el.etree_element.text)' - else: - raise SelectorError('Unknown pseudo-class', selector.name) - - elif isinstance(selector, parser.FunctionalPseudoClassSelector): - if selector.name == 'lang': - langs = [] - tokens = [ - token for token in selector.arguments - if token.type not in ('whitespace', 'comment')] - while tokens: - token = tokens.pop(0) - if token.type == 'ident': - langs.append(token.lower_value) - elif token.type == 'string': - langs.append(ascii_lower(token.value)) - else: - raise SelectorError('Invalid arguments for :lang()') - if tokens: - token = tokens.pop(0) - if token.type != 'ident' and token.value != ',': - raise SelectorError('Invalid arguments for :lang()') - return ' or '.join( - f'el.lang == {lang!r} or el.lang.startswith({(lang + "-")!r})' - for lang in langs) - else: - nth = [] - selector_list = [] - current_list = nth - for argument in selector.arguments: - if argument.type == 'ident' and argument.value == 'of': - if current_list is nth: - current_list = selector_list - continue - current_list.append(argument) - - if selector_list: - test = ' and '.join( - _compile_node(selector.parsed_tree) - for selector in parser.parse(selector_list)) - if 
selector.name == 'nth-child': - count = ( - f'sum(1 for el in el.previous_siblings if ({test}))') - elif selector.name == 'nth-last-child': - count = ( - 'sum(1 for el in' - ' tuple(el.iter_siblings())[el.index + 1:]' - f' if ({test}))') - elif selector.name == 'nth-of-type': - count = ( - 'sum(1 for s in (' - ' el for el in el.previous_siblings' - f' if ({test}))' - ' if s.etree_element.tag == el.etree_element.tag)') - elif selector.name == 'nth-last-of-type': - count = ( - 'sum(1 for s in (' - ' el for el in' - ' tuple(el.iter_siblings())[el.index + 1:]' - f' if ({test}))' - ' if s.etree_element.tag == el.etree_element.tag)') - else: - raise SelectorError('Unknown pseudo-class', selector.name) - count += f'if ({test}) else float("nan")' - else: - if current_list is selector_list: - raise SelectorError( - f'Invalid arguments for :{selector.name}()') - if selector.name == 'nth-child': - count = 'el.index' - elif selector.name == 'nth-last-child': - count = 'len(el.etree_siblings) - el.index - 1' - elif selector.name == 'nth-of-type': - count = ( - 'sum(1 for s in el.etree_siblings[:el.index]' - ' if s.tag == el.etree_element.tag)') - elif selector.name == 'nth-last-of-type': - count = ( - 'sum(1 for s in el.etree_siblings[el.index + 1:]' - ' if s.tag == el.etree_element.tag)') - else: - raise SelectorError('Unknown pseudo-class', selector.name) - - result = parse_nth(nth) - if result is None: - raise SelectorError( - f'Invalid arguments for :{selector.name}()') - a, b = result - # x is the number of siblings before/after the element - # Matches if a positive or zero integer n exists so that: - # x = a*n + b-1 - # x = a*n + B - B = b - 1 # noqa: N806 - if a == 0: - # x = B - return f'({count}) == {B}' - else: - # n = (x - B) / a - return ( - 'next(r == 0 and n >= 0' - f' for n, r in [divmod(({count}) - {B}, {a})])') - - else: - raise TypeError(type(selector), selector) - - -def html_tag_eq(*local_names): - """Generate expression testing equality with HTML local 
names.""" - if len(local_names) == 1: - tag = f'{{http://www.w3.org/1999/xhtml}}{local_names[0]}' - return ( - f'((el.local_name == {local_names[0]!r}) if el.in_html_document ' - f'else (el.etree_element.tag == {tag!r}))') - else: - names = ', '.join(repr(n) for n in local_names) - tags = ', '.join( - repr(f'{{http://www.w3.org/1999/xhtml}}{name}') - for name in local_names) - return ( - f'((el.local_name in ({names})) if el.in_html_document ' - f'else (el.etree_element.tag in ({tags})))') diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/parser.py b/pptx-env/lib/python3.12/site-packages/cssselect2/parser.py deleted file mode 100644 index 1056d6be..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2/parser.py +++ /dev/null @@ -1,522 +0,0 @@ -from tinycss2 import parse_component_value_list - -__all__ = ['parse'] - -SUPPORTED_PSEUDO_ELEMENTS = { - # As per CSS Pseudo-Elements Module Level 4 - 'first-line', 'first-letter', 'prefix', 'postfix', 'selection', - 'target-text', 'spelling-error', 'grammar-error', 'before', 'after', - 'marker', 'placeholder', 'file-selector-button', - # As per CSS Generated Content for Paged Media Module - 'footnote-call', 'footnote-marker', - # As per CSS Scoping Module Level 1 - 'content', 'shadow', -} - - -def parse(input, namespaces=None, forgiving=False, relative=False): - """Yield tinycss2 selectors found in given ``input``. - - :param input: - A string, or an iterable of tinycss2 component values. 
- - """ - if isinstance(input, str): - input = parse_component_value_list(input) - tokens = TokenStream(input) - namespaces = namespaces or {} - try: - yield parse_selector(tokens, namespaces, relative) - except SelectorError as exception: - if forgiving: - return - raise exception - while 1: - next = tokens.next() - if next is None: - return - elif next == ',': - try: - yield parse_selector(tokens, namespaces, relative) - except SelectorError as exception: - if not forgiving: - raise exception - else: - if not forgiving: - raise SelectorError(next, f'unexpected {next.type} token.') - - -def parse_selector(tokens, namespaces, relative=False): - tokens.skip_whitespace_and_comment() - if relative: - peek = tokens.peek() - if peek in ('>', '+', '~'): - initial_combinator = peek.value - tokens.next() - else: - initial_combinator = ' ' - tokens.skip_whitespace_and_comment() - result, pseudo_element = parse_compound_selector(tokens, namespaces) - while 1: - has_whitespace = tokens.skip_whitespace() - while tokens.skip_comment(): - has_whitespace = tokens.skip_whitespace() or has_whitespace - selector = Selector(result, pseudo_element) - if relative: - selector = RelativeSelector(initial_combinator, selector) - if pseudo_element is not None: - return selector - peek = tokens.peek() - if peek is None or peek == ',': - return selector - elif peek in ('>', '+', '~'): - combinator = peek.value - tokens.next() - elif has_whitespace: - combinator = ' ' - else: - return selector - compound, pseudo_element = parse_compound_selector(tokens, namespaces) - result = CombinedSelector(result, combinator, compound) - - -def parse_compound_selector(tokens, namespaces): - type_selectors = parse_type_selector(tokens, namespaces) - simple_selectors = type_selectors if type_selectors is not None else [] - while 1: - simple_selector, pseudo_element = parse_simple_selector( - tokens, namespaces) - if pseudo_element is not None or simple_selector is None: - break - 
simple_selectors.append(simple_selector) - - if simple_selectors or (type_selectors, pseudo_element) != (None, None): - return CompoundSelector(simple_selectors), pseudo_element - - peek = tokens.peek() - peek_type = peek.type if peek else 'EOF' - raise SelectorError(peek, f'expected a compound selector, got {peek_type}') - - -def parse_type_selector(tokens, namespaces): - tokens.skip_whitespace() - qualified_name = parse_qualified_name(tokens, namespaces) - if qualified_name is None: - return None - - simple_selectors = [] - namespace, local_name = qualified_name - if local_name is not None: - simple_selectors.append(LocalNameSelector(local_name)) - if namespace is not None: - simple_selectors.append(NamespaceSelector(namespace)) - return simple_selectors - - -def parse_simple_selector(tokens, namespaces): - peek = tokens.peek() - if peek is None: - return None, None - if peek.type == 'hash' and peek.is_identifier: - tokens.next() - return IDSelector(peek.value), None - elif peek == '.': - tokens.next() - next = tokens.next() - if next is None or next.type != 'ident': - raise SelectorError(next, f'Expected a class name, got {next}') - return ClassSelector(next.value), None - elif peek.type == '[] block': - tokens.next() - attr = parse_attribute_selector(TokenStream(peek.content), namespaces) - return attr, None - elif peek == ':': - tokens.next() - next = tokens.next() - if next == ':': - next = tokens.next() - if next is None or next.type != 'ident': - raise SelectorError(next, f'Expected a pseudo-element name, got {next}') - value = next.lower_value - if value not in SUPPORTED_PSEUDO_ELEMENTS: - raise SelectorError( - next, f'Expected a supported pseudo-element, got {value}') - return None, value - elif next is not None and next.type == 'ident': - name = next.lower_value - if name in ('before', 'after', 'first-line', 'first-letter'): - return None, name - else: - return PseudoClassSelector(name), None - elif next is not None and next.type == 'function': - name = 
next.lower_name - if name in ('is', 'where', 'not', 'has'): - return parse_logical_combination(next, namespaces, name), None - else: - return (FunctionalPseudoClassSelector(name, next.arguments), None) - else: - raise SelectorError(next, f'unexpected {next} token.') - else: - return None, None - - -def parse_logical_combination(matches_any_token, namespaces, name): - forgiving = True - relative = False - if name == 'is': - selector_class = MatchesAnySelector - elif name == 'where': - selector_class = SpecificityAdjustmentSelector - elif name == 'not': - forgiving = False - selector_class = NegationSelector - elif name == 'has': - relative = True - selector_class = RelationalSelector - - selectors = [ - selector for selector in - parse(matches_any_token.arguments, namespaces, forgiving, relative) - if selector.pseudo_element is None] - return selector_class(selectors) - - -def parse_attribute_selector(tokens, namespaces): - tokens.skip_whitespace() - qualified_name = parse_qualified_name(tokens, namespaces, is_attribute=True) - if qualified_name is None: - next = tokens.next() - raise SelectorError(next, f'expected attribute name, got {next}') - namespace, local_name = qualified_name - - tokens.skip_whitespace() - peek = tokens.peek() - if peek is None: - operator = None - value = None - elif peek in ('=', '~=', '|=', '^=', '$=', '*='): - operator = peek.value - tokens.next() - tokens.skip_whitespace() - next = tokens.next() - if next is None or next.type not in ('ident', 'string'): - next_type = 'None' if next is None else next.type - raise SelectorError(next, f'expected attribute value, got {next_type}') - value = next.value - else: - raise SelectorError(peek, f'expected attribute selector operator, got {peek}') - - tokens.skip_whitespace() - next = tokens.next() - case_sensitive = None - if next is not None: - if next.type == 'ident' and next.value.lower() == 'i': - case_sensitive = False - elif next.type == 'ident' and next.value.lower() == 's': - case_sensitive 
= True - else: - raise SelectorError(next, f'expected ], got {next.type}') - return AttributeSelector(namespace, local_name, operator, value, case_sensitive) - - -def parse_qualified_name(tokens, namespaces, is_attribute=False): - """Return ``(namespace, local)`` for given tokens. - - Can also return ``None`` for a wildcard. - - The empty string for ``namespace`` means "no namespace". - - """ - peek = tokens.peek() - if peek is None: - return None - if peek.type == 'ident': - first_ident = tokens.next() - peek = tokens.peek() - if peek != '|': - namespace = '' if is_attribute else namespaces.get(None, None) - return namespace, (first_ident.value, first_ident.lower_value) - tokens.next() - namespace = namespaces.get(first_ident.value) - if namespace is None: - raise SelectorError( - first_ident, f'undefined namespace prefix: {first_ident.value}') - elif peek == '*': - next = tokens.next() - peek = tokens.peek() - if peek != '|': - if is_attribute: - raise SelectorError(next, f'expected local name, got {next.type}') - return namespaces.get(None, None), None - tokens.next() - namespace = None - elif peek == '|': - tokens.next() - namespace = '' - else: - return None - - # If we get here, we just consumed '|' and set ``namespace`` - next = tokens.next() - if next.type == 'ident': - return namespace, (next.value, next.lower_value) - elif next == '*' and not is_attribute: - return namespace, None - else: - raise SelectorError(next, f'expected local name, got {next.type}') - - -class SelectorError(ValueError): - """A specialized ``ValueError`` for invalid selectors.""" - - -class TokenStream: - def __init__(self, tokens): - self.tokens = iter(tokens) - self.peeked = [] # In reversed order - - def next(self): - if self.peeked: - return self.peeked.pop() - else: - return next(self.tokens, None) - - def peek(self): - if not self.peeked: - self.peeked.append(next(self.tokens, None)) - return self.peeked[-1] - - def skip(self, skip_types): - found = False - while 1: - peek = 
self.peek() - if peek is None or peek.type not in skip_types: - break - self.next() - found = True - return found - - def skip_whitespace(self): - return self.skip(['whitespace']) - - def skip_comment(self): - return self.skip(['comment']) - - def skip_whitespace_and_comment(self): - return self.skip(['comment', 'whitespace']) - - -class Selector: - def __init__(self, tree, pseudo_element=None): - self.parsed_tree = tree - self.pseudo_element = pseudo_element - if pseudo_element is None: - #: Tuple of 3 integers: http://www.w3.org/TR/selectors/#specificity - self.specificity = tree.specificity - else: - a, b, c = tree.specificity - self.specificity = a, b, c + 1 - - def __repr__(self): - pseudo = f'::{self.pseudo_element}' if self.pseudo_element else '' - return f'{self.parsed_tree!r}{pseudo}' - - -class RelativeSelector: - def __init__(self, combinator, selector): - self.combinator = combinator - self.selector = selector - - @property - def specificity(self): - return self.selector.specificity - - @property - def pseudo_element(self): - return self.selector.pseudo_element - - def __repr__(self): - return ( - f'{self.selector!r}' if self.combinator == ' ' - else f'{self.combinator} {self.selector!r}') - - -class CombinedSelector: - def __init__(self, left, combinator, right): - #: Combined or compound selector - self.left = left - # One of `` `` (a single space), ``>``, ``+`` or ``~``. - self.combinator = combinator - #: compound selector - self.right = right - - @property - def specificity(self): - a1, b1, c1 = self.left.specificity - a2, b2, c2 = self.right.specificity - return a1 + a2, b1 + b2, c1 + c2 - - def __repr__(self): - return f'{self.left!r}{self.combinator}{self.right!r}' - - -class CompoundSelector: - def __init__(self, simple_selectors): - self.simple_selectors = simple_selectors - - @property - def specificity(self): - if self.simple_selectors: - # zip(*foo) turns [(a1, b1, c1), (a2, b2, c2), ...] 
- # into [(a1, a2, ...), (b1, b2, ...), (c1, c2, ...)] - return tuple(map(sum, zip( - *(sel.specificity for sel in self.simple_selectors)))) - else: - return 0, 0, 0 - - def __repr__(self): - return ''.join(map(repr, self.simple_selectors)) - - -class LocalNameSelector: - specificity = 0, 0, 1 - - def __init__(self, local_name): - self.local_name, self.lower_local_name = local_name - - def __repr__(self): - return self.local_name - - -class NamespaceSelector: - specificity = 0, 0, 0 - - def __init__(self, namespace): - #: The namespace URL as a string, - #: or the empty string for elements not in any namespace. - self.namespace = namespace - - def __repr__(self): - return '|' if self.namespace == '' else f'{{{self.namespace}}}|' - - -class IDSelector: - specificity = 1, 0, 0 - - def __init__(self, ident): - self.ident = ident - - def __repr__(self): - return f'#{self.ident}' - - -class ClassSelector: - specificity = 0, 1, 0 - - def __init__(self, class_name): - self.class_name = class_name - - def __repr__(self): - return f'.{self.class_name}' - - -class AttributeSelector: - specificity = 0, 1, 0 - - def __init__(self, namespace, name, operator, value, case_sensitive): - self.namespace = namespace - self.name, self.lower_name = name - #: A string like ``=`` or ``~=``, or None for ``[attr]`` selectors - self.operator = operator - #: A string, or None for ``[attr]`` selectors - self.value = value - #: ``True`` if case-sensitive, ``False`` if case-insensitive, ``None`` - #: if depends on the document language - self.case_sensitive = case_sensitive - - def __repr__(self): - namespace = '*|' if self.namespace is None else f'{{{self.namespace}}}' - case_sensitive = ( - '' if self.case_sensitive is None else - f' {"s" if self.case_sensitive else "i"}') - return ( - f'[{namespace}{self.name}{self.operator}{self.value!r}' - f'{case_sensitive}]') - - -class PseudoClassSelector: - specificity = 0, 1, 0 - - def __init__(self, name): - self.name = name - - def __repr__(self): - 
return ':' + self.name - - -class FunctionalPseudoClassSelector: - specificity = 0, 1, 0 - - def __init__(self, name, arguments): - self.name = name - self.arguments = arguments - - def __repr__(self): - return f':{self.name}{tuple(self.arguments)!r}' - - -class NegationSelector: - def __init__(self, selector_list): - self.selector_list = selector_list - - @property - def specificity(self): - if self.selector_list: - return max(selector.specificity for selector in self.selector_list) - else: - return (0, 0, 0) - - def __repr__(self): - return f':not({", ".join(repr(sel) for sel in self.selector_list)})' - - -class RelationalSelector: - def __init__(self, selector_list): - self.selector_list = selector_list - - @property - def specificity(self): - if self.selector_list: - return max(selector.specificity for selector in self.selector_list) - else: - return (0, 0, 0) - - def __repr__(self): - return f':has({", ".join(repr(sel) for sel in self.selector_list)})' - - -class MatchesAnySelector: - def __init__(self, selector_list): - self.selector_list = selector_list - - @property - def specificity(self): - if self.selector_list: - return max(selector.specificity for selector in self.selector_list) - else: - return (0, 0, 0) - - def __repr__(self): - return f':is({", ".join(repr(sel) for sel in self.selector_list)})' - - -class SpecificityAdjustmentSelector: - def __init__(self, selector_list): - self.selector_list = selector_list - - @property - def specificity(self): - return (0, 0, 0) - - def __repr__(self): - return f':where({", ".join(repr(sel) for sel in self.selector_list)})' diff --git a/pptx-env/lib/python3.12/site-packages/cssselect2/tree.py b/pptx-env/lib/python3.12/site-packages/cssselect2/tree.py deleted file mode 100644 index 5108049f..00000000 --- a/pptx-env/lib/python3.12/site-packages/cssselect2/tree.py +++ /dev/null @@ -1,385 +0,0 @@ -from functools import cached_property -from warnings import warn - -from webencodings import ascii_lower - -from 
.compiler import compile_selector_list, split_whitespace - - -class ElementWrapper: - """Wrapper of :class:`xml.etree.ElementTree.Element` for Selector matching. - - This class should not be instantiated directly. :meth:`from_xml_root` or - :meth:`from_html_root` should be used for the root element of a document, - and other elements should be accessed (and wrappers generated) using - methods such as :meth:`iter_children` and :meth:`iter_subtree`. - - :class:`ElementWrapper` objects compare equal if their underlying - :class:`xml.etree.ElementTree.Element` do. - - """ - @classmethod - def from_xml_root(cls, root, content_language=None): - """Wrap for selector matching the root of an XML or XHTML document. - - :param root: - An ElementTree :class:`xml.etree.ElementTree.Element` - for the root element of a document. - If the given element is not the root, - selector matching will behave as if it were. - In other words, selectors will not be `scoped`_ - to the subtree rooted at that element. - :returns: - A new :class:`ElementWrapper` - - .. _scoped: https://drafts.csswg.org/selectors-4/#scoping - - """ - return cls._from_root(root, content_language, in_html_document=False) - - @classmethod - def from_html_root(cls, root, content_language=None): - """Same as :meth:`from_xml_root` with case-insensitive attribute names. - - Useful for documents parsed with an HTML parser like html5lib, which - should be the case of documents with the ``text/html`` MIME type.
- - """ - return cls._from_root(root, content_language, in_html_document=True) - - @classmethod - def _from_root(cls, root, content_language, in_html_document=True): - if hasattr(root, 'getroot'): - root = root.getroot() - return cls( - root, parent=None, index=0, previous=None, - in_html_document=in_html_document, content_language=content_language) - - def __init__(self, etree_element, parent, index, previous, - in_html_document, content_language=None): - #: The underlying ElementTree :class:`xml.etree.ElementTree.Element` - self.etree_element = etree_element - #: The parent :class:`ElementWrapper`, - #: or :obj:`None` for the root element. - self.parent = parent - #: The previous sibling :class:`ElementWrapper`, - #: or :obj:`None` for the root element. - self.previous = previous - if parent is not None: - #: The :attr:`parent`’s children - #: as a list of - #: ElementTree :class:`xml.etree.ElementTree.Element`\ s. - #: For the root (which has no parent) - self.etree_siblings = parent.etree_children - else: - self.etree_siblings = [etree_element] - #: The position within the :attr:`parent`’s children, counting from 0. - #: ``e.etree_siblings[e.index]`` is always ``e.etree_element``. - self.index = index - self.in_html_document = in_html_document - self.transport_content_language = content_language - - # Cache - self._ancestors = None - self._previous_siblings = None - - def __eq__(self, other): - return ( - type(self) is type(other) and - self.etree_element == other.etree_element) - - def __ne__(self, other): - return not (self == other) - - def __hash__(self): - return hash((type(self), self.etree_element)) - - def __iter__(self): - yield from self.iter_children() - - @property - def ancestors(self): - """Tuple of existing ancestors. - - Tuple of existing :class:`ElementWrapper` objects for this element’s - ancestors, in reversed tree order, from :attr:`parent` to the root. 
- - """ - if self._ancestors is None: - self._ancestors = ( - () if self.parent is None else (*self.parent.ancestors, self.parent)) - return self._ancestors - - @property - def previous_siblings(self): - """Tuple of previous siblings. - - Tuple of existing :class:`ElementWrapper` objects for this element’s - previous siblings, in reversed tree order. - - """ - if self._previous_siblings is None: - self._previous_siblings = ( - () if self.previous is None else - (*self.previous.previous_siblings, self.previous)) - return self._previous_siblings - - def iter_ancestors(self): - """Iterate over ancestors. - - Return an iterator of existing :class:`ElementWrapper` objects for this - element’s ancestors, in reversed tree order (from :attr:`parent` to the - root). - - The element itself is not included, this is an empty sequence for the - root element. - - This method is deprecated and will be removed in version 0.7.0. Use - :attr:`ancestors` instead. - - """ - warn( - 'This method is deprecated and will be removed in version 0.7.0. ' - 'Use the "ancestors" attribute instead.', - DeprecationWarning) - yield from self.ancestors - - def iter_previous_siblings(self): - """Iterate over previous siblings. - - Return an iterator of existing :class:`ElementWrapper` objects for this - element’s previous siblings, in reversed tree order. - - The element itself is not included, this is an empty sequence for a - first child or the root element. - - This method is deprecated and will be removed in version 0.7.0. Use - :attr:`previous_siblings` instead. - - """ - warn( - 'This method is deprecated and will be removed in version 0.7.0. ' - 'Use the "previous_siblings" attribute instead.', - DeprecationWarning) - yield from self.previous_siblings - - def iter_siblings(self): - """Iterate over siblings. - - Return an iterator of newly-created :class:`ElementWrapper` objects for - this element’s siblings, in tree order. 
- - """ - if self.parent is None: - yield self - else: - yield from self.parent.iter_children() - - def iter_next_siblings(self): - """Iterate over next siblings. - - Return an iterator of newly-created :class:`ElementWrapper` objects for - this element’s next siblings, in tree order. - - """ - found = False - for sibling in self.iter_siblings(): - if found: - yield sibling - if sibling == self: - found = True - - def iter_children(self): - """Iterate over children. - - Return an iterator of newly-created :class:`ElementWrapper` objects for - this element’s child elements, in tree order. - - """ - child = None - for i, etree_child in enumerate(self.etree_children): - child = type(self)( - etree_child, parent=self, index=i, previous=child, - in_html_document=self.in_html_document) - yield child - - def iter_subtree(self): - """Iterate over subtree. - - Return an iterator of newly-created :class:`ElementWrapper` objects for - the entire subtree rooted at this element, in tree order. - - Unlike in other methods, the element itself *is* included. - - This loops over an entire document: - - .. code-block:: python - - for element in ElementWrapper.from_root(root_etree).iter_subtree(): - ... - - """ - stack = [iter([self])] - while stack: - element = next(stack[-1], None) - if element is None: - stack.pop() - else: - yield element - stack.append(element.iter_children()) - - @staticmethod - def _compile(selectors): - return [ - compiled_selector.test - for selector in selectors - for compiled_selector in ( - [selector] if hasattr(selector, 'test') - else compile_selector_list(selector)) - if compiled_selector.pseudo_element is None and - not compiled_selector.never_matches] - - def matches(self, *selectors): - """Return whether this element matches any of the given selectors. - - :param selectors: - Each given selector is either a :class:`compiler.CompiledSelector`, - or an argument to :func:`compile_selector_list`.
- - """ - return any(test(self) for test in self._compile(selectors)) - - def query_all(self, *selectors): - """Return elements, in tree order, that match any of given selectors. - - Selectors are `scoped`_ to the subtree rooted at this element. - - .. _scoped: https://drafts.csswg.org/selectors-4/#scoping - - :param selectors: - Each given selector is either a :class:`compiler.CompiledSelector`, - or an argument to :func:`compile_selector_list`. - :returns: - An iterator of newly-created :class:`ElementWrapper` objects. - - """ - tests = self._compile(selectors) - if len(tests) == 1: - return filter(tests[0], self.iter_subtree()) - elif selectors: - return ( - element for element in self.iter_subtree() - if any(test(element) for test in tests)) - else: - return iter(()) - - def query(self, *selectors): - """Return first element that matches any of given selectors. - - :param selectors: - Each given selector is either a :class:`compiler.CompiledSelector`, - or an argument to :func:`compile_selector_list`. - :returns: - A newly-created :class:`ElementWrapper` object, - or :obj:`None` if there is no match. - - """ - return next(self.query_all(*selectors), None) - - @cached_property - def etree_children(self): - """Children as a list of :class:`xml.etree.ElementTree.Element`. - - Other ElementTree nodes such as - :func:`comments ` and - :func:`processing instructions - ` - are not included. 
- - """ - return [ - element for element in self.etree_element - if isinstance(element.tag, str)] - - @cached_property - def local_name(self): - """The local name of this element, as a string.""" - namespace_url, local_name = _split_etree_tag(self.etree_element.tag) - self.__dict__[str('namespace_url')] = namespace_url - return local_name - - @cached_property - def namespace_url(self): - """The namespace URL of this element, as a string.""" - namespace_url, local_name = _split_etree_tag(self.etree_element.tag) - self.__dict__[str('local_name')] = local_name - return namespace_url - - @cached_property - def id(self): - """The ID of this element, as a string.""" - return self.etree_element.get('id') - - @cached_property - def classes(self): - """The classes of this element, as a :class:`set` of strings.""" - return set(split_whitespace(self.etree_element.get('class', ''))) - - @cached_property - def lang(self): - """The language of this element, as a string.""" - # http://whatwg.org/C#language - xml_lang = self.etree_element.get('{http://www.w3.org/XML/1998/namespace}lang') - if xml_lang is not None: - return ascii_lower(xml_lang) - is_html = ( - self.in_html_document or - self.namespace_url == 'http://www.w3.org/1999/xhtml') - if is_html: - lang = self.etree_element.get('lang') - if lang is not None: - return ascii_lower(lang) - if self.parent is not None: - return self.parent.lang - # Root element - if is_html: - content_language = None - iterator = self.etree_element.iter('{http://www.w3.org/1999/xhtml}meta') - for meta in iterator: - http_equiv = meta.get('http-equiv', '') - if ascii_lower(http_equiv) == 'content-language': - content_language = _parse_content_language(meta.get('content')) - if content_language is not None: - return ascii_lower(content_language) - # Empty string means unknown - return _parse_content_language(self.transport_content_language) or '' - - @cached_property - def in_disabled_fieldset(self): - if self.parent is None: - return False - 
fieldset = '{http://www.w3.org/1999/xhtml}fieldset' - legend = '{http://www.w3.org/1999/xhtml}legend' - disabled_fieldset = ( - self.parent.etree_element.tag == fieldset and - self.parent.etree_element.get('disabled') is not None and ( - self.etree_element.tag != legend or any( - sibling.etree_element.tag == legend - for sibling in self.iter_previous_siblings()))) - return disabled_fieldset or self.parent.in_disabled_fieldset - - -def _split_etree_tag(tag): - position = tag.rfind('}') - if position == -1: - return '', tag - else: - assert tag[0] == '{' - return tag[1:position], tag[position+1:] - - -def _parse_content_language(value): - if value is not None and ',' not in value: - parts = split_whitespace(value) - if len(parts) == 1: - return parts[0] diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/__init__.py deleted file mode 100644 index 7d2f5af5..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -import logging -from fontTools.misc.loggingTools import configLogger - -log = logging.getLogger(__name__) - -version = __version__ = "4.60.1" - -__all__ = ["version", "log", "configLogger"] diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__main__.py b/pptx-env/lib/python3.12/site-packages/fontTools/__main__.py deleted file mode 100644 index 7c74ad3c..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/__main__.py +++ /dev/null @@ -1,35 +0,0 @@ -import sys - - -def main(args=None): - if args is None: - args = sys.argv[1:] - - # TODO Handle library-wide options. Eg.: - # --unicodedata - # --verbose / other logging stuff - - # TODO Allow a way to run arbitrary modules? Useful for setting - # library-wide options and calling another library. Eg.: - # - # $ fonttools --unicodedata=... fontmake ... - # - # This allows for a git-like command where thirdparty commands - # can be added. 
Should we just try importing the fonttools - # module first and try without if it fails? - - if len(sys.argv) < 2: - sys.argv.append("help") - if sys.argv[1] == "-h" or sys.argv[1] == "--help": - sys.argv[1] = "help" - mod = "fontTools." + sys.argv[1] - sys.argv[1] = sys.argv[0] + " " + sys.argv[1] - del sys.argv[0] - - import runpy - - runpy.run_module(mod, run_name="__main__") - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index ed92dab4..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__main__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__main__.cpython-312.pyc deleted file mode 100644 index 0444a97b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/__main__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/afmLib.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/afmLib.cpython-312.pyc deleted file mode 100644 index a3e0d085..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/afmLib.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/agl.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/agl.cpython-312.pyc deleted file mode 100644 index 4ed61b36..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/agl.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/annotations.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/annotations.cpython-312.pyc deleted file mode 100644 index 9dc74588..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/annotations.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/fontBuilder.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/fontBuilder.cpython-312.pyc deleted file mode 100644 index 466f2c36..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/fontBuilder.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/help.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/help.cpython-312.pyc deleted file mode 100644 index f32f45d3..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/help.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/tfmLib.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/tfmLib.cpython-312.pyc deleted file mode 100644 index d26c8002..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/tfmLib.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/ttx.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/ttx.cpython-312.pyc deleted file mode 100644 index b83dbc87..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/ttx.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/unicode.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/unicode.cpython-312.pyc deleted file mode 100644 index a5e3f075..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/__pycache__/unicode.cpython-312.pyc and 
/dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/afmLib.py b/pptx-env/lib/python3.12/site-packages/fontTools/afmLib.py deleted file mode 100644 index 0aabf7f6..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/afmLib.py +++ /dev/null @@ -1,439 +0,0 @@ -"""Module for reading and writing AFM (Adobe Font Metrics) files. - -Note that this has been designed to read in AFM files generated by Fontographer -and has not been tested on many other files. In particular, it does not -implement the whole Adobe AFM specification [#f1]_ but, it should read most -"common" AFM files. - -Here is an example of using `afmLib` to read, modify and write an AFM file: - - >>> from fontTools.afmLib import AFM - >>> f = AFM("Tests/afmLib/data/TestAFM.afm") - >>> - >>> # Accessing a pair gets you the kern value - >>> f[("V","A")] - -60 - >>> - >>> # Accessing a glyph name gets you metrics - >>> f["A"] - (65, 668, (8, -25, 660, 666)) - >>> # (charnum, width, bounding box) - >>> - >>> # Accessing an attribute gets you metadata - >>> f.FontName - 'TestFont-Regular' - >>> f.FamilyName - 'TestFont' - >>> f.Weight - 'Regular' - >>> f.XHeight - 500 - >>> f.Ascender - 750 - >>> - >>> # Attributes and items can also be set - >>> f[("A","V")] = -150 # Tighten kerning - >>> f.FontName = "TestFont Squished" - >>> - >>> # And the font written out again (remove the # in front) - >>> #f.write("testfont-squished.afm") - -.. rubric:: Footnotes - -.. [#f1] `Adobe Technote 5004 `_, - Adobe Font Metrics File Format Specification. 
- -""" - -import re - -# every single line starts with a "word" -identifierRE = re.compile(r"^([A-Za-z]+).*") - -# regular expression to parse char lines -charRE = re.compile( - r"(-?\d+)" # charnum - r"\s*;\s*WX\s+" # ; WX - r"(-?\d+)" # width - r"\s*;\s*N\s+" # ; N - r"([.A-Za-z0-9_]+)" # charname - r"\s*;\s*B\s+" # ; B - r"(-?\d+)" # left - r"\s+" - r"(-?\d+)" # bottom - r"\s+" - r"(-?\d+)" # right - r"\s+" - r"(-?\d+)" # top - r"\s*;\s*" # ; -) - -# regular expression to parse kerning lines -kernRE = re.compile( - r"([.A-Za-z0-9_]+)" # leftchar - r"\s+" - r"([.A-Za-z0-9_]+)" # rightchar - r"\s+" - r"(-?\d+)" # value - r"\s*" -) - -# regular expressions to parse composite info lines of the form: -# Aacute 2 ; PCC A 0 0 ; PCC acute 182 211 ; -compositeRE = re.compile( - r"([.A-Za-z0-9_]+)" # char name - r"\s+" - r"(\d+)" # number of parts - r"\s*;\s*" -) -componentRE = re.compile( - r"PCC\s+" # PPC - r"([.A-Za-z0-9_]+)" # base char name - r"\s+" - r"(-?\d+)" # x offset - r"\s+" - r"(-?\d+)" # y offset - r"\s*;\s*" -) - -preferredAttributeOrder = [ - "FontName", - "FullName", - "FamilyName", - "Weight", - "ItalicAngle", - "IsFixedPitch", - "FontBBox", - "UnderlinePosition", - "UnderlineThickness", - "Version", - "Notice", - "EncodingScheme", - "CapHeight", - "XHeight", - "Ascender", - "Descender", -] - - -class error(Exception): - pass - - -class AFM(object): - _attrs = None - - _keywords = [ - "StartFontMetrics", - "EndFontMetrics", - "StartCharMetrics", - "EndCharMetrics", - "StartKernData", - "StartKernPairs", - "EndKernPairs", - "EndKernData", - "StartComposites", - "EndComposites", - ] - - def __init__(self, path=None): - """AFM file reader. - - Instantiating an object with a path name will cause the file to be opened, - read, and parsed. 
Alternatively the path can be left unspecified, and a - file can be parsed later with the :meth:`read` method.""" - self._attrs = {} - self._chars = {} - self._kerning = {} - self._index = {} - self._comments = [] - self._composites = {} - if path is not None: - self.read(path) - - def read(self, path): - """Opens, reads and parses a file.""" - lines = readlines(path) - for line in lines: - if not line.strip(): - continue - m = identifierRE.match(line) - if m is None: - raise error("syntax error in AFM file: " + repr(line)) - - pos = m.regs[1][1] - word = line[:pos] - rest = line[pos:].strip() - if word in self._keywords: - continue - if word == "C": - self.parsechar(rest) - elif word == "KPX": - self.parsekernpair(rest) - elif word == "CC": - self.parsecomposite(rest) - else: - self.parseattr(word, rest) - - def parsechar(self, rest): - m = charRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - things = [] - for fr, to in m.regs[1:]: - things.append(rest[fr:to]) - charname = things[2] - del things[2] - charnum, width, l, b, r, t = (int(thing) for thing in things) - self._chars[charname] = charnum, width, (l, b, r, t) - - def parsekernpair(self, rest): - m = kernRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - things = [] - for fr, to in m.regs[1:]: - things.append(rest[fr:to]) - leftchar, rightchar, value = things - value = int(value) - self._kerning[(leftchar, rightchar)] = value - - def parseattr(self, word, rest): - if word == "FontBBox": - l, b, r, t = [int(thing) for thing in rest.split()] - self._attrs[word] = l, b, r, t - elif word == "Comment": - self._comments.append(rest) - else: - try: - value = int(rest) - except (ValueError, OverflowError): - self._attrs[word] = rest - else: - self._attrs[word] = value - - def parsecomposite(self, rest): - m = compositeRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - charname = m.group(1) - 
ncomponents = int(m.group(2)) - rest = rest[m.regs[0][1] :] - components = [] - while True: - m = componentRE.match(rest) - if m is None: - raise error("syntax error in AFM file: " + repr(rest)) - basechar = m.group(1) - xoffset = int(m.group(2)) - yoffset = int(m.group(3)) - components.append((basechar, xoffset, yoffset)) - rest = rest[m.regs[0][1] :] - if not rest: - break - assert len(components) == ncomponents - self._composites[charname] = components - - def write(self, path, sep="\r"): - """Writes out an AFM font to the given path.""" - import time - - lines = [ - "StartFontMetrics 2.0", - "Comment Generated by afmLib; at %s" - % (time.strftime("%m/%d/%Y %H:%M:%S", time.localtime(time.time()))), - ] - - # write comments, assuming (possibly wrongly!) they should - # all appear at the top - for comment in self._comments: - lines.append("Comment " + comment) - - # write attributes, first the ones we know about, in - # a preferred order - attrs = self._attrs - for attr in preferredAttributeOrder: - if attr in attrs: - value = attrs[attr] - if attr == "FontBBox": - value = "%s %s %s %s" % value - lines.append(attr + " " + str(value)) - # then write the attributes we don't know about, - # in alphabetical order - items = sorted(attrs.items()) - for attr, value in items: - if attr in preferredAttributeOrder: - continue - lines.append(attr + " " + str(value)) - - # write char metrics - lines.append("StartCharMetrics " + repr(len(self._chars))) - items = [ - (charnum, (charname, width, box)) - for charname, (charnum, width, box) in self._chars.items() - ] - - def myKey(a): - """Custom key function to make sure unencoded chars (-1) - end up at the end of the list after sorting.""" - if a[0] == -1: - a = (0xFFFF,) + a[1:] # 0xffff is an arbitrary large number - return a - - items.sort(key=myKey) - - for charnum, (charname, width, (l, b, r, t)) in items: - lines.append( - "C %d ; WX %d ; N %s ; B %d %d %d %d ;" - % (charnum, width, charname, l, b, r, t) - ) - 
lines.append("EndCharMetrics") - - # write kerning info - lines.append("StartKernData") - lines.append("StartKernPairs " + repr(len(self._kerning))) - items = sorted(self._kerning.items()) - for (leftchar, rightchar), value in items: - lines.append("KPX %s %s %d" % (leftchar, rightchar, value)) - lines.append("EndKernPairs") - lines.append("EndKernData") - - if self._composites: - composites = sorted(self._composites.items()) - lines.append("StartComposites %s" % len(self._composites)) - for charname, components in composites: - line = "CC %s %s ;" % (charname, len(components)) - for basechar, xoffset, yoffset in components: - line = line + " PCC %s %s %s ;" % (basechar, xoffset, yoffset) - lines.append(line) - lines.append("EndComposites") - - lines.append("EndFontMetrics") - - writelines(path, lines, sep) - - def has_kernpair(self, pair): - """Returns `True` if the given glyph pair (specified as a tuple) exists - in the kerning dictionary.""" - return pair in self._kerning - - def kernpairs(self): - """Returns a list of all kern pairs in the kerning dictionary.""" - return list(self._kerning.keys()) - - def has_char(self, char): - """Returns `True` if the given glyph exists in the font.""" - return char in self._chars - - def chars(self): - """Returns a list of all glyph names in the font.""" - return list(self._chars.keys()) - - def comments(self): - """Returns all comments from the file.""" - return self._comments - - def addComment(self, comment): - """Adds a new comment to the file.""" - self._comments.append(comment) - - def addComposite(self, glyphName, components): - """Specifies that the glyph `glyphName` is made up of the given components. - The components list should be of the following form:: - - [ - (glyphname, xOffset, yOffset), - ... 
- ] - - """ - self._composites[glyphName] = components - - def __getattr__(self, attr): - if attr in self._attrs: - return self._attrs[attr] - else: - raise AttributeError(attr) - - def __setattr__(self, attr, value): - # all attrs *not* starting with "_" are consider to be AFM keywords - if attr[:1] == "_": - self.__dict__[attr] = value - else: - self._attrs[attr] = value - - def __delattr__(self, attr): - # all attrs *not* starting with "_" are consider to be AFM keywords - if attr[:1] == "_": - try: - del self.__dict__[attr] - except KeyError: - raise AttributeError(attr) - else: - try: - del self._attrs[attr] - except KeyError: - raise AttributeError(attr) - - def __getitem__(self, key): - if isinstance(key, tuple): - # key is a tuple, return the kernpair - return self._kerning[key] - else: - # return the metrics instead - return self._chars[key] - - def __setitem__(self, key, value): - if isinstance(key, tuple): - # key is a tuple, set kernpair - self._kerning[key] = value - else: - # set char metrics - self._chars[key] = value - - def __delitem__(self, key): - if isinstance(key, tuple): - # key is a tuple, del kernpair - del self._kerning[key] - else: - # del char metrics - del self._chars[key] - - def __repr__(self): - if hasattr(self, "FullName"): - return "" % self.FullName - else: - return "" % id(self) - - -def readlines(path): - with open(path, "r", encoding="ascii") as f: - data = f.read() - return data.splitlines() - - -def writelines(path, lines, sep="\r"): - with open(path, "w", encoding="ascii", newline=sep) as f: - f.write("\n".join(lines) + "\n") - - -if __name__ == "__main__": - import EasyDialogs - - path = EasyDialogs.AskFileForOpen() - if path: - afm = AFM(path) - char = "A" - if afm.has_char(char): - print(afm[char]) # print charnum, width and boundingbox - pair = ("A", "V") - if afm.has_kernpair(pair): - print(afm[pair]) # print kerning value for pair - print(afm.Version) # various other afm entries have become attributes - 
print(afm.Weight) - # afm.comments() returns a list of all Comment lines found in the AFM - print(afm.comments()) - # print afm.chars() - # print afm.kernpairs() - print(afm) - afm.write(path + ".muck") diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/agl.py b/pptx-env/lib/python3.12/site-packages/fontTools/agl.py deleted file mode 100644 index d6994628..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/agl.py +++ /dev/null @@ -1,5233 +0,0 @@ -# -*- coding: utf-8 -*- -# The tables below are taken from -# https://github.com/adobe-type-tools/agl-aglfn/raw/4036a9ca80a62f64f9de4f7321a9a045ad0ecfd6/glyphlist.txt -# and -# https://github.com/adobe-type-tools/agl-aglfn/raw/4036a9ca80a62f64f9de4f7321a9a045ad0ecfd6/aglfn.txt -""" -Interface to the Adobe Glyph List - -This module exists to convert glyph names from the Adobe Glyph List -to their Unicode equivalents. Example usage: - - >>> from fontTools.agl import toUnicode - >>> toUnicode("nahiragana") - 'γͺ' - -It also contains two dictionaries, ``UV2AGL`` and ``AGL2UV``, which map from -Unicode codepoints to AGL names and vice versa: - - >>> import fontTools - >>> fontTools.agl.UV2AGL[ord("?")] - 'question' - >>> fontTools.agl.AGL2UV["wcircumflex"] - 373 - -This is used by fontTools when it has to construct glyph names for a font which -doesn't include any (e.g. format 3.0 post tables). -""" - -from fontTools.misc.textTools import tostr -import re - - -_aglText = """\ -# ----------------------------------------------------------- -# Copyright 2002-2019 Adobe (http://www.adobe.com/). -# -# Redistribution and use in source and binary forms, with or -# without modification, are permitted provided that the -# following conditions are met: -# -# Redistributions of source code must retain the above -# copyright notice, this list of conditions and the following -# disclaimer. 
-# -# Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following -# disclaimer in the documentation and/or other materials -# provided with the distribution. -# -# Neither the name of Adobe nor the names of its contributors -# may be used to endorse or promote products derived from this -# software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND -# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR -# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT -# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) -# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR -# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-# ----------------------------------------------------------- -# Name: Adobe Glyph List -# Table version: 2.0 -# Date: September 20, 2002 -# URL: https://github.com/adobe-type-tools/agl-aglfn -# -# Format: two semicolon-delimited fields: -# (1) glyph name--upper/lowercase letters and digits -# (2) Unicode scalar value--four uppercase hexadecimal digits -# -A;0041 -AE;00C6 -AEacute;01FC -AEmacron;01E2 -AEsmall;F7E6 -Aacute;00C1 -Aacutesmall;F7E1 -Abreve;0102 -Abreveacute;1EAE -Abrevecyrillic;04D0 -Abrevedotbelow;1EB6 -Abrevegrave;1EB0 -Abrevehookabove;1EB2 -Abrevetilde;1EB4 -Acaron;01CD -Acircle;24B6 -Acircumflex;00C2 -Acircumflexacute;1EA4 -Acircumflexdotbelow;1EAC -Acircumflexgrave;1EA6 -Acircumflexhookabove;1EA8 -Acircumflexsmall;F7E2 -Acircumflextilde;1EAA -Acute;F6C9 -Acutesmall;F7B4 -Acyrillic;0410 -Adblgrave;0200 -Adieresis;00C4 -Adieresiscyrillic;04D2 -Adieresismacron;01DE -Adieresissmall;F7E4 -Adotbelow;1EA0 -Adotmacron;01E0 -Agrave;00C0 -Agravesmall;F7E0 -Ahookabove;1EA2 -Aiecyrillic;04D4 -Ainvertedbreve;0202 -Alpha;0391 -Alphatonos;0386 -Amacron;0100 -Amonospace;FF21 -Aogonek;0104 -Aring;00C5 -Aringacute;01FA -Aringbelow;1E00 -Aringsmall;F7E5 -Asmall;F761 -Atilde;00C3 -Atildesmall;F7E3 -Aybarmenian;0531 -B;0042 -Bcircle;24B7 -Bdotaccent;1E02 -Bdotbelow;1E04 -Becyrillic;0411 -Benarmenian;0532 -Beta;0392 -Bhook;0181 -Blinebelow;1E06 -Bmonospace;FF22 -Brevesmall;F6F4 -Bsmall;F762 -Btopbar;0182 -C;0043 -Caarmenian;053E -Cacute;0106 -Caron;F6CA -Caronsmall;F6F5 -Ccaron;010C -Ccedilla;00C7 -Ccedillaacute;1E08 -Ccedillasmall;F7E7 -Ccircle;24B8 -Ccircumflex;0108 -Cdot;010A -Cdotaccent;010A -Cedillasmall;F7B8 -Chaarmenian;0549 -Cheabkhasiancyrillic;04BC -Checyrillic;0427 -Chedescenderabkhasiancyrillic;04BE -Chedescendercyrillic;04B6 -Chedieresiscyrillic;04F4 -Cheharmenian;0543 -Chekhakassiancyrillic;04CB -Cheverticalstrokecyrillic;04B8 -Chi;03A7 -Chook;0187 -Circumflexsmall;F6F6 -Cmonospace;FF23 -Coarmenian;0551 -Csmall;F763 -D;0044 -DZ;01F1 -DZcaron;01C4 
-Daarmenian;0534 -Dafrican;0189 -Dcaron;010E -Dcedilla;1E10 -Dcircle;24B9 -Dcircumflexbelow;1E12 -Dcroat;0110 -Ddotaccent;1E0A -Ddotbelow;1E0C -Decyrillic;0414 -Deicoptic;03EE -Delta;2206 -Deltagreek;0394 -Dhook;018A -Dieresis;F6CB -DieresisAcute;F6CC -DieresisGrave;F6CD -Dieresissmall;F7A8 -Digammagreek;03DC -Djecyrillic;0402 -Dlinebelow;1E0E -Dmonospace;FF24 -Dotaccentsmall;F6F7 -Dslash;0110 -Dsmall;F764 -Dtopbar;018B -Dz;01F2 -Dzcaron;01C5 -Dzeabkhasiancyrillic;04E0 -Dzecyrillic;0405 -Dzhecyrillic;040F -E;0045 -Eacute;00C9 -Eacutesmall;F7E9 -Ebreve;0114 -Ecaron;011A -Ecedillabreve;1E1C -Echarmenian;0535 -Ecircle;24BA -Ecircumflex;00CA -Ecircumflexacute;1EBE -Ecircumflexbelow;1E18 -Ecircumflexdotbelow;1EC6 -Ecircumflexgrave;1EC0 -Ecircumflexhookabove;1EC2 -Ecircumflexsmall;F7EA -Ecircumflextilde;1EC4 -Ecyrillic;0404 -Edblgrave;0204 -Edieresis;00CB -Edieresissmall;F7EB -Edot;0116 -Edotaccent;0116 -Edotbelow;1EB8 -Efcyrillic;0424 -Egrave;00C8 -Egravesmall;F7E8 -Eharmenian;0537 -Ehookabove;1EBA -Eightroman;2167 -Einvertedbreve;0206 -Eiotifiedcyrillic;0464 -Elcyrillic;041B -Elevenroman;216A -Emacron;0112 -Emacronacute;1E16 -Emacrongrave;1E14 -Emcyrillic;041C -Emonospace;FF25 -Encyrillic;041D -Endescendercyrillic;04A2 -Eng;014A -Enghecyrillic;04A4 -Enhookcyrillic;04C7 -Eogonek;0118 -Eopen;0190 -Epsilon;0395 -Epsilontonos;0388 -Ercyrillic;0420 -Ereversed;018E -Ereversedcyrillic;042D -Escyrillic;0421 -Esdescendercyrillic;04AA -Esh;01A9 -Esmall;F765 -Eta;0397 -Etarmenian;0538 -Etatonos;0389 -Eth;00D0 -Ethsmall;F7F0 -Etilde;1EBC -Etildebelow;1E1A -Euro;20AC -Ezh;01B7 -Ezhcaron;01EE -Ezhreversed;01B8 -F;0046 -Fcircle;24BB -Fdotaccent;1E1E -Feharmenian;0556 -Feicoptic;03E4 -Fhook;0191 -Fitacyrillic;0472 -Fiveroman;2164 -Fmonospace;FF26 -Fourroman;2163 -Fsmall;F766 -G;0047 -GBsquare;3387 -Gacute;01F4 -Gamma;0393 -Gammaafrican;0194 -Gangiacoptic;03EA -Gbreve;011E -Gcaron;01E6 -Gcedilla;0122 -Gcircle;24BC -Gcircumflex;011C -Gcommaaccent;0122 -Gdot;0120 -Gdotaccent;0120 
-Gecyrillic;0413 -Ghadarmenian;0542 -Ghemiddlehookcyrillic;0494 -Ghestrokecyrillic;0492 -Gheupturncyrillic;0490 -Ghook;0193 -Gimarmenian;0533 -Gjecyrillic;0403 -Gmacron;1E20 -Gmonospace;FF27 -Grave;F6CE -Gravesmall;F760 -Gsmall;F767 -Gsmallhook;029B -Gstroke;01E4 -H;0048 -H18533;25CF -H18543;25AA -H18551;25AB -H22073;25A1 -HPsquare;33CB -Haabkhasiancyrillic;04A8 -Hadescendercyrillic;04B2 -Hardsigncyrillic;042A -Hbar;0126 -Hbrevebelow;1E2A -Hcedilla;1E28 -Hcircle;24BD -Hcircumflex;0124 -Hdieresis;1E26 -Hdotaccent;1E22 -Hdotbelow;1E24 -Hmonospace;FF28 -Hoarmenian;0540 -Horicoptic;03E8 -Hsmall;F768 -Hungarumlaut;F6CF -Hungarumlautsmall;F6F8 -Hzsquare;3390 -I;0049 -IAcyrillic;042F -IJ;0132 -IUcyrillic;042E -Iacute;00CD -Iacutesmall;F7ED -Ibreve;012C -Icaron;01CF -Icircle;24BE -Icircumflex;00CE -Icircumflexsmall;F7EE -Icyrillic;0406 -Idblgrave;0208 -Idieresis;00CF -Idieresisacute;1E2E -Idieresiscyrillic;04E4 -Idieresissmall;F7EF -Idot;0130 -Idotaccent;0130 -Idotbelow;1ECA -Iebrevecyrillic;04D6 -Iecyrillic;0415 -Ifraktur;2111 -Igrave;00CC -Igravesmall;F7EC -Ihookabove;1EC8 -Iicyrillic;0418 -Iinvertedbreve;020A -Iishortcyrillic;0419 -Imacron;012A -Imacroncyrillic;04E2 -Imonospace;FF29 -Iniarmenian;053B -Iocyrillic;0401 -Iogonek;012E -Iota;0399 -Iotaafrican;0196 -Iotadieresis;03AA -Iotatonos;038A -Ismall;F769 -Istroke;0197 -Itilde;0128 -Itildebelow;1E2C -Izhitsacyrillic;0474 -Izhitsadblgravecyrillic;0476 -J;004A -Jaarmenian;0541 -Jcircle;24BF -Jcircumflex;0134 -Jecyrillic;0408 -Jheharmenian;054B -Jmonospace;FF2A -Jsmall;F76A -K;004B -KBsquare;3385 -KKsquare;33CD -Kabashkircyrillic;04A0 -Kacute;1E30 -Kacyrillic;041A -Kadescendercyrillic;049A -Kahookcyrillic;04C3 -Kappa;039A -Kastrokecyrillic;049E -Kaverticalstrokecyrillic;049C -Kcaron;01E8 -Kcedilla;0136 -Kcircle;24C0 -Kcommaaccent;0136 -Kdotbelow;1E32 -Keharmenian;0554 -Kenarmenian;053F -Khacyrillic;0425 -Kheicoptic;03E6 -Khook;0198 -Kjecyrillic;040C -Klinebelow;1E34 -Kmonospace;FF2B -Koppacyrillic;0480 -Koppagreek;03DE 
-Ksicyrillic;046E -Ksmall;F76B -L;004C -LJ;01C7 -LL;F6BF -Lacute;0139 -Lambda;039B -Lcaron;013D -Lcedilla;013B -Lcircle;24C1 -Lcircumflexbelow;1E3C -Lcommaaccent;013B -Ldot;013F -Ldotaccent;013F -Ldotbelow;1E36 -Ldotbelowmacron;1E38 -Liwnarmenian;053C -Lj;01C8 -Ljecyrillic;0409 -Llinebelow;1E3A -Lmonospace;FF2C -Lslash;0141 -Lslashsmall;F6F9 -Lsmall;F76C -M;004D -MBsquare;3386 -Macron;F6D0 -Macronsmall;F7AF -Macute;1E3E -Mcircle;24C2 -Mdotaccent;1E40 -Mdotbelow;1E42 -Menarmenian;0544 -Mmonospace;FF2D -Msmall;F76D -Mturned;019C -Mu;039C -N;004E -NJ;01CA -Nacute;0143 -Ncaron;0147 -Ncedilla;0145 -Ncircle;24C3 -Ncircumflexbelow;1E4A -Ncommaaccent;0145 -Ndotaccent;1E44 -Ndotbelow;1E46 -Nhookleft;019D -Nineroman;2168 -Nj;01CB -Njecyrillic;040A -Nlinebelow;1E48 -Nmonospace;FF2E -Nowarmenian;0546 -Nsmall;F76E -Ntilde;00D1 -Ntildesmall;F7F1 -Nu;039D -O;004F -OE;0152 -OEsmall;F6FA -Oacute;00D3 -Oacutesmall;F7F3 -Obarredcyrillic;04E8 -Obarreddieresiscyrillic;04EA -Obreve;014E -Ocaron;01D1 -Ocenteredtilde;019F -Ocircle;24C4 -Ocircumflex;00D4 -Ocircumflexacute;1ED0 -Ocircumflexdotbelow;1ED8 -Ocircumflexgrave;1ED2 -Ocircumflexhookabove;1ED4 -Ocircumflexsmall;F7F4 -Ocircumflextilde;1ED6 -Ocyrillic;041E -Odblacute;0150 -Odblgrave;020C -Odieresis;00D6 -Odieresiscyrillic;04E6 -Odieresissmall;F7F6 -Odotbelow;1ECC -Ogoneksmall;F6FB -Ograve;00D2 -Ogravesmall;F7F2 -Oharmenian;0555 -Ohm;2126 -Ohookabove;1ECE -Ohorn;01A0 -Ohornacute;1EDA -Ohorndotbelow;1EE2 -Ohorngrave;1EDC -Ohornhookabove;1EDE -Ohorntilde;1EE0 -Ohungarumlaut;0150 -Oi;01A2 -Oinvertedbreve;020E -Omacron;014C -Omacronacute;1E52 -Omacrongrave;1E50 -Omega;2126 -Omegacyrillic;0460 -Omegagreek;03A9 -Omegaroundcyrillic;047A -Omegatitlocyrillic;047C -Omegatonos;038F -Omicron;039F -Omicrontonos;038C -Omonospace;FF2F -Oneroman;2160 -Oogonek;01EA -Oogonekmacron;01EC -Oopen;0186 -Oslash;00D8 -Oslashacute;01FE -Oslashsmall;F7F8 -Osmall;F76F -Ostrokeacute;01FE -Otcyrillic;047E -Otilde;00D5 -Otildeacute;1E4C -Otildedieresis;1E4E 
-Otildesmall;F7F5 -P;0050 -Pacute;1E54 -Pcircle;24C5 -Pdotaccent;1E56 -Pecyrillic;041F -Peharmenian;054A -Pemiddlehookcyrillic;04A6 -Phi;03A6 -Phook;01A4 -Pi;03A0 -Piwrarmenian;0553 -Pmonospace;FF30 -Psi;03A8 -Psicyrillic;0470 -Psmall;F770 -Q;0051 -Qcircle;24C6 -Qmonospace;FF31 -Qsmall;F771 -R;0052 -Raarmenian;054C -Racute;0154 -Rcaron;0158 -Rcedilla;0156 -Rcircle;24C7 -Rcommaaccent;0156 -Rdblgrave;0210 -Rdotaccent;1E58 -Rdotbelow;1E5A -Rdotbelowmacron;1E5C -Reharmenian;0550 -Rfraktur;211C -Rho;03A1 -Ringsmall;F6FC -Rinvertedbreve;0212 -Rlinebelow;1E5E -Rmonospace;FF32 -Rsmall;F772 -Rsmallinverted;0281 -Rsmallinvertedsuperior;02B6 -S;0053 -SF010000;250C -SF020000;2514 -SF030000;2510 -SF040000;2518 -SF050000;253C -SF060000;252C -SF070000;2534 -SF080000;251C -SF090000;2524 -SF100000;2500 -SF110000;2502 -SF190000;2561 -SF200000;2562 -SF210000;2556 -SF220000;2555 -SF230000;2563 -SF240000;2551 -SF250000;2557 -SF260000;255D -SF270000;255C -SF280000;255B -SF360000;255E -SF370000;255F -SF380000;255A -SF390000;2554 -SF400000;2569 -SF410000;2566 -SF420000;2560 -SF430000;2550 -SF440000;256C -SF450000;2567 -SF460000;2568 -SF470000;2564 -SF480000;2565 -SF490000;2559 -SF500000;2558 -SF510000;2552 -SF520000;2553 -SF530000;256B -SF540000;256A -Sacute;015A -Sacutedotaccent;1E64 -Sampigreek;03E0 -Scaron;0160 -Scarondotaccent;1E66 -Scaronsmall;F6FD -Scedilla;015E -Schwa;018F -Schwacyrillic;04D8 -Schwadieresiscyrillic;04DA -Scircle;24C8 -Scircumflex;015C -Scommaaccent;0218 -Sdotaccent;1E60 -Sdotbelow;1E62 -Sdotbelowdotaccent;1E68 -Seharmenian;054D -Sevenroman;2166 -Shaarmenian;0547 -Shacyrillic;0428 -Shchacyrillic;0429 -Sheicoptic;03E2 -Shhacyrillic;04BA -Shimacoptic;03EC -Sigma;03A3 -Sixroman;2165 -Smonospace;FF33 -Softsigncyrillic;042C -Ssmall;F773 -Stigmagreek;03DA -T;0054 -Tau;03A4 -Tbar;0166 -Tcaron;0164 -Tcedilla;0162 -Tcircle;24C9 -Tcircumflexbelow;1E70 -Tcommaaccent;0162 -Tdotaccent;1E6A -Tdotbelow;1E6C -Tecyrillic;0422 -Tedescendercyrillic;04AC -Tenroman;2169 
-Tetsecyrillic;04B4 -Theta;0398 -Thook;01AC -Thorn;00DE -Thornsmall;F7FE -Threeroman;2162 -Tildesmall;F6FE -Tiwnarmenian;054F -Tlinebelow;1E6E -Tmonospace;FF34 -Toarmenian;0539 -Tonefive;01BC -Tonesix;0184 -Tonetwo;01A7 -Tretroflexhook;01AE -Tsecyrillic;0426 -Tshecyrillic;040B -Tsmall;F774 -Twelveroman;216B -Tworoman;2161 -U;0055 -Uacute;00DA -Uacutesmall;F7FA -Ubreve;016C -Ucaron;01D3 -Ucircle;24CA -Ucircumflex;00DB -Ucircumflexbelow;1E76 -Ucircumflexsmall;F7FB -Ucyrillic;0423 -Udblacute;0170 -Udblgrave;0214 -Udieresis;00DC -Udieresisacute;01D7 -Udieresisbelow;1E72 -Udieresiscaron;01D9 -Udieresiscyrillic;04F0 -Udieresisgrave;01DB -Udieresismacron;01D5 -Udieresissmall;F7FC -Udotbelow;1EE4 -Ugrave;00D9 -Ugravesmall;F7F9 -Uhookabove;1EE6 -Uhorn;01AF -Uhornacute;1EE8 -Uhorndotbelow;1EF0 -Uhorngrave;1EEA -Uhornhookabove;1EEC -Uhorntilde;1EEE -Uhungarumlaut;0170 -Uhungarumlautcyrillic;04F2 -Uinvertedbreve;0216 -Ukcyrillic;0478 -Umacron;016A -Umacroncyrillic;04EE -Umacrondieresis;1E7A -Umonospace;FF35 -Uogonek;0172 -Upsilon;03A5 -Upsilon1;03D2 -Upsilonacutehooksymbolgreek;03D3 -Upsilonafrican;01B1 -Upsilondieresis;03AB -Upsilondieresishooksymbolgreek;03D4 -Upsilonhooksymbol;03D2 -Upsilontonos;038E -Uring;016E -Ushortcyrillic;040E -Usmall;F775 -Ustraightcyrillic;04AE -Ustraightstrokecyrillic;04B0 -Utilde;0168 -Utildeacute;1E78 -Utildebelow;1E74 -V;0056 -Vcircle;24CB -Vdotbelow;1E7E -Vecyrillic;0412 -Vewarmenian;054E -Vhook;01B2 -Vmonospace;FF36 -Voarmenian;0548 -Vsmall;F776 -Vtilde;1E7C -W;0057 -Wacute;1E82 -Wcircle;24CC -Wcircumflex;0174 -Wdieresis;1E84 -Wdotaccent;1E86 -Wdotbelow;1E88 -Wgrave;1E80 -Wmonospace;FF37 -Wsmall;F777 -X;0058 -Xcircle;24CD -Xdieresis;1E8C -Xdotaccent;1E8A -Xeharmenian;053D -Xi;039E -Xmonospace;FF38 -Xsmall;F778 -Y;0059 -Yacute;00DD -Yacutesmall;F7FD -Yatcyrillic;0462 -Ycircle;24CE -Ycircumflex;0176 -Ydieresis;0178 -Ydieresissmall;F7FF -Ydotaccent;1E8E -Ydotbelow;1EF4 -Yericyrillic;042B -Yerudieresiscyrillic;04F8 -Ygrave;1EF2 -Yhook;01B3 
-Yhookabove;1EF6 -Yiarmenian;0545 -Yicyrillic;0407 -Yiwnarmenian;0552 -Ymonospace;FF39 -Ysmall;F779 -Ytilde;1EF8 -Yusbigcyrillic;046A -Yusbigiotifiedcyrillic;046C -Yuslittlecyrillic;0466 -Yuslittleiotifiedcyrillic;0468 -Z;005A -Zaarmenian;0536 -Zacute;0179 -Zcaron;017D -Zcaronsmall;F6FF -Zcircle;24CF -Zcircumflex;1E90 -Zdot;017B -Zdotaccent;017B -Zdotbelow;1E92 -Zecyrillic;0417 -Zedescendercyrillic;0498 -Zedieresiscyrillic;04DE -Zeta;0396 -Zhearmenian;053A -Zhebrevecyrillic;04C1 -Zhecyrillic;0416 -Zhedescendercyrillic;0496 -Zhedieresiscyrillic;04DC -Zlinebelow;1E94 -Zmonospace;FF3A -Zsmall;F77A -Zstroke;01B5 -a;0061 -aabengali;0986 -aacute;00E1 -aadeva;0906 -aagujarati;0A86 -aagurmukhi;0A06 -aamatragurmukhi;0A3E -aarusquare;3303 -aavowelsignbengali;09BE -aavowelsigndeva;093E -aavowelsigngujarati;0ABE -abbreviationmarkarmenian;055F -abbreviationsigndeva;0970 -abengali;0985 -abopomofo;311A -abreve;0103 -abreveacute;1EAF -abrevecyrillic;04D1 -abrevedotbelow;1EB7 -abrevegrave;1EB1 -abrevehookabove;1EB3 -abrevetilde;1EB5 -acaron;01CE -acircle;24D0 -acircumflex;00E2 -acircumflexacute;1EA5 -acircumflexdotbelow;1EAD -acircumflexgrave;1EA7 -acircumflexhookabove;1EA9 -acircumflextilde;1EAB -acute;00B4 -acutebelowcmb;0317 -acutecmb;0301 -acutecomb;0301 -acutedeva;0954 -acutelowmod;02CF -acutetonecmb;0341 -acyrillic;0430 -adblgrave;0201 -addakgurmukhi;0A71 -adeva;0905 -adieresis;00E4 -adieresiscyrillic;04D3 -adieresismacron;01DF -adotbelow;1EA1 -adotmacron;01E1 -ae;00E6 -aeacute;01FD -aekorean;3150 -aemacron;01E3 -afii00208;2015 -afii08941;20A4 -afii10017;0410 -afii10018;0411 -afii10019;0412 -afii10020;0413 -afii10021;0414 -afii10022;0415 -afii10023;0401 -afii10024;0416 -afii10025;0417 -afii10026;0418 -afii10027;0419 -afii10028;041A -afii10029;041B -afii10030;041C -afii10031;041D -afii10032;041E -afii10033;041F -afii10034;0420 -afii10035;0421 -afii10036;0422 -afii10037;0423 -afii10038;0424 -afii10039;0425 -afii10040;0426 -afii10041;0427 -afii10042;0428 -afii10043;0429 
-afii10044;042A -afii10045;042B -afii10046;042C -afii10047;042D -afii10048;042E -afii10049;042F -afii10050;0490 -afii10051;0402 -afii10052;0403 -afii10053;0404 -afii10054;0405 -afii10055;0406 -afii10056;0407 -afii10057;0408 -afii10058;0409 -afii10059;040A -afii10060;040B -afii10061;040C -afii10062;040E -afii10063;F6C4 -afii10064;F6C5 -afii10065;0430 -afii10066;0431 -afii10067;0432 -afii10068;0433 -afii10069;0434 -afii10070;0435 -afii10071;0451 -afii10072;0436 -afii10073;0437 -afii10074;0438 -afii10075;0439 -afii10076;043A -afii10077;043B -afii10078;043C -afii10079;043D -afii10080;043E -afii10081;043F -afii10082;0440 -afii10083;0441 -afii10084;0442 -afii10085;0443 -afii10086;0444 -afii10087;0445 -afii10088;0446 -afii10089;0447 -afii10090;0448 -afii10091;0449 -afii10092;044A -afii10093;044B -afii10094;044C -afii10095;044D -afii10096;044E -afii10097;044F -afii10098;0491 -afii10099;0452 -afii10100;0453 -afii10101;0454 -afii10102;0455 -afii10103;0456 -afii10104;0457 -afii10105;0458 -afii10106;0459 -afii10107;045A -afii10108;045B -afii10109;045C -afii10110;045E -afii10145;040F -afii10146;0462 -afii10147;0472 -afii10148;0474 -afii10192;F6C6 -afii10193;045F -afii10194;0463 -afii10195;0473 -afii10196;0475 -afii10831;F6C7 -afii10832;F6C8 -afii10846;04D9 -afii299;200E -afii300;200F -afii301;200D -afii57381;066A -afii57388;060C -afii57392;0660 -afii57393;0661 -afii57394;0662 -afii57395;0663 -afii57396;0664 -afii57397;0665 -afii57398;0666 -afii57399;0667 -afii57400;0668 -afii57401;0669 -afii57403;061B -afii57407;061F -afii57409;0621 -afii57410;0622 -afii57411;0623 -afii57412;0624 -afii57413;0625 -afii57414;0626 -afii57415;0627 -afii57416;0628 -afii57417;0629 -afii57418;062A -afii57419;062B -afii57420;062C -afii57421;062D -afii57422;062E -afii57423;062F -afii57424;0630 -afii57425;0631 -afii57426;0632 -afii57427;0633 -afii57428;0634 -afii57429;0635 -afii57430;0636 -afii57431;0637 -afii57432;0638 -afii57433;0639 -afii57434;063A -afii57440;0640 -afii57441;0641 -afii57442;0642 
-afii57443;0643 -afii57444;0644 -afii57445;0645 -afii57446;0646 -afii57448;0648 -afii57449;0649 -afii57450;064A -afii57451;064B -afii57452;064C -afii57453;064D -afii57454;064E -afii57455;064F -afii57456;0650 -afii57457;0651 -afii57458;0652 -afii57470;0647 -afii57505;06A4 -afii57506;067E -afii57507;0686 -afii57508;0698 -afii57509;06AF -afii57511;0679 -afii57512;0688 -afii57513;0691 -afii57514;06BA -afii57519;06D2 -afii57534;06D5 -afii57636;20AA -afii57645;05BE -afii57658;05C3 -afii57664;05D0 -afii57665;05D1 -afii57666;05D2 -afii57667;05D3 -afii57668;05D4 -afii57669;05D5 -afii57670;05D6 -afii57671;05D7 -afii57672;05D8 -afii57673;05D9 -afii57674;05DA -afii57675;05DB -afii57676;05DC -afii57677;05DD -afii57678;05DE -afii57679;05DF -afii57680;05E0 -afii57681;05E1 -afii57682;05E2 -afii57683;05E3 -afii57684;05E4 -afii57685;05E5 -afii57686;05E6 -afii57687;05E7 -afii57688;05E8 -afii57689;05E9 -afii57690;05EA -afii57694;FB2A -afii57695;FB2B -afii57700;FB4B -afii57705;FB1F -afii57716;05F0 -afii57717;05F1 -afii57718;05F2 -afii57723;FB35 -afii57793;05B4 -afii57794;05B5 -afii57795;05B6 -afii57796;05BB -afii57797;05B8 -afii57798;05B7 -afii57799;05B0 -afii57800;05B2 -afii57801;05B1 -afii57802;05B3 -afii57803;05C2 -afii57804;05C1 -afii57806;05B9 -afii57807;05BC -afii57839;05BD -afii57841;05BF -afii57842;05C0 -afii57929;02BC -afii61248;2105 -afii61289;2113 -afii61352;2116 -afii61573;202C -afii61574;202D -afii61575;202E -afii61664;200C -afii63167;066D -afii64937;02BD -agrave;00E0 -agujarati;0A85 -agurmukhi;0A05 -ahiragana;3042 -ahookabove;1EA3 -aibengali;0990 -aibopomofo;311E -aideva;0910 -aiecyrillic;04D5 -aigujarati;0A90 -aigurmukhi;0A10 -aimatragurmukhi;0A48 -ainarabic;0639 -ainfinalarabic;FECA -aininitialarabic;FECB -ainmedialarabic;FECC -ainvertedbreve;0203 -aivowelsignbengali;09C8 -aivowelsigndeva;0948 -aivowelsigngujarati;0AC8 -akatakana;30A2 -akatakanahalfwidth;FF71 -akorean;314F -alef;05D0 -alefarabic;0627 -alefdageshhebrew;FB30 -aleffinalarabic;FE8E 
-alefhamzaabovearabic;0623 -alefhamzaabovefinalarabic;FE84 -alefhamzabelowarabic;0625 -alefhamzabelowfinalarabic;FE88 -alefhebrew;05D0 -aleflamedhebrew;FB4F -alefmaddaabovearabic;0622 -alefmaddaabovefinalarabic;FE82 -alefmaksuraarabic;0649 -alefmaksurafinalarabic;FEF0 -alefmaksurainitialarabic;FEF3 -alefmaksuramedialarabic;FEF4 -alefpatahhebrew;FB2E -alefqamatshebrew;FB2F -aleph;2135 -allequal;224C -alpha;03B1 -alphatonos;03AC -amacron;0101 -amonospace;FF41 -ampersand;0026 -ampersandmonospace;FF06 -ampersandsmall;F726 -amsquare;33C2 -anbopomofo;3122 -angbopomofo;3124 -angkhankhuthai;0E5A -angle;2220 -anglebracketleft;3008 -anglebracketleftvertical;FE3F -anglebracketright;3009 -anglebracketrightvertical;FE40 -angleleft;2329 -angleright;232A -angstrom;212B -anoteleia;0387 -anudattadeva;0952 -anusvarabengali;0982 -anusvaradeva;0902 -anusvaragujarati;0A82 -aogonek;0105 -apaatosquare;3300 -aparen;249C -apostrophearmenian;055A -apostrophemod;02BC -apple;F8FF -approaches;2250 -approxequal;2248 -approxequalorimage;2252 -approximatelyequal;2245 -araeaekorean;318E -araeakorean;318D -arc;2312 -arighthalfring;1E9A -aring;00E5 -aringacute;01FB -aringbelow;1E01 -arrowboth;2194 -arrowdashdown;21E3 -arrowdashleft;21E0 -arrowdashright;21E2 -arrowdashup;21E1 -arrowdblboth;21D4 -arrowdbldown;21D3 -arrowdblleft;21D0 -arrowdblright;21D2 -arrowdblup;21D1 -arrowdown;2193 -arrowdownleft;2199 -arrowdownright;2198 -arrowdownwhite;21E9 -arrowheaddownmod;02C5 -arrowheadleftmod;02C2 -arrowheadrightmod;02C3 -arrowheadupmod;02C4 -arrowhorizex;F8E7 -arrowleft;2190 -arrowleftdbl;21D0 -arrowleftdblstroke;21CD -arrowleftoverright;21C6 -arrowleftwhite;21E6 -arrowright;2192 -arrowrightdblstroke;21CF -arrowrightheavy;279E -arrowrightoverleft;21C4 -arrowrightwhite;21E8 -arrowtableft;21E4 -arrowtabright;21E5 -arrowup;2191 -arrowupdn;2195 -arrowupdnbse;21A8 -arrowupdownbase;21A8 -arrowupleft;2196 -arrowupleftofdown;21C5 -arrowupright;2197 -arrowupwhite;21E7 -arrowvertex;F8E6 -asciicircum;005E 
-asciicircummonospace;FF3E -asciitilde;007E -asciitildemonospace;FF5E -ascript;0251 -ascriptturned;0252 -asmallhiragana;3041 -asmallkatakana;30A1 -asmallkatakanahalfwidth;FF67 -asterisk;002A -asteriskaltonearabic;066D -asteriskarabic;066D -asteriskmath;2217 -asteriskmonospace;FF0A -asterisksmall;FE61 -asterism;2042 -asuperior;F6E9 -asymptoticallyequal;2243 -at;0040 -atilde;00E3 -atmonospace;FF20 -atsmall;FE6B -aturned;0250 -aubengali;0994 -aubopomofo;3120 -audeva;0914 -augujarati;0A94 -augurmukhi;0A14 -aulengthmarkbengali;09D7 -aumatragurmukhi;0A4C -auvowelsignbengali;09CC -auvowelsigndeva;094C -auvowelsigngujarati;0ACC -avagrahadeva;093D -aybarmenian;0561 -ayin;05E2 -ayinaltonehebrew;FB20 -ayinhebrew;05E2 -b;0062 -babengali;09AC -backslash;005C -backslashmonospace;FF3C -badeva;092C -bagujarati;0AAC -bagurmukhi;0A2C -bahiragana;3070 -bahtthai;0E3F -bakatakana;30D0 -bar;007C -barmonospace;FF5C -bbopomofo;3105 -bcircle;24D1 -bdotaccent;1E03 -bdotbelow;1E05 -beamedsixteenthnotes;266C -because;2235 -becyrillic;0431 -beharabic;0628 -behfinalarabic;FE90 -behinitialarabic;FE91 -behiragana;3079 -behmedialarabic;FE92 -behmeeminitialarabic;FC9F -behmeemisolatedarabic;FC08 -behnoonfinalarabic;FC6D -bekatakana;30D9 -benarmenian;0562 -bet;05D1 -beta;03B2 -betasymbolgreek;03D0 -betdagesh;FB31 -betdageshhebrew;FB31 -bethebrew;05D1 -betrafehebrew;FB4C -bhabengali;09AD -bhadeva;092D -bhagujarati;0AAD -bhagurmukhi;0A2D -bhook;0253 -bihiragana;3073 -bikatakana;30D3 -bilabialclick;0298 -bindigurmukhi;0A02 -birusquare;3331 -blackcircle;25CF -blackdiamond;25C6 -blackdownpointingtriangle;25BC -blackleftpointingpointer;25C4 -blackleftpointingtriangle;25C0 -blacklenticularbracketleft;3010 -blacklenticularbracketleftvertical;FE3B -blacklenticularbracketright;3011 -blacklenticularbracketrightvertical;FE3C -blacklowerlefttriangle;25E3 -blacklowerrighttriangle;25E2 -blackrectangle;25AC -blackrightpointingpointer;25BA -blackrightpointingtriangle;25B6 -blacksmallsquare;25AA 
-blacksmilingface;263B -blacksquare;25A0 -blackstar;2605 -blackupperlefttriangle;25E4 -blackupperrighttriangle;25E5 -blackuppointingsmalltriangle;25B4 -blackuppointingtriangle;25B2 -blank;2423 -blinebelow;1E07 -block;2588 -bmonospace;FF42 -bobaimaithai;0E1A -bohiragana;307C -bokatakana;30DC -bparen;249D -bqsquare;33C3 -braceex;F8F4 -braceleft;007B -braceleftbt;F8F3 -braceleftmid;F8F2 -braceleftmonospace;FF5B -braceleftsmall;FE5B -bracelefttp;F8F1 -braceleftvertical;FE37 -braceright;007D -bracerightbt;F8FE -bracerightmid;F8FD -bracerightmonospace;FF5D -bracerightsmall;FE5C -bracerighttp;F8FC -bracerightvertical;FE38 -bracketleft;005B -bracketleftbt;F8F0 -bracketleftex;F8EF -bracketleftmonospace;FF3B -bracketlefttp;F8EE -bracketright;005D -bracketrightbt;F8FB -bracketrightex;F8FA -bracketrightmonospace;FF3D -bracketrighttp;F8F9 -breve;02D8 -brevebelowcmb;032E -brevecmb;0306 -breveinvertedbelowcmb;032F -breveinvertedcmb;0311 -breveinverteddoublecmb;0361 -bridgebelowcmb;032A -bridgeinvertedbelowcmb;033A -brokenbar;00A6 -bstroke;0180 -bsuperior;F6EA -btopbar;0183 -buhiragana;3076 -bukatakana;30D6 -bullet;2022 -bulletinverse;25D8 -bulletoperator;2219 -bullseye;25CE -c;0063 -caarmenian;056E -cabengali;099A -cacute;0107 -cadeva;091A -cagujarati;0A9A -cagurmukhi;0A1A -calsquare;3388 -candrabindubengali;0981 -candrabinducmb;0310 -candrabindudeva;0901 -candrabindugujarati;0A81 -capslock;21EA -careof;2105 -caron;02C7 -caronbelowcmb;032C -caroncmb;030C -carriagereturn;21B5 -cbopomofo;3118 -ccaron;010D -ccedilla;00E7 -ccedillaacute;1E09 -ccircle;24D2 -ccircumflex;0109 -ccurl;0255 -cdot;010B -cdotaccent;010B -cdsquare;33C5 -cedilla;00B8 -cedillacmb;0327 -cent;00A2 -centigrade;2103 -centinferior;F6DF -centmonospace;FFE0 -centoldstyle;F7A2 -centsuperior;F6E0 -chaarmenian;0579 -chabengali;099B -chadeva;091B -chagujarati;0A9B -chagurmukhi;0A1B -chbopomofo;3114 -cheabkhasiancyrillic;04BD -checkmark;2713 -checyrillic;0447 -chedescenderabkhasiancyrillic;04BF -chedescendercyrillic;04B7 
-chedieresiscyrillic;04F5 -cheharmenian;0573 -chekhakassiancyrillic;04CC -cheverticalstrokecyrillic;04B9 -chi;03C7 -chieuchacirclekorean;3277 -chieuchaparenkorean;3217 -chieuchcirclekorean;3269 -chieuchkorean;314A -chieuchparenkorean;3209 -chochangthai;0E0A -chochanthai;0E08 -chochingthai;0E09 -chochoethai;0E0C -chook;0188 -cieucacirclekorean;3276 -cieucaparenkorean;3216 -cieuccirclekorean;3268 -cieuckorean;3148 -cieucparenkorean;3208 -cieucuparenkorean;321C -circle;25CB -circlemultiply;2297 -circleot;2299 -circleplus;2295 -circlepostalmark;3036 -circlewithlefthalfblack;25D0 -circlewithrighthalfblack;25D1 -circumflex;02C6 -circumflexbelowcmb;032D -circumflexcmb;0302 -clear;2327 -clickalveolar;01C2 -clickdental;01C0 -clicklateral;01C1 -clickretroflex;01C3 -club;2663 -clubsuitblack;2663 -clubsuitwhite;2667 -cmcubedsquare;33A4 -cmonospace;FF43 -cmsquaredsquare;33A0 -coarmenian;0581 -colon;003A -colonmonetary;20A1 -colonmonospace;FF1A -colonsign;20A1 -colonsmall;FE55 -colontriangularhalfmod;02D1 -colontriangularmod;02D0 -comma;002C -commaabovecmb;0313 -commaaboverightcmb;0315 -commaaccent;F6C3 -commaarabic;060C -commaarmenian;055D -commainferior;F6E1 -commamonospace;FF0C -commareversedabovecmb;0314 -commareversedmod;02BD -commasmall;FE50 -commasuperior;F6E2 -commaturnedabovecmb;0312 -commaturnedmod;02BB -compass;263C -congruent;2245 -contourintegral;222E -control;2303 -controlACK;0006 -controlBEL;0007 -controlBS;0008 -controlCAN;0018 -controlCR;000D -controlDC1;0011 -controlDC2;0012 -controlDC3;0013 -controlDC4;0014 -controlDEL;007F -controlDLE;0010 -controlEM;0019 -controlENQ;0005 -controlEOT;0004 -controlESC;001B -controlETB;0017 -controlETX;0003 -controlFF;000C -controlFS;001C -controlGS;001D -controlHT;0009 -controlLF;000A -controlNAK;0015 -controlRS;001E -controlSI;000F -controlSO;000E -controlSOT;0002 -controlSTX;0001 -controlSUB;001A -controlSYN;0016 -controlUS;001F -controlVT;000B -copyright;00A9 -copyrightsans;F8E9 -copyrightserif;F6D9 -cornerbracketleft;300C 
-cornerbracketlefthalfwidth;FF62 -cornerbracketleftvertical;FE41 -cornerbracketright;300D -cornerbracketrighthalfwidth;FF63 -cornerbracketrightvertical;FE42 -corporationsquare;337F -cosquare;33C7 -coverkgsquare;33C6 -cparen;249E -cruzeiro;20A2 -cstretched;0297 -curlyand;22CF -curlyor;22CE -currency;00A4 -cyrBreve;F6D1 -cyrFlex;F6D2 -cyrbreve;F6D4 -cyrflex;F6D5 -d;0064 -daarmenian;0564 -dabengali;09A6 -dadarabic;0636 -dadeva;0926 -dadfinalarabic;FEBE -dadinitialarabic;FEBF -dadmedialarabic;FEC0 -dagesh;05BC -dageshhebrew;05BC -dagger;2020 -daggerdbl;2021 -dagujarati;0AA6 -dagurmukhi;0A26 -dahiragana;3060 -dakatakana;30C0 -dalarabic;062F -dalet;05D3 -daletdagesh;FB33 -daletdageshhebrew;FB33 -dalethatafpatah;05D3 05B2 -dalethatafpatahhebrew;05D3 05B2 -dalethatafsegol;05D3 05B1 -dalethatafsegolhebrew;05D3 05B1 -dalethebrew;05D3 -dalethiriq;05D3 05B4 -dalethiriqhebrew;05D3 05B4 -daletholam;05D3 05B9 -daletholamhebrew;05D3 05B9 -daletpatah;05D3 05B7 -daletpatahhebrew;05D3 05B7 -daletqamats;05D3 05B8 -daletqamatshebrew;05D3 05B8 -daletqubuts;05D3 05BB -daletqubutshebrew;05D3 05BB -daletsegol;05D3 05B6 -daletsegolhebrew;05D3 05B6 -daletsheva;05D3 05B0 -daletshevahebrew;05D3 05B0 -dalettsere;05D3 05B5 -dalettserehebrew;05D3 05B5 -dalfinalarabic;FEAA -dammaarabic;064F -dammalowarabic;064F -dammatanaltonearabic;064C -dammatanarabic;064C -danda;0964 -dargahebrew;05A7 -dargalefthebrew;05A7 -dasiapneumatacyrilliccmb;0485 -dblGrave;F6D3 -dblanglebracketleft;300A -dblanglebracketleftvertical;FE3D -dblanglebracketright;300B -dblanglebracketrightvertical;FE3E -dblarchinvertedbelowcmb;032B -dblarrowleft;21D4 -dblarrowright;21D2 -dbldanda;0965 -dblgrave;F6D6 -dblgravecmb;030F -dblintegral;222C -dbllowline;2017 -dbllowlinecmb;0333 -dbloverlinecmb;033F -dblprimemod;02BA -dblverticalbar;2016 -dblverticallineabovecmb;030E -dbopomofo;3109 -dbsquare;33C8 -dcaron;010F -dcedilla;1E11 -dcircle;24D3 -dcircumflexbelow;1E13 -dcroat;0111 -ddabengali;09A1 -ddadeva;0921 -ddagujarati;0AA1 
-ddagurmukhi;0A21 -ddalarabic;0688 -ddalfinalarabic;FB89 -dddhadeva;095C -ddhabengali;09A2 -ddhadeva;0922 -ddhagujarati;0AA2 -ddhagurmukhi;0A22 -ddotaccent;1E0B -ddotbelow;1E0D -decimalseparatorarabic;066B -decimalseparatorpersian;066B -decyrillic;0434 -degree;00B0 -dehihebrew;05AD -dehiragana;3067 -deicoptic;03EF -dekatakana;30C7 -deleteleft;232B -deleteright;2326 -delta;03B4 -deltaturned;018D -denominatorminusonenumeratorbengali;09F8 -dezh;02A4 -dhabengali;09A7 -dhadeva;0927 -dhagujarati;0AA7 -dhagurmukhi;0A27 -dhook;0257 -dialytikatonos;0385 -dialytikatonoscmb;0344 -diamond;2666 -diamondsuitwhite;2662 -dieresis;00A8 -dieresisacute;F6D7 -dieresisbelowcmb;0324 -dieresiscmb;0308 -dieresisgrave;F6D8 -dieresistonos;0385 -dihiragana;3062 -dikatakana;30C2 -dittomark;3003 -divide;00F7 -divides;2223 -divisionslash;2215 -djecyrillic;0452 -dkshade;2593 -dlinebelow;1E0F -dlsquare;3397 -dmacron;0111 -dmonospace;FF44 -dnblock;2584 -dochadathai;0E0E -dodekthai;0E14 -dohiragana;3069 -dokatakana;30C9 -dollar;0024 -dollarinferior;F6E3 -dollarmonospace;FF04 -dollaroldstyle;F724 -dollarsmall;FE69 -dollarsuperior;F6E4 -dong;20AB -dorusquare;3326 -dotaccent;02D9 -dotaccentcmb;0307 -dotbelowcmb;0323 -dotbelowcomb;0323 -dotkatakana;30FB -dotlessi;0131 -dotlessj;F6BE -dotlessjstrokehook;0284 -dotmath;22C5 -dottedcircle;25CC -doubleyodpatah;FB1F -doubleyodpatahhebrew;FB1F -downtackbelowcmb;031E -downtackmod;02D5 -dparen;249F -dsuperior;F6EB -dtail;0256 -dtopbar;018C -duhiragana;3065 -dukatakana;30C5 -dz;01F3 -dzaltone;02A3 -dzcaron;01C6 -dzcurl;02A5 -dzeabkhasiancyrillic;04E1 -dzecyrillic;0455 -dzhecyrillic;045F -e;0065 -eacute;00E9 -earth;2641 -ebengali;098F -ebopomofo;311C -ebreve;0115 -ecandradeva;090D -ecandragujarati;0A8D -ecandravowelsigndeva;0945 -ecandravowelsigngujarati;0AC5 -ecaron;011B -ecedillabreve;1E1D -echarmenian;0565 -echyiwnarmenian;0587 -ecircle;24D4 -ecircumflex;00EA -ecircumflexacute;1EBF -ecircumflexbelow;1E19 -ecircumflexdotbelow;1EC7 -ecircumflexgrave;1EC1 
-ecircumflexhookabove;1EC3 -ecircumflextilde;1EC5 -ecyrillic;0454 -edblgrave;0205 -edeva;090F -edieresis;00EB -edot;0117 -edotaccent;0117 -edotbelow;1EB9 -eegurmukhi;0A0F -eematragurmukhi;0A47 -efcyrillic;0444 -egrave;00E8 -egujarati;0A8F -eharmenian;0567 -ehbopomofo;311D -ehiragana;3048 -ehookabove;1EBB -eibopomofo;311F -eight;0038 -eightarabic;0668 -eightbengali;09EE -eightcircle;2467 -eightcircleinversesansserif;2791 -eightdeva;096E -eighteencircle;2471 -eighteenparen;2485 -eighteenperiod;2499 -eightgujarati;0AEE -eightgurmukhi;0A6E -eighthackarabic;0668 -eighthangzhou;3028 -eighthnotebeamed;266B -eightideographicparen;3227 -eightinferior;2088 -eightmonospace;FF18 -eightoldstyle;F738 -eightparen;247B -eightperiod;248F -eightpersian;06F8 -eightroman;2177 -eightsuperior;2078 -eightthai;0E58 -einvertedbreve;0207 -eiotifiedcyrillic;0465 -ekatakana;30A8 -ekatakanahalfwidth;FF74 -ekonkargurmukhi;0A74 -ekorean;3154 -elcyrillic;043B -element;2208 -elevencircle;246A -elevenparen;247E -elevenperiod;2492 -elevenroman;217A -ellipsis;2026 -ellipsisvertical;22EE -emacron;0113 -emacronacute;1E17 -emacrongrave;1E15 -emcyrillic;043C -emdash;2014 -emdashvertical;FE31 -emonospace;FF45 -emphasismarkarmenian;055B -emptyset;2205 -enbopomofo;3123 -encyrillic;043D -endash;2013 -endashvertical;FE32 -endescendercyrillic;04A3 -eng;014B -engbopomofo;3125 -enghecyrillic;04A5 -enhookcyrillic;04C8 -enspace;2002 -eogonek;0119 -eokorean;3153 -eopen;025B -eopenclosed;029A -eopenreversed;025C -eopenreversedclosed;025E -eopenreversedhook;025D -eparen;24A0 -epsilon;03B5 -epsilontonos;03AD -equal;003D -equalmonospace;FF1D -equalsmall;FE66 -equalsuperior;207C -equivalence;2261 -erbopomofo;3126 -ercyrillic;0440 -ereversed;0258 -ereversedcyrillic;044D -escyrillic;0441 -esdescendercyrillic;04AB -esh;0283 -eshcurl;0286 -eshortdeva;090E -eshortvowelsigndeva;0946 -eshreversedloop;01AA -eshsquatreversed;0285 -esmallhiragana;3047 -esmallkatakana;30A7 -esmallkatakanahalfwidth;FF6A -estimated;212E 
-esuperior;F6EC -eta;03B7 -etarmenian;0568 -etatonos;03AE -eth;00F0 -etilde;1EBD -etildebelow;1E1B -etnahtafoukhhebrew;0591 -etnahtafoukhlefthebrew;0591 -etnahtahebrew;0591 -etnahtalefthebrew;0591 -eturned;01DD -eukorean;3161 -euro;20AC -evowelsignbengali;09C7 -evowelsigndeva;0947 -evowelsigngujarati;0AC7 -exclam;0021 -exclamarmenian;055C -exclamdbl;203C -exclamdown;00A1 -exclamdownsmall;F7A1 -exclammonospace;FF01 -exclamsmall;F721 -existential;2203 -ezh;0292 -ezhcaron;01EF -ezhcurl;0293 -ezhreversed;01B9 -ezhtail;01BA -f;0066 -fadeva;095E -fagurmukhi;0A5E -fahrenheit;2109 -fathaarabic;064E -fathalowarabic;064E -fathatanarabic;064B -fbopomofo;3108 -fcircle;24D5 -fdotaccent;1E1F -feharabic;0641 -feharmenian;0586 -fehfinalarabic;FED2 -fehinitialarabic;FED3 -fehmedialarabic;FED4 -feicoptic;03E5 -female;2640 -ff;FB00 -ffi;FB03 -ffl;FB04 -fi;FB01 -fifteencircle;246E -fifteenparen;2482 -fifteenperiod;2496 -figuredash;2012 -filledbox;25A0 -filledrect;25AC -finalkaf;05DA -finalkafdagesh;FB3A -finalkafdageshhebrew;FB3A -finalkafhebrew;05DA -finalkafqamats;05DA 05B8 -finalkafqamatshebrew;05DA 05B8 -finalkafsheva;05DA 05B0 -finalkafshevahebrew;05DA 05B0 -finalmem;05DD -finalmemhebrew;05DD -finalnun;05DF -finalnunhebrew;05DF -finalpe;05E3 -finalpehebrew;05E3 -finaltsadi;05E5 -finaltsadihebrew;05E5 -firsttonechinese;02C9 -fisheye;25C9 -fitacyrillic;0473 -five;0035 -fivearabic;0665 -fivebengali;09EB -fivecircle;2464 -fivecircleinversesansserif;278E -fivedeva;096B -fiveeighths;215D -fivegujarati;0AEB -fivegurmukhi;0A6B -fivehackarabic;0665 -fivehangzhou;3025 -fiveideographicparen;3224 -fiveinferior;2085 -fivemonospace;FF15 -fiveoldstyle;F735 -fiveparen;2478 -fiveperiod;248C -fivepersian;06F5 -fiveroman;2174 -fivesuperior;2075 -fivethai;0E55 -fl;FB02 -florin;0192 -fmonospace;FF46 -fmsquare;3399 -fofanthai;0E1F -fofathai;0E1D -fongmanthai;0E4F -forall;2200 -four;0034 -fourarabic;0664 -fourbengali;09EA -fourcircle;2463 -fourcircleinversesansserif;278D -fourdeva;096A 
-fourgujarati;0AEA -fourgurmukhi;0A6A -fourhackarabic;0664 -fourhangzhou;3024 -fourideographicparen;3223 -fourinferior;2084 -fourmonospace;FF14 -fournumeratorbengali;09F7 -fouroldstyle;F734 -fourparen;2477 -fourperiod;248B -fourpersian;06F4 -fourroman;2173 -foursuperior;2074 -fourteencircle;246D -fourteenparen;2481 -fourteenperiod;2495 -fourthai;0E54 -fourthtonechinese;02CB -fparen;24A1 -fraction;2044 -franc;20A3 -g;0067 -gabengali;0997 -gacute;01F5 -gadeva;0917 -gafarabic;06AF -gaffinalarabic;FB93 -gafinitialarabic;FB94 -gafmedialarabic;FB95 -gagujarati;0A97 -gagurmukhi;0A17 -gahiragana;304C -gakatakana;30AC -gamma;03B3 -gammalatinsmall;0263 -gammasuperior;02E0 -gangiacoptic;03EB -gbopomofo;310D -gbreve;011F -gcaron;01E7 -gcedilla;0123 -gcircle;24D6 -gcircumflex;011D -gcommaaccent;0123 -gdot;0121 -gdotaccent;0121 -gecyrillic;0433 -gehiragana;3052 -gekatakana;30B2 -geometricallyequal;2251 -gereshaccenthebrew;059C -gereshhebrew;05F3 -gereshmuqdamhebrew;059D -germandbls;00DF -gershayimaccenthebrew;059E -gershayimhebrew;05F4 -getamark;3013 -ghabengali;0998 -ghadarmenian;0572 -ghadeva;0918 -ghagujarati;0A98 -ghagurmukhi;0A18 -ghainarabic;063A -ghainfinalarabic;FECE -ghaininitialarabic;FECF -ghainmedialarabic;FED0 -ghemiddlehookcyrillic;0495 -ghestrokecyrillic;0493 -gheupturncyrillic;0491 -ghhadeva;095A -ghhagurmukhi;0A5A -ghook;0260 -ghzsquare;3393 -gihiragana;304E -gikatakana;30AE -gimarmenian;0563 -gimel;05D2 -gimeldagesh;FB32 -gimeldageshhebrew;FB32 -gimelhebrew;05D2 -gjecyrillic;0453 -glottalinvertedstroke;01BE -glottalstop;0294 -glottalstopinverted;0296 -glottalstopmod;02C0 -glottalstopreversed;0295 -glottalstopreversedmod;02C1 -glottalstopreversedsuperior;02E4 -glottalstopstroke;02A1 -glottalstopstrokereversed;02A2 -gmacron;1E21 -gmonospace;FF47 -gohiragana;3054 -gokatakana;30B4 -gparen;24A2 -gpasquare;33AC -gradient;2207 -grave;0060 -gravebelowcmb;0316 -gravecmb;0300 -gravecomb;0300 -gravedeva;0953 -gravelowmod;02CE -gravemonospace;FF40 -gravetonecmb;0340 
-greater;003E -greaterequal;2265 -greaterequalorless;22DB -greatermonospace;FF1E -greaterorequivalent;2273 -greaterorless;2277 -greateroverequal;2267 -greatersmall;FE65 -gscript;0261 -gstroke;01E5 -guhiragana;3050 -guillemotleft;00AB -guillemotright;00BB -guilsinglleft;2039 -guilsinglright;203A -gukatakana;30B0 -guramusquare;3318 -gysquare;33C9 -h;0068 -haabkhasiancyrillic;04A9 -haaltonearabic;06C1 -habengali;09B9 -hadescendercyrillic;04B3 -hadeva;0939 -hagujarati;0AB9 -hagurmukhi;0A39 -haharabic;062D -hahfinalarabic;FEA2 -hahinitialarabic;FEA3 -hahiragana;306F -hahmedialarabic;FEA4 -haitusquare;332A -hakatakana;30CF -hakatakanahalfwidth;FF8A -halantgurmukhi;0A4D -hamzaarabic;0621 -hamzadammaarabic;0621 064F -hamzadammatanarabic;0621 064C -hamzafathaarabic;0621 064E -hamzafathatanarabic;0621 064B -hamzalowarabic;0621 -hamzalowkasraarabic;0621 0650 -hamzalowkasratanarabic;0621 064D -hamzasukunarabic;0621 0652 -hangulfiller;3164 -hardsigncyrillic;044A -harpoonleftbarbup;21BC -harpoonrightbarbup;21C0 -hasquare;33CA -hatafpatah;05B2 -hatafpatah16;05B2 -hatafpatah23;05B2 -hatafpatah2f;05B2 -hatafpatahhebrew;05B2 -hatafpatahnarrowhebrew;05B2 -hatafpatahquarterhebrew;05B2 -hatafpatahwidehebrew;05B2 -hatafqamats;05B3 -hatafqamats1b;05B3 -hatafqamats28;05B3 -hatafqamats34;05B3 -hatafqamatshebrew;05B3 -hatafqamatsnarrowhebrew;05B3 -hatafqamatsquarterhebrew;05B3 -hatafqamatswidehebrew;05B3 -hatafsegol;05B1 -hatafsegol17;05B1 -hatafsegol24;05B1 -hatafsegol30;05B1 -hatafsegolhebrew;05B1 -hatafsegolnarrowhebrew;05B1 -hatafsegolquarterhebrew;05B1 -hatafsegolwidehebrew;05B1 -hbar;0127 -hbopomofo;310F -hbrevebelow;1E2B -hcedilla;1E29 -hcircle;24D7 -hcircumflex;0125 -hdieresis;1E27 -hdotaccent;1E23 -hdotbelow;1E25 -he;05D4 -heart;2665 -heartsuitblack;2665 -heartsuitwhite;2661 -hedagesh;FB34 -hedageshhebrew;FB34 -hehaltonearabic;06C1 -heharabic;0647 -hehebrew;05D4 -hehfinalaltonearabic;FBA7 -hehfinalalttwoarabic;FEEA -hehfinalarabic;FEEA -hehhamzaabovefinalarabic;FBA5 
-hehhamzaaboveisolatedarabic;FBA4 -hehinitialaltonearabic;FBA8 -hehinitialarabic;FEEB -hehiragana;3078 -hehmedialaltonearabic;FBA9 -hehmedialarabic;FEEC -heiseierasquare;337B -hekatakana;30D8 -hekatakanahalfwidth;FF8D -hekutaarusquare;3336 -henghook;0267 -herutusquare;3339 -het;05D7 -hethebrew;05D7 -hhook;0266 -hhooksuperior;02B1 -hieuhacirclekorean;327B -hieuhaparenkorean;321B -hieuhcirclekorean;326D -hieuhkorean;314E -hieuhparenkorean;320D -hihiragana;3072 -hikatakana;30D2 -hikatakanahalfwidth;FF8B -hiriq;05B4 -hiriq14;05B4 -hiriq21;05B4 -hiriq2d;05B4 -hiriqhebrew;05B4 -hiriqnarrowhebrew;05B4 -hiriqquarterhebrew;05B4 -hiriqwidehebrew;05B4 -hlinebelow;1E96 -hmonospace;FF48 -hoarmenian;0570 -hohipthai;0E2B -hohiragana;307B -hokatakana;30DB -hokatakanahalfwidth;FF8E -holam;05B9 -holam19;05B9 -holam26;05B9 -holam32;05B9 -holamhebrew;05B9 -holamnarrowhebrew;05B9 -holamquarterhebrew;05B9 -holamwidehebrew;05B9 -honokhukthai;0E2E -hookabovecomb;0309 -hookcmb;0309 -hookpalatalizedbelowcmb;0321 -hookretroflexbelowcmb;0322 -hoonsquare;3342 -horicoptic;03E9 -horizontalbar;2015 -horncmb;031B -hotsprings;2668 -house;2302 -hparen;24A3 -hsuperior;02B0 -hturned;0265 -huhiragana;3075 -huiitosquare;3333 -hukatakana;30D5 -hukatakanahalfwidth;FF8C -hungarumlaut;02DD -hungarumlautcmb;030B -hv;0195 -hyphen;002D -hypheninferior;F6E5 -hyphenmonospace;FF0D -hyphensmall;FE63 -hyphensuperior;F6E6 -hyphentwo;2010 -i;0069 -iacute;00ED -iacyrillic;044F -ibengali;0987 -ibopomofo;3127 -ibreve;012D -icaron;01D0 -icircle;24D8 -icircumflex;00EE -icyrillic;0456 -idblgrave;0209 -ideographearthcircle;328F -ideographfirecircle;328B -ideographicallianceparen;323F -ideographiccallparen;323A -ideographiccentrecircle;32A5 -ideographicclose;3006 -ideographiccomma;3001 -ideographiccommaleft;FF64 -ideographiccongratulationparen;3237 -ideographiccorrectcircle;32A3 -ideographicearthparen;322F -ideographicenterpriseparen;323D -ideographicexcellentcircle;329D -ideographicfestivalparen;3240 
-ideographicfinancialcircle;3296 -ideographicfinancialparen;3236 -ideographicfireparen;322B -ideographichaveparen;3232 -ideographichighcircle;32A4 -ideographiciterationmark;3005 -ideographiclaborcircle;3298 -ideographiclaborparen;3238 -ideographicleftcircle;32A7 -ideographiclowcircle;32A6 -ideographicmedicinecircle;32A9 -ideographicmetalparen;322E -ideographicmoonparen;322A -ideographicnameparen;3234 -ideographicperiod;3002 -ideographicprintcircle;329E -ideographicreachparen;3243 -ideographicrepresentparen;3239 -ideographicresourceparen;323E -ideographicrightcircle;32A8 -ideographicsecretcircle;3299 -ideographicselfparen;3242 -ideographicsocietyparen;3233 -ideographicspace;3000 -ideographicspecialparen;3235 -ideographicstockparen;3231 -ideographicstudyparen;323B -ideographicsunparen;3230 -ideographicsuperviseparen;323C -ideographicwaterparen;322C -ideographicwoodparen;322D -ideographiczero;3007 -ideographmetalcircle;328E -ideographmooncircle;328A -ideographnamecircle;3294 -ideographsuncircle;3290 -ideographwatercircle;328C -ideographwoodcircle;328D -ideva;0907 -idieresis;00EF -idieresisacute;1E2F -idieresiscyrillic;04E5 -idotbelow;1ECB -iebrevecyrillic;04D7 -iecyrillic;0435 -ieungacirclekorean;3275 -ieungaparenkorean;3215 -ieungcirclekorean;3267 -ieungkorean;3147 -ieungparenkorean;3207 -igrave;00EC -igujarati;0A87 -igurmukhi;0A07 -ihiragana;3044 -ihookabove;1EC9 -iibengali;0988 -iicyrillic;0438 -iideva;0908 -iigujarati;0A88 -iigurmukhi;0A08 -iimatragurmukhi;0A40 -iinvertedbreve;020B -iishortcyrillic;0439 -iivowelsignbengali;09C0 -iivowelsigndeva;0940 -iivowelsigngujarati;0AC0 -ij;0133 -ikatakana;30A4 -ikatakanahalfwidth;FF72 -ikorean;3163 -ilde;02DC -iluyhebrew;05AC -imacron;012B -imacroncyrillic;04E3 -imageorapproximatelyequal;2253 -imatragurmukhi;0A3F -imonospace;FF49 -increment;2206 -infinity;221E -iniarmenian;056B -integral;222B -integralbottom;2321 -integralbt;2321 -integralex;F8F5 -integraltop;2320 -integraltp;2320 -intersection;2229 -intisquare;3305 
-invbullet;25D8 -invcircle;25D9 -invsmileface;263B -iocyrillic;0451 -iogonek;012F -iota;03B9 -iotadieresis;03CA -iotadieresistonos;0390 -iotalatin;0269 -iotatonos;03AF -iparen;24A4 -irigurmukhi;0A72 -ismallhiragana;3043 -ismallkatakana;30A3 -ismallkatakanahalfwidth;FF68 -issharbengali;09FA -istroke;0268 -isuperior;F6ED -iterationhiragana;309D -iterationkatakana;30FD -itilde;0129 -itildebelow;1E2D -iubopomofo;3129 -iucyrillic;044E -ivowelsignbengali;09BF -ivowelsigndeva;093F -ivowelsigngujarati;0ABF -izhitsacyrillic;0475 -izhitsadblgravecyrillic;0477 -j;006A -jaarmenian;0571 -jabengali;099C -jadeva;091C -jagujarati;0A9C -jagurmukhi;0A1C -jbopomofo;3110 -jcaron;01F0 -jcircle;24D9 -jcircumflex;0135 -jcrossedtail;029D -jdotlessstroke;025F -jecyrillic;0458 -jeemarabic;062C -jeemfinalarabic;FE9E -jeeminitialarabic;FE9F -jeemmedialarabic;FEA0 -jeharabic;0698 -jehfinalarabic;FB8B -jhabengali;099D -jhadeva;091D -jhagujarati;0A9D -jhagurmukhi;0A1D -jheharmenian;057B -jis;3004 -jmonospace;FF4A -jparen;24A5 -jsuperior;02B2 -k;006B -kabashkircyrillic;04A1 -kabengali;0995 -kacute;1E31 -kacyrillic;043A -kadescendercyrillic;049B -kadeva;0915 -kaf;05DB -kafarabic;0643 -kafdagesh;FB3B -kafdageshhebrew;FB3B -kaffinalarabic;FEDA -kafhebrew;05DB -kafinitialarabic;FEDB -kafmedialarabic;FEDC -kafrafehebrew;FB4D -kagujarati;0A95 -kagurmukhi;0A15 -kahiragana;304B -kahookcyrillic;04C4 -kakatakana;30AB -kakatakanahalfwidth;FF76 -kappa;03BA -kappasymbolgreek;03F0 -kapyeounmieumkorean;3171 -kapyeounphieuphkorean;3184 -kapyeounpieupkorean;3178 -kapyeounssangpieupkorean;3179 -karoriisquare;330D -kashidaautoarabic;0640 -kashidaautonosidebearingarabic;0640 -kasmallkatakana;30F5 -kasquare;3384 -kasraarabic;0650 -kasratanarabic;064D -kastrokecyrillic;049F -katahiraprolongmarkhalfwidth;FF70 -kaverticalstrokecyrillic;049D -kbopomofo;310E -kcalsquare;3389 -kcaron;01E9 -kcedilla;0137 -kcircle;24DA -kcommaaccent;0137 -kdotbelow;1E33 -keharmenian;0584 -kehiragana;3051 -kekatakana;30B1 
-kekatakanahalfwidth;FF79 -kenarmenian;056F -kesmallkatakana;30F6 -kgreenlandic;0138 -khabengali;0996 -khacyrillic;0445 -khadeva;0916 -khagujarati;0A96 -khagurmukhi;0A16 -khaharabic;062E -khahfinalarabic;FEA6 -khahinitialarabic;FEA7 -khahmedialarabic;FEA8 -kheicoptic;03E7 -khhadeva;0959 -khhagurmukhi;0A59 -khieukhacirclekorean;3278 -khieukhaparenkorean;3218 -khieukhcirclekorean;326A -khieukhkorean;314B -khieukhparenkorean;320A -khokhaithai;0E02 -khokhonthai;0E05 -khokhuatthai;0E03 -khokhwaithai;0E04 -khomutthai;0E5B -khook;0199 -khorakhangthai;0E06 -khzsquare;3391 -kihiragana;304D -kikatakana;30AD -kikatakanahalfwidth;FF77 -kiroguramusquare;3315 -kiromeetorusquare;3316 -kirosquare;3314 -kiyeokacirclekorean;326E -kiyeokaparenkorean;320E -kiyeokcirclekorean;3260 -kiyeokkorean;3131 -kiyeokparenkorean;3200 -kiyeoksioskorean;3133 -kjecyrillic;045C -klinebelow;1E35 -klsquare;3398 -kmcubedsquare;33A6 -kmonospace;FF4B -kmsquaredsquare;33A2 -kohiragana;3053 -kohmsquare;33C0 -kokaithai;0E01 -kokatakana;30B3 -kokatakanahalfwidth;FF7A -kooposquare;331E -koppacyrillic;0481 -koreanstandardsymbol;327F -koroniscmb;0343 -kparen;24A6 -kpasquare;33AA -ksicyrillic;046F -ktsquare;33CF -kturned;029E -kuhiragana;304F -kukatakana;30AF -kukatakanahalfwidth;FF78 -kvsquare;33B8 -kwsquare;33BE -l;006C -labengali;09B2 -lacute;013A -ladeva;0932 -lagujarati;0AB2 -lagurmukhi;0A32 -lakkhangyaothai;0E45 -lamaleffinalarabic;FEFC -lamalefhamzaabovefinalarabic;FEF8 -lamalefhamzaaboveisolatedarabic;FEF7 -lamalefhamzabelowfinalarabic;FEFA -lamalefhamzabelowisolatedarabic;FEF9 -lamalefisolatedarabic;FEFB -lamalefmaddaabovefinalarabic;FEF6 -lamalefmaddaaboveisolatedarabic;FEF5 -lamarabic;0644 -lambda;03BB -lambdastroke;019B -lamed;05DC -lameddagesh;FB3C -lameddageshhebrew;FB3C -lamedhebrew;05DC -lamedholam;05DC 05B9 -lamedholamdagesh;05DC 05B9 05BC -lamedholamdageshhebrew;05DC 05B9 05BC -lamedholamhebrew;05DC 05B9 -lamfinalarabic;FEDE -lamhahinitialarabic;FCCA -laminitialarabic;FEDF 
-lamjeeminitialarabic;FCC9 -lamkhahinitialarabic;FCCB -lamlamhehisolatedarabic;FDF2 -lammedialarabic;FEE0 -lammeemhahinitialarabic;FD88 -lammeeminitialarabic;FCCC -lammeemjeeminitialarabic;FEDF FEE4 FEA0 -lammeemkhahinitialarabic;FEDF FEE4 FEA8 -largecircle;25EF -lbar;019A -lbelt;026C -lbopomofo;310C -lcaron;013E -lcedilla;013C -lcircle;24DB -lcircumflexbelow;1E3D -lcommaaccent;013C -ldot;0140 -ldotaccent;0140 -ldotbelow;1E37 -ldotbelowmacron;1E39 -leftangleabovecmb;031A -lefttackbelowcmb;0318 -less;003C -lessequal;2264 -lessequalorgreater;22DA -lessmonospace;FF1C -lessorequivalent;2272 -lessorgreater;2276 -lessoverequal;2266 -lesssmall;FE64 -lezh;026E -lfblock;258C -lhookretroflex;026D -lira;20A4 -liwnarmenian;056C -lj;01C9 -ljecyrillic;0459 -ll;F6C0 -lladeva;0933 -llagujarati;0AB3 -llinebelow;1E3B -llladeva;0934 -llvocalicbengali;09E1 -llvocalicdeva;0961 -llvocalicvowelsignbengali;09E3 -llvocalicvowelsigndeva;0963 -lmiddletilde;026B -lmonospace;FF4C -lmsquare;33D0 -lochulathai;0E2C -logicaland;2227 -logicalnot;00AC -logicalnotreversed;2310 -logicalor;2228 -lolingthai;0E25 -longs;017F -lowlinecenterline;FE4E -lowlinecmb;0332 -lowlinedashed;FE4D -lozenge;25CA -lparen;24A7 -lslash;0142 -lsquare;2113 -lsuperior;F6EE -ltshade;2591 -luthai;0E26 -lvocalicbengali;098C -lvocalicdeva;090C -lvocalicvowelsignbengali;09E2 -lvocalicvowelsigndeva;0962 -lxsquare;33D3 -m;006D -mabengali;09AE -macron;00AF -macronbelowcmb;0331 -macroncmb;0304 -macronlowmod;02CD -macronmonospace;FFE3 -macute;1E3F -madeva;092E -magujarati;0AAE -magurmukhi;0A2E -mahapakhhebrew;05A4 -mahapakhlefthebrew;05A4 -mahiragana;307E -maichattawalowleftthai;F895 -maichattawalowrightthai;F894 -maichattawathai;0E4B -maichattawaupperleftthai;F893 -maieklowleftthai;F88C -maieklowrightthai;F88B -maiekthai;0E48 -maiekupperleftthai;F88A -maihanakatleftthai;F884 -maihanakatthai;0E31 -maitaikhuleftthai;F889 -maitaikhuthai;0E47 -maitholowleftthai;F88F -maitholowrightthai;F88E -maithothai;0E49 -maithoupperleftthai;F88D 
-maitrilowleftthai;F892 -maitrilowrightthai;F891 -maitrithai;0E4A -maitriupperleftthai;F890 -maiyamokthai;0E46 -makatakana;30DE -makatakanahalfwidth;FF8F -male;2642 -mansyonsquare;3347 -maqafhebrew;05BE -mars;2642 -masoracirclehebrew;05AF -masquare;3383 -mbopomofo;3107 -mbsquare;33D4 -mcircle;24DC -mcubedsquare;33A5 -mdotaccent;1E41 -mdotbelow;1E43 -meemarabic;0645 -meemfinalarabic;FEE2 -meeminitialarabic;FEE3 -meemmedialarabic;FEE4 -meemmeeminitialarabic;FCD1 -meemmeemisolatedarabic;FC48 -meetorusquare;334D -mehiragana;3081 -meizierasquare;337E -mekatakana;30E1 -mekatakanahalfwidth;FF92 -mem;05DE -memdagesh;FB3E -memdageshhebrew;FB3E -memhebrew;05DE -menarmenian;0574 -merkhahebrew;05A5 -merkhakefulahebrew;05A6 -merkhakefulalefthebrew;05A6 -merkhalefthebrew;05A5 -mhook;0271 -mhzsquare;3392 -middledotkatakanahalfwidth;FF65 -middot;00B7 -mieumacirclekorean;3272 -mieumaparenkorean;3212 -mieumcirclekorean;3264 -mieumkorean;3141 -mieumpansioskorean;3170 -mieumparenkorean;3204 -mieumpieupkorean;316E -mieumsioskorean;316F -mihiragana;307F -mikatakana;30DF -mikatakanahalfwidth;FF90 -minus;2212 -minusbelowcmb;0320 -minuscircle;2296 -minusmod;02D7 -minusplus;2213 -minute;2032 -miribaarusquare;334A -mirisquare;3349 -mlonglegturned;0270 -mlsquare;3396 -mmcubedsquare;33A3 -mmonospace;FF4D -mmsquaredsquare;339F -mohiragana;3082 -mohmsquare;33C1 -mokatakana;30E2 -mokatakanahalfwidth;FF93 -molsquare;33D6 -momathai;0E21 -moverssquare;33A7 -moverssquaredsquare;33A8 -mparen;24A8 -mpasquare;33AB -mssquare;33B3 -msuperior;F6EF -mturned;026F -mu;00B5 -mu1;00B5 -muasquare;3382 -muchgreater;226B -muchless;226A -mufsquare;338C -mugreek;03BC -mugsquare;338D -muhiragana;3080 -mukatakana;30E0 -mukatakanahalfwidth;FF91 -mulsquare;3395 -multiply;00D7 -mumsquare;339B -munahhebrew;05A3 -munahlefthebrew;05A3 -musicalnote;266A -musicalnotedbl;266B -musicflatsign;266D -musicsharpsign;266F -mussquare;33B2 -muvsquare;33B6 -muwsquare;33BC -mvmegasquare;33B9 -mvsquare;33B7 -mwmegasquare;33BF 
-mwsquare;33BD -n;006E -nabengali;09A8 -nabla;2207 -nacute;0144 -nadeva;0928 -nagujarati;0AA8 -nagurmukhi;0A28 -nahiragana;306A -nakatakana;30CA -nakatakanahalfwidth;FF85 -napostrophe;0149 -nasquare;3381 -nbopomofo;310B -nbspace;00A0 -ncaron;0148 -ncedilla;0146 -ncircle;24DD -ncircumflexbelow;1E4B -ncommaaccent;0146 -ndotaccent;1E45 -ndotbelow;1E47 -nehiragana;306D -nekatakana;30CD -nekatakanahalfwidth;FF88 -newsheqelsign;20AA -nfsquare;338B -ngabengali;0999 -ngadeva;0919 -ngagujarati;0A99 -ngagurmukhi;0A19 -ngonguthai;0E07 -nhiragana;3093 -nhookleft;0272 -nhookretroflex;0273 -nieunacirclekorean;326F -nieunaparenkorean;320F -nieuncieuckorean;3135 -nieuncirclekorean;3261 -nieunhieuhkorean;3136 -nieunkorean;3134 -nieunpansioskorean;3168 -nieunparenkorean;3201 -nieunsioskorean;3167 -nieuntikeutkorean;3166 -nihiragana;306B -nikatakana;30CB -nikatakanahalfwidth;FF86 -nikhahitleftthai;F899 -nikhahitthai;0E4D -nine;0039 -ninearabic;0669 -ninebengali;09EF -ninecircle;2468 -ninecircleinversesansserif;2792 -ninedeva;096F -ninegujarati;0AEF -ninegurmukhi;0A6F -ninehackarabic;0669 -ninehangzhou;3029 -nineideographicparen;3228 -nineinferior;2089 -ninemonospace;FF19 -nineoldstyle;F739 -nineparen;247C -nineperiod;2490 -ninepersian;06F9 -nineroman;2178 -ninesuperior;2079 -nineteencircle;2472 -nineteenparen;2486 -nineteenperiod;249A -ninethai;0E59 -nj;01CC -njecyrillic;045A -nkatakana;30F3 -nkatakanahalfwidth;FF9D -nlegrightlong;019E -nlinebelow;1E49 -nmonospace;FF4E -nmsquare;339A -nnabengali;09A3 -nnadeva;0923 -nnagujarati;0AA3 -nnagurmukhi;0A23 -nnnadeva;0929 -nohiragana;306E -nokatakana;30CE -nokatakanahalfwidth;FF89 -nonbreakingspace;00A0 -nonenthai;0E13 -nonuthai;0E19 -noonarabic;0646 -noonfinalarabic;FEE6 -noonghunnaarabic;06BA -noonghunnafinalarabic;FB9F -noonhehinitialarabic;FEE7 FEEC -nooninitialarabic;FEE7 -noonjeeminitialarabic;FCD2 -noonjeemisolatedarabic;FC4B -noonmedialarabic;FEE8 -noonmeeminitialarabic;FCD5 -noonmeemisolatedarabic;FC4E -noonnoonfinalarabic;FC8D 
-notcontains;220C -notelement;2209 -notelementof;2209 -notequal;2260 -notgreater;226F -notgreaternorequal;2271 -notgreaternorless;2279 -notidentical;2262 -notless;226E -notlessnorequal;2270 -notparallel;2226 -notprecedes;2280 -notsubset;2284 -notsucceeds;2281 -notsuperset;2285 -nowarmenian;0576 -nparen;24A9 -nssquare;33B1 -nsuperior;207F -ntilde;00F1 -nu;03BD -nuhiragana;306C -nukatakana;30CC -nukatakanahalfwidth;FF87 -nuktabengali;09BC -nuktadeva;093C -nuktagujarati;0ABC -nuktagurmukhi;0A3C -numbersign;0023 -numbersignmonospace;FF03 -numbersignsmall;FE5F -numeralsigngreek;0374 -numeralsignlowergreek;0375 -numero;2116 -nun;05E0 -nundagesh;FB40 -nundageshhebrew;FB40 -nunhebrew;05E0 -nvsquare;33B5 -nwsquare;33BB -nyabengali;099E -nyadeva;091E -nyagujarati;0A9E -nyagurmukhi;0A1E -o;006F -oacute;00F3 -oangthai;0E2D -obarred;0275 -obarredcyrillic;04E9 -obarreddieresiscyrillic;04EB -obengali;0993 -obopomofo;311B -obreve;014F -ocandradeva;0911 -ocandragujarati;0A91 -ocandravowelsigndeva;0949 -ocandravowelsigngujarati;0AC9 -ocaron;01D2 -ocircle;24DE -ocircumflex;00F4 -ocircumflexacute;1ED1 -ocircumflexdotbelow;1ED9 -ocircumflexgrave;1ED3 -ocircumflexhookabove;1ED5 -ocircumflextilde;1ED7 -ocyrillic;043E -odblacute;0151 -odblgrave;020D -odeva;0913 -odieresis;00F6 -odieresiscyrillic;04E7 -odotbelow;1ECD -oe;0153 -oekorean;315A -ogonek;02DB -ogonekcmb;0328 -ograve;00F2 -ogujarati;0A93 -oharmenian;0585 -ohiragana;304A -ohookabove;1ECF -ohorn;01A1 -ohornacute;1EDB -ohorndotbelow;1EE3 -ohorngrave;1EDD -ohornhookabove;1EDF -ohorntilde;1EE1 -ohungarumlaut;0151 -oi;01A3 -oinvertedbreve;020F -okatakana;30AA -okatakanahalfwidth;FF75 -okorean;3157 -olehebrew;05AB -omacron;014D -omacronacute;1E53 -omacrongrave;1E51 -omdeva;0950 -omega;03C9 -omega1;03D6 -omegacyrillic;0461 -omegalatinclosed;0277 -omegaroundcyrillic;047B -omegatitlocyrillic;047D -omegatonos;03CE -omgujarati;0AD0 -omicron;03BF -omicrontonos;03CC -omonospace;FF4F -one;0031 -onearabic;0661 -onebengali;09E7 -onecircle;2460 
-onecircleinversesansserif;278A -onedeva;0967 -onedotenleader;2024 -oneeighth;215B -onefitted;F6DC -onegujarati;0AE7 -onegurmukhi;0A67 -onehackarabic;0661 -onehalf;00BD -onehangzhou;3021 -oneideographicparen;3220 -oneinferior;2081 -onemonospace;FF11 -onenumeratorbengali;09F4 -oneoldstyle;F731 -oneparen;2474 -oneperiod;2488 -onepersian;06F1 -onequarter;00BC -oneroman;2170 -onesuperior;00B9 -onethai;0E51 -onethird;2153 -oogonek;01EB -oogonekmacron;01ED -oogurmukhi;0A13 -oomatragurmukhi;0A4B -oopen;0254 -oparen;24AA -openbullet;25E6 -option;2325 -ordfeminine;00AA -ordmasculine;00BA -orthogonal;221F -oshortdeva;0912 -oshortvowelsigndeva;094A -oslash;00F8 -oslashacute;01FF -osmallhiragana;3049 -osmallkatakana;30A9 -osmallkatakanahalfwidth;FF6B -ostrokeacute;01FF -osuperior;F6F0 -otcyrillic;047F -otilde;00F5 -otildeacute;1E4D -otildedieresis;1E4F -oubopomofo;3121 -overline;203E -overlinecenterline;FE4A -overlinecmb;0305 -overlinedashed;FE49 -overlinedblwavy;FE4C -overlinewavy;FE4B -overscore;00AF -ovowelsignbengali;09CB -ovowelsigndeva;094B -ovowelsigngujarati;0ACB -p;0070 -paampssquare;3380 -paasentosquare;332B -pabengali;09AA -pacute;1E55 -padeva;092A -pagedown;21DF -pageup;21DE -pagujarati;0AAA -pagurmukhi;0A2A -pahiragana;3071 -paiyannoithai;0E2F -pakatakana;30D1 -palatalizationcyrilliccmb;0484 -palochkacyrillic;04C0 -pansioskorean;317F -paragraph;00B6 -parallel;2225 -parenleft;0028 -parenleftaltonearabic;FD3E -parenleftbt;F8ED -parenleftex;F8EC -parenleftinferior;208D -parenleftmonospace;FF08 -parenleftsmall;FE59 -parenleftsuperior;207D -parenlefttp;F8EB -parenleftvertical;FE35 -parenright;0029 -parenrightaltonearabic;FD3F -parenrightbt;F8F8 -parenrightex;F8F7 -parenrightinferior;208E -parenrightmonospace;FF09 -parenrightsmall;FE5A -parenrightsuperior;207E -parenrighttp;F8F6 -parenrightvertical;FE36 -partialdiff;2202 -paseqhebrew;05C0 -pashtahebrew;0599 -pasquare;33A9 -patah;05B7 -patah11;05B7 -patah1d;05B7 -patah2a;05B7 -patahhebrew;05B7 -patahnarrowhebrew;05B7 
-patahquarterhebrew;05B7 -patahwidehebrew;05B7 -pazerhebrew;05A1 -pbopomofo;3106 -pcircle;24DF -pdotaccent;1E57 -pe;05E4 -pecyrillic;043F -pedagesh;FB44 -pedageshhebrew;FB44 -peezisquare;333B -pefinaldageshhebrew;FB43 -peharabic;067E -peharmenian;057A -pehebrew;05E4 -pehfinalarabic;FB57 -pehinitialarabic;FB58 -pehiragana;307A -pehmedialarabic;FB59 -pekatakana;30DA -pemiddlehookcyrillic;04A7 -perafehebrew;FB4E -percent;0025 -percentarabic;066A -percentmonospace;FF05 -percentsmall;FE6A -period;002E -periodarmenian;0589 -periodcentered;00B7 -periodhalfwidth;FF61 -periodinferior;F6E7 -periodmonospace;FF0E -periodsmall;FE52 -periodsuperior;F6E8 -perispomenigreekcmb;0342 -perpendicular;22A5 -perthousand;2030 -peseta;20A7 -pfsquare;338A -phabengali;09AB -phadeva;092B -phagujarati;0AAB -phagurmukhi;0A2B -phi;03C6 -phi1;03D5 -phieuphacirclekorean;327A -phieuphaparenkorean;321A -phieuphcirclekorean;326C -phieuphkorean;314D -phieuphparenkorean;320C -philatin;0278 -phinthuthai;0E3A -phisymbolgreek;03D5 -phook;01A5 -phophanthai;0E1E -phophungthai;0E1C -phosamphaothai;0E20 -pi;03C0 -pieupacirclekorean;3273 -pieupaparenkorean;3213 -pieupcieuckorean;3176 -pieupcirclekorean;3265 -pieupkiyeokkorean;3172 -pieupkorean;3142 -pieupparenkorean;3205 -pieupsioskiyeokkorean;3174 -pieupsioskorean;3144 -pieupsiostikeutkorean;3175 -pieupthieuthkorean;3177 -pieuptikeutkorean;3173 -pihiragana;3074 -pikatakana;30D4 -pisymbolgreek;03D6 -piwrarmenian;0583 -plus;002B -plusbelowcmb;031F -pluscircle;2295 -plusminus;00B1 -plusmod;02D6 -plusmonospace;FF0B -plussmall;FE62 -plussuperior;207A -pmonospace;FF50 -pmsquare;33D8 -pohiragana;307D -pointingindexdownwhite;261F -pointingindexleftwhite;261C -pointingindexrightwhite;261E -pointingindexupwhite;261D -pokatakana;30DD -poplathai;0E1B -postalmark;3012 -postalmarkface;3020 -pparen;24AB -precedes;227A -prescription;211E -primemod;02B9 -primereversed;2035 -product;220F -projective;2305 -prolongedkana;30FC -propellor;2318 -propersubset;2282 
-propersuperset;2283 -proportion;2237 -proportional;221D -psi;03C8 -psicyrillic;0471 -psilipneumatacyrilliccmb;0486 -pssquare;33B0 -puhiragana;3077 -pukatakana;30D7 -pvsquare;33B4 -pwsquare;33BA -q;0071 -qadeva;0958 -qadmahebrew;05A8 -qafarabic;0642 -qaffinalarabic;FED6 -qafinitialarabic;FED7 -qafmedialarabic;FED8 -qamats;05B8 -qamats10;05B8 -qamats1a;05B8 -qamats1c;05B8 -qamats27;05B8 -qamats29;05B8 -qamats33;05B8 -qamatsde;05B8 -qamatshebrew;05B8 -qamatsnarrowhebrew;05B8 -qamatsqatanhebrew;05B8 -qamatsqatannarrowhebrew;05B8 -qamatsqatanquarterhebrew;05B8 -qamatsqatanwidehebrew;05B8 -qamatsquarterhebrew;05B8 -qamatswidehebrew;05B8 -qarneyparahebrew;059F -qbopomofo;3111 -qcircle;24E0 -qhook;02A0 -qmonospace;FF51 -qof;05E7 -qofdagesh;FB47 -qofdageshhebrew;FB47 -qofhatafpatah;05E7 05B2 -qofhatafpatahhebrew;05E7 05B2 -qofhatafsegol;05E7 05B1 -qofhatafsegolhebrew;05E7 05B1 -qofhebrew;05E7 -qofhiriq;05E7 05B4 -qofhiriqhebrew;05E7 05B4 -qofholam;05E7 05B9 -qofholamhebrew;05E7 05B9 -qofpatah;05E7 05B7 -qofpatahhebrew;05E7 05B7 -qofqamats;05E7 05B8 -qofqamatshebrew;05E7 05B8 -qofqubuts;05E7 05BB -qofqubutshebrew;05E7 05BB -qofsegol;05E7 05B6 -qofsegolhebrew;05E7 05B6 -qofsheva;05E7 05B0 -qofshevahebrew;05E7 05B0 -qoftsere;05E7 05B5 -qoftserehebrew;05E7 05B5 -qparen;24AC -quarternote;2669 -qubuts;05BB -qubuts18;05BB -qubuts25;05BB -qubuts31;05BB -qubutshebrew;05BB -qubutsnarrowhebrew;05BB -qubutsquarterhebrew;05BB -qubutswidehebrew;05BB -question;003F -questionarabic;061F -questionarmenian;055E -questiondown;00BF -questiondownsmall;F7BF -questiongreek;037E -questionmonospace;FF1F -questionsmall;F73F -quotedbl;0022 -quotedblbase;201E -quotedblleft;201C -quotedblmonospace;FF02 -quotedblprime;301E -quotedblprimereversed;301D -quotedblright;201D -quoteleft;2018 -quoteleftreversed;201B -quotereversed;201B -quoteright;2019 -quoterightn;0149 -quotesinglbase;201A -quotesingle;0027 -quotesinglemonospace;FF07 -r;0072 -raarmenian;057C -rabengali;09B0 -racute;0155 -radeva;0930 
-radical;221A -radicalex;F8E5 -radoverssquare;33AE -radoverssquaredsquare;33AF -radsquare;33AD -rafe;05BF -rafehebrew;05BF -ragujarati;0AB0 -ragurmukhi;0A30 -rahiragana;3089 -rakatakana;30E9 -rakatakanahalfwidth;FF97 -ralowerdiagonalbengali;09F1 -ramiddlediagonalbengali;09F0 -ramshorn;0264 -ratio;2236 -rbopomofo;3116 -rcaron;0159 -rcedilla;0157 -rcircle;24E1 -rcommaaccent;0157 -rdblgrave;0211 -rdotaccent;1E59 -rdotbelow;1E5B -rdotbelowmacron;1E5D -referencemark;203B -reflexsubset;2286 -reflexsuperset;2287 -registered;00AE -registersans;F8E8 -registerserif;F6DA -reharabic;0631 -reharmenian;0580 -rehfinalarabic;FEAE -rehiragana;308C -rehyehaleflamarabic;0631 FEF3 FE8E 0644 -rekatakana;30EC -rekatakanahalfwidth;FF9A -resh;05E8 -reshdageshhebrew;FB48 -reshhatafpatah;05E8 05B2 -reshhatafpatahhebrew;05E8 05B2 -reshhatafsegol;05E8 05B1 -reshhatafsegolhebrew;05E8 05B1 -reshhebrew;05E8 -reshhiriq;05E8 05B4 -reshhiriqhebrew;05E8 05B4 -reshholam;05E8 05B9 -reshholamhebrew;05E8 05B9 -reshpatah;05E8 05B7 -reshpatahhebrew;05E8 05B7 -reshqamats;05E8 05B8 -reshqamatshebrew;05E8 05B8 -reshqubuts;05E8 05BB -reshqubutshebrew;05E8 05BB -reshsegol;05E8 05B6 -reshsegolhebrew;05E8 05B6 -reshsheva;05E8 05B0 -reshshevahebrew;05E8 05B0 -reshtsere;05E8 05B5 -reshtserehebrew;05E8 05B5 -reversedtilde;223D -reviahebrew;0597 -reviamugrashhebrew;0597 -revlogicalnot;2310 -rfishhook;027E -rfishhookreversed;027F -rhabengali;09DD -rhadeva;095D -rho;03C1 -rhook;027D -rhookturned;027B -rhookturnedsuperior;02B5 -rhosymbolgreek;03F1 -rhotichookmod;02DE -rieulacirclekorean;3271 -rieulaparenkorean;3211 -rieulcirclekorean;3263 -rieulhieuhkorean;3140 -rieulkiyeokkorean;313A -rieulkiyeoksioskorean;3169 -rieulkorean;3139 -rieulmieumkorean;313B -rieulpansioskorean;316C -rieulparenkorean;3203 -rieulphieuphkorean;313F -rieulpieupkorean;313C -rieulpieupsioskorean;316B -rieulsioskorean;313D -rieulthieuthkorean;313E -rieultikeutkorean;316A -rieulyeorinhieuhkorean;316D -rightangle;221F -righttackbelowcmb;0319 
-righttriangle;22BF -rihiragana;308A -rikatakana;30EA -rikatakanahalfwidth;FF98 -ring;02DA -ringbelowcmb;0325 -ringcmb;030A -ringhalfleft;02BF -ringhalfleftarmenian;0559 -ringhalfleftbelowcmb;031C -ringhalfleftcentered;02D3 -ringhalfright;02BE -ringhalfrightbelowcmb;0339 -ringhalfrightcentered;02D2 -rinvertedbreve;0213 -rittorusquare;3351 -rlinebelow;1E5F -rlongleg;027C -rlonglegturned;027A -rmonospace;FF52 -rohiragana;308D -rokatakana;30ED -rokatakanahalfwidth;FF9B -roruathai;0E23 -rparen;24AD -rrabengali;09DC -rradeva;0931 -rragurmukhi;0A5C -rreharabic;0691 -rrehfinalarabic;FB8D -rrvocalicbengali;09E0 -rrvocalicdeva;0960 -rrvocalicgujarati;0AE0 -rrvocalicvowelsignbengali;09C4 -rrvocalicvowelsigndeva;0944 -rrvocalicvowelsigngujarati;0AC4 -rsuperior;F6F1 -rtblock;2590 -rturned;0279 -rturnedsuperior;02B4 -ruhiragana;308B -rukatakana;30EB -rukatakanahalfwidth;FF99 -rupeemarkbengali;09F2 -rupeesignbengali;09F3 -rupiah;F6DD -ruthai;0E24 -rvocalicbengali;098B -rvocalicdeva;090B -rvocalicgujarati;0A8B -rvocalicvowelsignbengali;09C3 -rvocalicvowelsigndeva;0943 -rvocalicvowelsigngujarati;0AC3 -s;0073 -sabengali;09B8 -sacute;015B -sacutedotaccent;1E65 -sadarabic;0635 -sadeva;0938 -sadfinalarabic;FEBA -sadinitialarabic;FEBB -sadmedialarabic;FEBC -sagujarati;0AB8 -sagurmukhi;0A38 -sahiragana;3055 -sakatakana;30B5 -sakatakanahalfwidth;FF7B -sallallahoualayhewasallamarabic;FDFA -samekh;05E1 -samekhdagesh;FB41 -samekhdageshhebrew;FB41 -samekhhebrew;05E1 -saraaathai;0E32 -saraaethai;0E41 -saraaimaimalaithai;0E44 -saraaimaimuanthai;0E43 -saraamthai;0E33 -saraathai;0E30 -saraethai;0E40 -saraiileftthai;F886 -saraiithai;0E35 -saraileftthai;F885 -saraithai;0E34 -saraothai;0E42 -saraueeleftthai;F888 -saraueethai;0E37 -saraueleftthai;F887 -sarauethai;0E36 -sarauthai;0E38 -sarauuthai;0E39 -sbopomofo;3119 -scaron;0161 -scarondotaccent;1E67 -scedilla;015F -schwa;0259 -schwacyrillic;04D9 -schwadieresiscyrillic;04DB -schwahook;025A -scircle;24E2 -scircumflex;015D -scommaaccent;0219 
-sdotaccent;1E61 -sdotbelow;1E63 -sdotbelowdotaccent;1E69 -seagullbelowcmb;033C -second;2033 -secondtonechinese;02CA -section;00A7 -seenarabic;0633 -seenfinalarabic;FEB2 -seeninitialarabic;FEB3 -seenmedialarabic;FEB4 -segol;05B6 -segol13;05B6 -segol1f;05B6 -segol2c;05B6 -segolhebrew;05B6 -segolnarrowhebrew;05B6 -segolquarterhebrew;05B6 -segoltahebrew;0592 -segolwidehebrew;05B6 -seharmenian;057D -sehiragana;305B -sekatakana;30BB -sekatakanahalfwidth;FF7E -semicolon;003B -semicolonarabic;061B -semicolonmonospace;FF1B -semicolonsmall;FE54 -semivoicedmarkkana;309C -semivoicedmarkkanahalfwidth;FF9F -sentisquare;3322 -sentosquare;3323 -seven;0037 -sevenarabic;0667 -sevenbengali;09ED -sevencircle;2466 -sevencircleinversesansserif;2790 -sevendeva;096D -seveneighths;215E -sevengujarati;0AED -sevengurmukhi;0A6D -sevenhackarabic;0667 -sevenhangzhou;3027 -sevenideographicparen;3226 -seveninferior;2087 -sevenmonospace;FF17 -sevenoldstyle;F737 -sevenparen;247A -sevenperiod;248E -sevenpersian;06F7 -sevenroman;2176 -sevensuperior;2077 -seventeencircle;2470 -seventeenparen;2484 -seventeenperiod;2498 -seventhai;0E57 -sfthyphen;00AD -shaarmenian;0577 -shabengali;09B6 -shacyrillic;0448 -shaddaarabic;0651 -shaddadammaarabic;FC61 -shaddadammatanarabic;FC5E -shaddafathaarabic;FC60 -shaddafathatanarabic;0651 064B -shaddakasraarabic;FC62 -shaddakasratanarabic;FC5F -shade;2592 -shadedark;2593 -shadelight;2591 -shademedium;2592 -shadeva;0936 -shagujarati;0AB6 -shagurmukhi;0A36 -shalshelethebrew;0593 -shbopomofo;3115 -shchacyrillic;0449 -sheenarabic;0634 -sheenfinalarabic;FEB6 -sheeninitialarabic;FEB7 -sheenmedialarabic;FEB8 -sheicoptic;03E3 -sheqel;20AA -sheqelhebrew;20AA -sheva;05B0 -sheva115;05B0 -sheva15;05B0 -sheva22;05B0 -sheva2e;05B0 -shevahebrew;05B0 -shevanarrowhebrew;05B0 -shevaquarterhebrew;05B0 -shevawidehebrew;05B0 -shhacyrillic;04BB -shimacoptic;03ED -shin;05E9 -shindagesh;FB49 -shindageshhebrew;FB49 -shindageshshindot;FB2C -shindageshshindothebrew;FB2C -shindageshsindot;FB2D 
-shindageshsindothebrew;FB2D -shindothebrew;05C1 -shinhebrew;05E9 -shinshindot;FB2A -shinshindothebrew;FB2A -shinsindot;FB2B -shinsindothebrew;FB2B -shook;0282 -sigma;03C3 -sigma1;03C2 -sigmafinal;03C2 -sigmalunatesymbolgreek;03F2 -sihiragana;3057 -sikatakana;30B7 -sikatakanahalfwidth;FF7C -siluqhebrew;05BD -siluqlefthebrew;05BD -similar;223C -sindothebrew;05C2 -siosacirclekorean;3274 -siosaparenkorean;3214 -sioscieuckorean;317E -sioscirclekorean;3266 -sioskiyeokkorean;317A -sioskorean;3145 -siosnieunkorean;317B -siosparenkorean;3206 -siospieupkorean;317D -siostikeutkorean;317C -six;0036 -sixarabic;0666 -sixbengali;09EC -sixcircle;2465 -sixcircleinversesansserif;278F -sixdeva;096C -sixgujarati;0AEC -sixgurmukhi;0A6C -sixhackarabic;0666 -sixhangzhou;3026 -sixideographicparen;3225 -sixinferior;2086 -sixmonospace;FF16 -sixoldstyle;F736 -sixparen;2479 -sixperiod;248D -sixpersian;06F6 -sixroman;2175 -sixsuperior;2076 -sixteencircle;246F -sixteencurrencydenominatorbengali;09F9 -sixteenparen;2483 -sixteenperiod;2497 -sixthai;0E56 -slash;002F -slashmonospace;FF0F -slong;017F -slongdotaccent;1E9B -smileface;263A -smonospace;FF53 -sofpasuqhebrew;05C3 -softhyphen;00AD -softsigncyrillic;044C -sohiragana;305D -sokatakana;30BD -sokatakanahalfwidth;FF7F -soliduslongoverlaycmb;0338 -solidusshortoverlaycmb;0337 -sorusithai;0E29 -sosalathai;0E28 -sosothai;0E0B -sosuathai;0E2A -space;0020 -spacehackarabic;0020 -spade;2660 -spadesuitblack;2660 -spadesuitwhite;2664 -sparen;24AE -squarebelowcmb;033B -squarecc;33C4 -squarecm;339D -squarediagonalcrosshatchfill;25A9 -squarehorizontalfill;25A4 -squarekg;338F -squarekm;339E -squarekmcapital;33CE -squareln;33D1 -squarelog;33D2 -squaremg;338E -squaremil;33D5 -squaremm;339C -squaremsquared;33A1 -squareorthogonalcrosshatchfill;25A6 -squareupperlefttolowerrightfill;25A7 -squareupperrighttolowerleftfill;25A8 -squareverticalfill;25A5 -squarewhitewithsmallblack;25A3 -srsquare;33DB -ssabengali;09B7 -ssadeva;0937 -ssagujarati;0AB7 
-ssangcieuckorean;3149 -ssanghieuhkorean;3185 -ssangieungkorean;3180 -ssangkiyeokkorean;3132 -ssangnieunkorean;3165 -ssangpieupkorean;3143 -ssangsioskorean;3146 -ssangtikeutkorean;3138 -ssuperior;F6F2 -sterling;00A3 -sterlingmonospace;FFE1 -strokelongoverlaycmb;0336 -strokeshortoverlaycmb;0335 -subset;2282 -subsetnotequal;228A -subsetorequal;2286 -succeeds;227B -suchthat;220B -suhiragana;3059 -sukatakana;30B9 -sukatakanahalfwidth;FF7D -sukunarabic;0652 -summation;2211 -sun;263C -superset;2283 -supersetnotequal;228B -supersetorequal;2287 -svsquare;33DC -syouwaerasquare;337C -t;0074 -tabengali;09A4 -tackdown;22A4 -tackleft;22A3 -tadeva;0924 -tagujarati;0AA4 -tagurmukhi;0A24 -taharabic;0637 -tahfinalarabic;FEC2 -tahinitialarabic;FEC3 -tahiragana;305F -tahmedialarabic;FEC4 -taisyouerasquare;337D -takatakana;30BF -takatakanahalfwidth;FF80 -tatweelarabic;0640 -tau;03C4 -tav;05EA -tavdages;FB4A -tavdagesh;FB4A -tavdageshhebrew;FB4A -tavhebrew;05EA -tbar;0167 -tbopomofo;310A -tcaron;0165 -tccurl;02A8 -tcedilla;0163 -tcheharabic;0686 -tchehfinalarabic;FB7B -tchehinitialarabic;FB7C -tchehmedialarabic;FB7D -tchehmeeminitialarabic;FB7C FEE4 -tcircle;24E3 -tcircumflexbelow;1E71 -tcommaaccent;0163 -tdieresis;1E97 -tdotaccent;1E6B -tdotbelow;1E6D -tecyrillic;0442 -tedescendercyrillic;04AD -teharabic;062A -tehfinalarabic;FE96 -tehhahinitialarabic;FCA2 -tehhahisolatedarabic;FC0C -tehinitialarabic;FE97 -tehiragana;3066 -tehjeeminitialarabic;FCA1 -tehjeemisolatedarabic;FC0B -tehmarbutaarabic;0629 -tehmarbutafinalarabic;FE94 -tehmedialarabic;FE98 -tehmeeminitialarabic;FCA4 -tehmeemisolatedarabic;FC0E -tehnoonfinalarabic;FC73 -tekatakana;30C6 -tekatakanahalfwidth;FF83 -telephone;2121 -telephoneblack;260E -telishagedolahebrew;05A0 -telishaqetanahebrew;05A9 -tencircle;2469 -tenideographicparen;3229 -tenparen;247D -tenperiod;2491 -tenroman;2179 -tesh;02A7 -tet;05D8 -tetdagesh;FB38 -tetdageshhebrew;FB38 -tethebrew;05D8 -tetsecyrillic;04B5 -tevirhebrew;059B -tevirlefthebrew;059B 
-thabengali;09A5 -thadeva;0925 -thagujarati;0AA5 -thagurmukhi;0A25 -thalarabic;0630 -thalfinalarabic;FEAC -thanthakhatlowleftthai;F898 -thanthakhatlowrightthai;F897 -thanthakhatthai;0E4C -thanthakhatupperleftthai;F896 -theharabic;062B -thehfinalarabic;FE9A -thehinitialarabic;FE9B -thehmedialarabic;FE9C -thereexists;2203 -therefore;2234 -theta;03B8 -theta1;03D1 -thetasymbolgreek;03D1 -thieuthacirclekorean;3279 -thieuthaparenkorean;3219 -thieuthcirclekorean;326B -thieuthkorean;314C -thieuthparenkorean;320B -thirteencircle;246C -thirteenparen;2480 -thirteenperiod;2494 -thonangmonthothai;0E11 -thook;01AD -thophuthaothai;0E12 -thorn;00FE -thothahanthai;0E17 -thothanthai;0E10 -thothongthai;0E18 -thothungthai;0E16 -thousandcyrillic;0482 -thousandsseparatorarabic;066C -thousandsseparatorpersian;066C -three;0033 -threearabic;0663 -threebengali;09E9 -threecircle;2462 -threecircleinversesansserif;278C -threedeva;0969 -threeeighths;215C -threegujarati;0AE9 -threegurmukhi;0A69 -threehackarabic;0663 -threehangzhou;3023 -threeideographicparen;3222 -threeinferior;2083 -threemonospace;FF13 -threenumeratorbengali;09F6 -threeoldstyle;F733 -threeparen;2476 -threeperiod;248A -threepersian;06F3 -threequarters;00BE -threequartersemdash;F6DE -threeroman;2172 -threesuperior;00B3 -threethai;0E53 -thzsquare;3394 -tihiragana;3061 -tikatakana;30C1 -tikatakanahalfwidth;FF81 -tikeutacirclekorean;3270 -tikeutaparenkorean;3210 -tikeutcirclekorean;3262 -tikeutkorean;3137 -tikeutparenkorean;3202 -tilde;02DC -tildebelowcmb;0330 -tildecmb;0303 -tildecomb;0303 -tildedoublecmb;0360 -tildeoperator;223C -tildeoverlaycmb;0334 -tildeverticalcmb;033E -timescircle;2297 -tipehahebrew;0596 -tipehalefthebrew;0596 -tippigurmukhi;0A70 -titlocyrilliccmb;0483 -tiwnarmenian;057F -tlinebelow;1E6F -tmonospace;FF54 -toarmenian;0569 -tohiragana;3068 -tokatakana;30C8 -tokatakanahalfwidth;FF84 -tonebarextrahighmod;02E5 -tonebarextralowmod;02E9 -tonebarhighmod;02E6 -tonebarlowmod;02E8 -tonebarmidmod;02E7 -tonefive;01BD 
-tonesix;0185 -tonetwo;01A8 -tonos;0384 -tonsquare;3327 -topatakthai;0E0F -tortoiseshellbracketleft;3014 -tortoiseshellbracketleftsmall;FE5D -tortoiseshellbracketleftvertical;FE39 -tortoiseshellbracketright;3015 -tortoiseshellbracketrightsmall;FE5E -tortoiseshellbracketrightvertical;FE3A -totaothai;0E15 -tpalatalhook;01AB -tparen;24AF -trademark;2122 -trademarksans;F8EA -trademarkserif;F6DB -tretroflexhook;0288 -triagdn;25BC -triaglf;25C4 -triagrt;25BA -triagup;25B2 -ts;02A6 -tsadi;05E6 -tsadidagesh;FB46 -tsadidageshhebrew;FB46 -tsadihebrew;05E6 -tsecyrillic;0446 -tsere;05B5 -tsere12;05B5 -tsere1e;05B5 -tsere2b;05B5 -tserehebrew;05B5 -tserenarrowhebrew;05B5 -tserequarterhebrew;05B5 -tserewidehebrew;05B5 -tshecyrillic;045B -tsuperior;F6F3 -ttabengali;099F -ttadeva;091F -ttagujarati;0A9F -ttagurmukhi;0A1F -tteharabic;0679 -ttehfinalarabic;FB67 -ttehinitialarabic;FB68 -ttehmedialarabic;FB69 -tthabengali;09A0 -tthadeva;0920 -tthagujarati;0AA0 -tthagurmukhi;0A20 -tturned;0287 -tuhiragana;3064 -tukatakana;30C4 -tukatakanahalfwidth;FF82 -tusmallhiragana;3063 -tusmallkatakana;30C3 -tusmallkatakanahalfwidth;FF6F -twelvecircle;246B -twelveparen;247F -twelveperiod;2493 -twelveroman;217B -twentycircle;2473 -twentyhangzhou;5344 -twentyparen;2487 -twentyperiod;249B -two;0032 -twoarabic;0662 -twobengali;09E8 -twocircle;2461 -twocircleinversesansserif;278B -twodeva;0968 -twodotenleader;2025 -twodotleader;2025 -twodotleadervertical;FE30 -twogujarati;0AE8 -twogurmukhi;0A68 -twohackarabic;0662 -twohangzhou;3022 -twoideographicparen;3221 -twoinferior;2082 -twomonospace;FF12 -twonumeratorbengali;09F5 -twooldstyle;F732 -twoparen;2475 -twoperiod;2489 -twopersian;06F2 -tworoman;2171 -twostroke;01BB -twosuperior;00B2 -twothai;0E52 -twothirds;2154 -u;0075 -uacute;00FA -ubar;0289 -ubengali;0989 -ubopomofo;3128 -ubreve;016D -ucaron;01D4 -ucircle;24E4 -ucircumflex;00FB -ucircumflexbelow;1E77 -ucyrillic;0443 -udattadeva;0951 -udblacute;0171 -udblgrave;0215 -udeva;0909 -udieresis;00FC 
-udieresisacute;01D8 -udieresisbelow;1E73 -udieresiscaron;01DA -udieresiscyrillic;04F1 -udieresisgrave;01DC -udieresismacron;01D6 -udotbelow;1EE5 -ugrave;00F9 -ugujarati;0A89 -ugurmukhi;0A09 -uhiragana;3046 -uhookabove;1EE7 -uhorn;01B0 -uhornacute;1EE9 -uhorndotbelow;1EF1 -uhorngrave;1EEB -uhornhookabove;1EED -uhorntilde;1EEF -uhungarumlaut;0171 -uhungarumlautcyrillic;04F3 -uinvertedbreve;0217 -ukatakana;30A6 -ukatakanahalfwidth;FF73 -ukcyrillic;0479 -ukorean;315C -umacron;016B -umacroncyrillic;04EF -umacrondieresis;1E7B -umatragurmukhi;0A41 -umonospace;FF55 -underscore;005F -underscoredbl;2017 -underscoremonospace;FF3F -underscorevertical;FE33 -underscorewavy;FE4F -union;222A -universal;2200 -uogonek;0173 -uparen;24B0 -upblock;2580 -upperdothebrew;05C4 -upsilon;03C5 -upsilondieresis;03CB -upsilondieresistonos;03B0 -upsilonlatin;028A -upsilontonos;03CD -uptackbelowcmb;031D -uptackmod;02D4 -uragurmukhi;0A73 -uring;016F -ushortcyrillic;045E -usmallhiragana;3045 -usmallkatakana;30A5 -usmallkatakanahalfwidth;FF69 -ustraightcyrillic;04AF -ustraightstrokecyrillic;04B1 -utilde;0169 -utildeacute;1E79 -utildebelow;1E75 -uubengali;098A -uudeva;090A -uugujarati;0A8A -uugurmukhi;0A0A -uumatragurmukhi;0A42 -uuvowelsignbengali;09C2 -uuvowelsigndeva;0942 -uuvowelsigngujarati;0AC2 -uvowelsignbengali;09C1 -uvowelsigndeva;0941 -uvowelsigngujarati;0AC1 -v;0076 -vadeva;0935 -vagujarati;0AB5 -vagurmukhi;0A35 -vakatakana;30F7 -vav;05D5 -vavdagesh;FB35 -vavdagesh65;FB35 -vavdageshhebrew;FB35 -vavhebrew;05D5 -vavholam;FB4B -vavholamhebrew;FB4B -vavvavhebrew;05F0 -vavyodhebrew;05F1 -vcircle;24E5 -vdotbelow;1E7F -vecyrillic;0432 -veharabic;06A4 -vehfinalarabic;FB6B -vehinitialarabic;FB6C -vehmedialarabic;FB6D -vekatakana;30F9 -venus;2640 -verticalbar;007C -verticallineabovecmb;030D -verticallinebelowcmb;0329 -verticallinelowmod;02CC -verticallinemod;02C8 -vewarmenian;057E -vhook;028B -vikatakana;30F8 -viramabengali;09CD -viramadeva;094D -viramagujarati;0ACD -visargabengali;0983 
-visargadeva;0903 -visargagujarati;0A83 -vmonospace;FF56 -voarmenian;0578 -voicediterationhiragana;309E -voicediterationkatakana;30FE -voicedmarkkana;309B -voicedmarkkanahalfwidth;FF9E -vokatakana;30FA -vparen;24B1 -vtilde;1E7D -vturned;028C -vuhiragana;3094 -vukatakana;30F4 -w;0077 -wacute;1E83 -waekorean;3159 -wahiragana;308F -wakatakana;30EF -wakatakanahalfwidth;FF9C -wakorean;3158 -wasmallhiragana;308E -wasmallkatakana;30EE -wattosquare;3357 -wavedash;301C -wavyunderscorevertical;FE34 -wawarabic;0648 -wawfinalarabic;FEEE -wawhamzaabovearabic;0624 -wawhamzaabovefinalarabic;FE86 -wbsquare;33DD -wcircle;24E6 -wcircumflex;0175 -wdieresis;1E85 -wdotaccent;1E87 -wdotbelow;1E89 -wehiragana;3091 -weierstrass;2118 -wekatakana;30F1 -wekorean;315E -weokorean;315D -wgrave;1E81 -whitebullet;25E6 -whitecircle;25CB -whitecircleinverse;25D9 -whitecornerbracketleft;300E -whitecornerbracketleftvertical;FE43 -whitecornerbracketright;300F -whitecornerbracketrightvertical;FE44 -whitediamond;25C7 -whitediamondcontainingblacksmalldiamond;25C8 -whitedownpointingsmalltriangle;25BF -whitedownpointingtriangle;25BD -whiteleftpointingsmalltriangle;25C3 -whiteleftpointingtriangle;25C1 -whitelenticularbracketleft;3016 -whitelenticularbracketright;3017 -whiterightpointingsmalltriangle;25B9 -whiterightpointingtriangle;25B7 -whitesmallsquare;25AB -whitesmilingface;263A -whitesquare;25A1 -whitestar;2606 -whitetelephone;260F -whitetortoiseshellbracketleft;3018 -whitetortoiseshellbracketright;3019 -whiteuppointingsmalltriangle;25B5 -whiteuppointingtriangle;25B3 -wihiragana;3090 -wikatakana;30F0 -wikorean;315F -wmonospace;FF57 -wohiragana;3092 -wokatakana;30F2 -wokatakanahalfwidth;FF66 -won;20A9 -wonmonospace;FFE6 -wowaenthai;0E27 -wparen;24B2 -wring;1E98 -wsuperior;02B7 -wturned;028D -wynn;01BF -x;0078 -xabovecmb;033D -xbopomofo;3112 -xcircle;24E7 -xdieresis;1E8D -xdotaccent;1E8B -xeharmenian;056D -xi;03BE -xmonospace;FF58 -xparen;24B3 -xsuperior;02E3 -y;0079 -yaadosquare;334E -yabengali;09AF 
-yacute;00FD -yadeva;092F -yaekorean;3152 -yagujarati;0AAF -yagurmukhi;0A2F -yahiragana;3084 -yakatakana;30E4 -yakatakanahalfwidth;FF94 -yakorean;3151 -yamakkanthai;0E4E -yasmallhiragana;3083 -yasmallkatakana;30E3 -yasmallkatakanahalfwidth;FF6C -yatcyrillic;0463 -ycircle;24E8 -ycircumflex;0177 -ydieresis;00FF -ydotaccent;1E8F -ydotbelow;1EF5 -yeharabic;064A -yehbarreearabic;06D2 -yehbarreefinalarabic;FBAF -yehfinalarabic;FEF2 -yehhamzaabovearabic;0626 -yehhamzaabovefinalarabic;FE8A -yehhamzaaboveinitialarabic;FE8B -yehhamzaabovemedialarabic;FE8C -yehinitialarabic;FEF3 -yehmedialarabic;FEF4 -yehmeeminitialarabic;FCDD -yehmeemisolatedarabic;FC58 -yehnoonfinalarabic;FC94 -yehthreedotsbelowarabic;06D1 -yekorean;3156 -yen;00A5 -yenmonospace;FFE5 -yeokorean;3155 -yeorinhieuhkorean;3186 -yerahbenyomohebrew;05AA -yerahbenyomolefthebrew;05AA -yericyrillic;044B -yerudieresiscyrillic;04F9 -yesieungkorean;3181 -yesieungpansioskorean;3183 -yesieungsioskorean;3182 -yetivhebrew;059A -ygrave;1EF3 -yhook;01B4 -yhookabove;1EF7 -yiarmenian;0575 -yicyrillic;0457 -yikorean;3162 -yinyang;262F -yiwnarmenian;0582 -ymonospace;FF59 -yod;05D9 -yoddagesh;FB39 -yoddageshhebrew;FB39 -yodhebrew;05D9 -yodyodhebrew;05F2 -yodyodpatahhebrew;FB1F -yohiragana;3088 -yoikorean;3189 -yokatakana;30E8 -yokatakanahalfwidth;FF96 -yokorean;315B -yosmallhiragana;3087 -yosmallkatakana;30E7 -yosmallkatakanahalfwidth;FF6E -yotgreek;03F3 -yoyaekorean;3188 -yoyakorean;3187 -yoyakthai;0E22 -yoyingthai;0E0D -yparen;24B4 -ypogegrammeni;037A -ypogegrammenigreekcmb;0345 -yr;01A6 -yring;1E99 -ysuperior;02B8 -ytilde;1EF9 -yturned;028E -yuhiragana;3086 -yuikorean;318C -yukatakana;30E6 -yukatakanahalfwidth;FF95 -yukorean;3160 -yusbigcyrillic;046B -yusbigiotifiedcyrillic;046D -yuslittlecyrillic;0467 -yuslittleiotifiedcyrillic;0469 -yusmallhiragana;3085 -yusmallkatakana;30E5 -yusmallkatakanahalfwidth;FF6D -yuyekorean;318B -yuyeokorean;318A -yyabengali;09DF -yyadeva;095F -z;007A -zaarmenian;0566 -zacute;017A -zadeva;095B 
-zagurmukhi;0A5B -zaharabic;0638 -zahfinalarabic;FEC6 -zahinitialarabic;FEC7 -zahiragana;3056 -zahmedialarabic;FEC8 -zainarabic;0632 -zainfinalarabic;FEB0 -zakatakana;30B6 -zaqefgadolhebrew;0595 -zaqefqatanhebrew;0594 -zarqahebrew;0598 -zayin;05D6 -zayindagesh;FB36 -zayindageshhebrew;FB36 -zayinhebrew;05D6 -zbopomofo;3117 -zcaron;017E -zcircle;24E9 -zcircumflex;1E91 -zcurl;0291 -zdot;017C -zdotaccent;017C -zdotbelow;1E93 -zecyrillic;0437 -zedescendercyrillic;0499 -zedieresiscyrillic;04DF -zehiragana;305C -zekatakana;30BC -zero;0030 -zeroarabic;0660 -zerobengali;09E6 -zerodeva;0966 -zerogujarati;0AE6 -zerogurmukhi;0A66 -zerohackarabic;0660 -zeroinferior;2080 -zeromonospace;FF10 -zerooldstyle;F730 -zeropersian;06F0 -zerosuperior;2070 -zerothai;0E50 -zerowidthjoiner;FEFF -zerowidthnonjoiner;200C -zerowidthspace;200B -zeta;03B6 -zhbopomofo;3113 -zhearmenian;056A -zhebrevecyrillic;04C2 -zhecyrillic;0436 -zhedescendercyrillic;0497 -zhedieresiscyrillic;04DD -zihiragana;3058 -zikatakana;30B8 -zinorhebrew;05AE -zlinebelow;1E95 -zmonospace;FF5A -zohiragana;305E -zokatakana;30BE -zparen;24B5 -zretroflexhook;0290 -zstroke;01B6 -zuhiragana;305A -zukatakana;30BA -# END -""" - - -_aglfnText = """\ -# ----------------------------------------------------------- -# Copyright 2002-2019 Adobe (http://www.adobe.com/). -# -# Redistribution and use in source and binary forms, with or -# without modification, are permitted provided that the -# following conditions are met: -# -# Redistributions of source code must retain the above -# copyright notice, this list of conditions and the following -# disclaimer. -# -# Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following -# disclaimer in the documentation and/or other materials -# provided with the distribution. 
-# -# Neither the name of Adobe nor the names of its contributors -# may be used to endorse or promote products derived from this -# software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND -# CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, -# INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR -# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT -# NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) -# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN -# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR -# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -# ----------------------------------------------------------- -# Name: Adobe Glyph List For New Fonts -# Table version: 1.7 -# Date: November 6, 2008 -# URL: https://github.com/adobe-type-tools/agl-aglfn -# -# Description: -# -# AGLFN (Adobe Glyph List For New Fonts) provides a list of base glyph -# names that are recommended for new fonts, which are compatible with -# the AGL (Adobe Glyph List) Specification, and which should be used -# as described in Section 6 of that document. AGLFN comprises the set -# of glyph names from AGL that map via the AGL Specification rules to -# the semantically correct UV (Unicode Value). For example, "Asmall" -# is omitted because AGL maps this glyph name to the PUA (Private Use -# Area) value U+F761, rather than to the UV that maps from the glyph -# name "A." Also omitted is "ffi," because AGL maps this to the -# Alphabetic Presentation Forms value U+FB03, rather than decomposing -# it into the following sequence of three UVs: U+0066, U+0066, and -# U+0069. 
The name "arrowvertex" has been omitted because this glyph -# now has a real UV, and AGL is now incorrect in mapping it to the PUA -# value U+F8E6. If you do not find an appropriate name for your glyph -# in this list, then please refer to Section 6 of the AGL -# Specification. -# -# Format: three semicolon-delimited fields: -# (1) Standard UV or CUS UV--four uppercase hexadecimal digits -# (2) Glyph name--upper/lowercase letters and digits -# (3) Character names: Unicode character names for standard UVs, and -# descriptive names for CUS UVs--uppercase letters, hyphen, and -# space -# -# The records are sorted by glyph name in increasing ASCII order, -# entries with the same glyph name are sorted in decreasing priority -# order, the UVs and Unicode character names are provided for -# convenience, lines starting with "#" are comments, and blank lines -# should be ignored. -# -# Revision History: -# -# 1.7 [6 November 2008] -# - Reverted to the original 1.4 and earlier mappings for Delta, -# Omega, and mu. -# - Removed mappings for "afii" names. These should now be assigned -# "uni" names. -# - Removed mappings for "commaaccent" names. These should now be -# assigned "uni" names. -# -# 1.6 [30 January 2006] -# - Completed work intended in 1.5. -# -# 1.5 [23 November 2005] -# - Removed duplicated block at end of file. -# - Changed mappings: -# 2206;Delta;INCREMENT changed to 0394;Delta;GREEK CAPITAL LETTER DELTA -# 2126;Omega;OHM SIGN changed to 03A9;Omega;GREEK CAPITAL LETTER OMEGA -# 03BC;mu;MICRO SIGN changed to 03BC;mu;GREEK SMALL LETTER MU -# - Corrected statement above about why "ffi" is omitted. -# -# 1.4 [24 September 2003] -# - Changed version to 1.4, to avoid confusion with the AGL 1.3. -# - Fixed spelling errors in the header. -# - Fully removed "arrowvertex," as it is mapped only to a PUA Unicode -# value in some fonts. -# -# 1.1 [17 April 2003] -# - Renamed [Tt]cedilla back to [Tt]commaaccent. -# -# 1.0 [31 January 2003] -# - Original version. 
-#
-# - Derived from the AGLv1.2 by:
-# removing the PUA area codes;
-# removing duplicate Unicode mappings; and
-# renaming "tcommaaccent" to "tcedilla" and "Tcommaaccent" to "Tcedilla"
-#
-0041;A;LATIN CAPITAL LETTER A
-00C6;AE;LATIN CAPITAL LETTER AE
-01FC;AEacute;LATIN CAPITAL LETTER AE WITH ACUTE
-00C1;Aacute;LATIN CAPITAL LETTER A WITH ACUTE
-0102;Abreve;LATIN CAPITAL LETTER A WITH BREVE
-00C2;Acircumflex;LATIN CAPITAL LETTER A WITH CIRCUMFLEX
-00C4;Adieresis;LATIN CAPITAL LETTER A WITH DIAERESIS
-00C0;Agrave;LATIN CAPITAL LETTER A WITH GRAVE
-0391;Alpha;GREEK CAPITAL LETTER ALPHA
-0386;Alphatonos;GREEK CAPITAL LETTER ALPHA WITH TONOS
-0100;Amacron;LATIN CAPITAL LETTER A WITH MACRON
-0104;Aogonek;LATIN CAPITAL LETTER A WITH OGONEK
-00C5;Aring;LATIN CAPITAL LETTER A WITH RING ABOVE
-01FA;Aringacute;LATIN CAPITAL LETTER A WITH RING ABOVE AND ACUTE
-00C3;Atilde;LATIN CAPITAL LETTER A WITH TILDE
-0042;B;LATIN CAPITAL LETTER B
-0392;Beta;GREEK CAPITAL LETTER BETA
-0043;C;LATIN CAPITAL LETTER C
-0106;Cacute;LATIN CAPITAL LETTER C WITH ACUTE
-010C;Ccaron;LATIN CAPITAL LETTER C WITH CARON
-00C7;Ccedilla;LATIN CAPITAL LETTER C WITH CEDILLA
-0108;Ccircumflex;LATIN CAPITAL LETTER C WITH CIRCUMFLEX
-010A;Cdotaccent;LATIN CAPITAL LETTER C WITH DOT ABOVE
-03A7;Chi;GREEK CAPITAL LETTER CHI
-0044;D;LATIN CAPITAL LETTER D
-010E;Dcaron;LATIN CAPITAL LETTER D WITH CARON
-0110;Dcroat;LATIN CAPITAL LETTER D WITH STROKE
-2206;Delta;INCREMENT
-0045;E;LATIN CAPITAL LETTER E
-00C9;Eacute;LATIN CAPITAL LETTER E WITH ACUTE
-0114;Ebreve;LATIN CAPITAL LETTER E WITH BREVE
-011A;Ecaron;LATIN CAPITAL LETTER E WITH CARON
-00CA;Ecircumflex;LATIN CAPITAL LETTER E WITH CIRCUMFLEX
-00CB;Edieresis;LATIN CAPITAL LETTER E WITH DIAERESIS
-0116;Edotaccent;LATIN CAPITAL LETTER E WITH DOT ABOVE
-00C8;Egrave;LATIN CAPITAL LETTER E WITH GRAVE
-0112;Emacron;LATIN CAPITAL LETTER E WITH MACRON
-014A;Eng;LATIN CAPITAL LETTER ENG
-0118;Eogonek;LATIN CAPITAL LETTER E WITH OGONEK
-0395;Epsilon;GREEK CAPITAL LETTER EPSILON
-0388;Epsilontonos;GREEK CAPITAL LETTER EPSILON WITH TONOS
-0397;Eta;GREEK CAPITAL LETTER ETA
-0389;Etatonos;GREEK CAPITAL LETTER ETA WITH TONOS
-00D0;Eth;LATIN CAPITAL LETTER ETH
-20AC;Euro;EURO SIGN
-0046;F;LATIN CAPITAL LETTER F
-0047;G;LATIN CAPITAL LETTER G
-0393;Gamma;GREEK CAPITAL LETTER GAMMA
-011E;Gbreve;LATIN CAPITAL LETTER G WITH BREVE
-01E6;Gcaron;LATIN CAPITAL LETTER G WITH CARON
-011C;Gcircumflex;LATIN CAPITAL LETTER G WITH CIRCUMFLEX
-0120;Gdotaccent;LATIN CAPITAL LETTER G WITH DOT ABOVE
-0048;H;LATIN CAPITAL LETTER H
-25CF;H18533;BLACK CIRCLE
-25AA;H18543;BLACK SMALL SQUARE
-25AB;H18551;WHITE SMALL SQUARE
-25A1;H22073;WHITE SQUARE
-0126;Hbar;LATIN CAPITAL LETTER H WITH STROKE
-0124;Hcircumflex;LATIN CAPITAL LETTER H WITH CIRCUMFLEX
-0049;I;LATIN CAPITAL LETTER I
-0132;IJ;LATIN CAPITAL LIGATURE IJ
-00CD;Iacute;LATIN CAPITAL LETTER I WITH ACUTE
-012C;Ibreve;LATIN CAPITAL LETTER I WITH BREVE
-00CE;Icircumflex;LATIN CAPITAL LETTER I WITH CIRCUMFLEX
-00CF;Idieresis;LATIN CAPITAL LETTER I WITH DIAERESIS
-0130;Idotaccent;LATIN CAPITAL LETTER I WITH DOT ABOVE
-2111;Ifraktur;BLACK-LETTER CAPITAL I
-00CC;Igrave;LATIN CAPITAL LETTER I WITH GRAVE
-012A;Imacron;LATIN CAPITAL LETTER I WITH MACRON
-012E;Iogonek;LATIN CAPITAL LETTER I WITH OGONEK
-0399;Iota;GREEK CAPITAL LETTER IOTA
-03AA;Iotadieresis;GREEK CAPITAL LETTER IOTA WITH DIALYTIKA
-038A;Iotatonos;GREEK CAPITAL LETTER IOTA WITH TONOS
-0128;Itilde;LATIN CAPITAL LETTER I WITH TILDE
-004A;J;LATIN CAPITAL LETTER J
-0134;Jcircumflex;LATIN CAPITAL LETTER J WITH CIRCUMFLEX
-004B;K;LATIN CAPITAL LETTER K
-039A;Kappa;GREEK CAPITAL LETTER KAPPA
-004C;L;LATIN CAPITAL LETTER L
-0139;Lacute;LATIN CAPITAL LETTER L WITH ACUTE
-039B;Lambda;GREEK CAPITAL LETTER LAMDA
-013D;Lcaron;LATIN CAPITAL LETTER L WITH CARON
-013F;Ldot;LATIN CAPITAL LETTER L WITH MIDDLE DOT
-0141;Lslash;LATIN CAPITAL LETTER L WITH STROKE
-004D;M;LATIN CAPITAL LETTER M
-039C;Mu;GREEK CAPITAL LETTER MU
-004E;N;LATIN CAPITAL LETTER N
-0143;Nacute;LATIN CAPITAL LETTER N WITH ACUTE
-0147;Ncaron;LATIN CAPITAL LETTER N WITH CARON
-00D1;Ntilde;LATIN CAPITAL LETTER N WITH TILDE
-039D;Nu;GREEK CAPITAL LETTER NU
-004F;O;LATIN CAPITAL LETTER O
-0152;OE;LATIN CAPITAL LIGATURE OE
-00D3;Oacute;LATIN CAPITAL LETTER O WITH ACUTE
-014E;Obreve;LATIN CAPITAL LETTER O WITH BREVE
-00D4;Ocircumflex;LATIN CAPITAL LETTER O WITH CIRCUMFLEX
-00D6;Odieresis;LATIN CAPITAL LETTER O WITH DIAERESIS
-00D2;Ograve;LATIN CAPITAL LETTER O WITH GRAVE
-01A0;Ohorn;LATIN CAPITAL LETTER O WITH HORN
-0150;Ohungarumlaut;LATIN CAPITAL LETTER O WITH DOUBLE ACUTE
-014C;Omacron;LATIN CAPITAL LETTER O WITH MACRON
-2126;Omega;OHM SIGN
-038F;Omegatonos;GREEK CAPITAL LETTER OMEGA WITH TONOS
-039F;Omicron;GREEK CAPITAL LETTER OMICRON
-038C;Omicrontonos;GREEK CAPITAL LETTER OMICRON WITH TONOS
-00D8;Oslash;LATIN CAPITAL LETTER O WITH STROKE
-01FE;Oslashacute;LATIN CAPITAL LETTER O WITH STROKE AND ACUTE
-00D5;Otilde;LATIN CAPITAL LETTER O WITH TILDE
-0050;P;LATIN CAPITAL LETTER P
-03A6;Phi;GREEK CAPITAL LETTER PHI
-03A0;Pi;GREEK CAPITAL LETTER PI
-03A8;Psi;GREEK CAPITAL LETTER PSI
-0051;Q;LATIN CAPITAL LETTER Q
-0052;R;LATIN CAPITAL LETTER R
-0154;Racute;LATIN CAPITAL LETTER R WITH ACUTE
-0158;Rcaron;LATIN CAPITAL LETTER R WITH CARON
-211C;Rfraktur;BLACK-LETTER CAPITAL R
-03A1;Rho;GREEK CAPITAL LETTER RHO
-0053;S;LATIN CAPITAL LETTER S
-250C;SF010000;BOX DRAWINGS LIGHT DOWN AND RIGHT
-2514;SF020000;BOX DRAWINGS LIGHT UP AND RIGHT
-2510;SF030000;BOX DRAWINGS LIGHT DOWN AND LEFT
-2518;SF040000;BOX DRAWINGS LIGHT UP AND LEFT
-253C;SF050000;BOX DRAWINGS LIGHT VERTICAL AND HORIZONTAL
-252C;SF060000;BOX DRAWINGS LIGHT DOWN AND HORIZONTAL
-2534;SF070000;BOX DRAWINGS LIGHT UP AND HORIZONTAL
-251C;SF080000;BOX DRAWINGS LIGHT VERTICAL AND RIGHT
-2524;SF090000;BOX DRAWINGS LIGHT VERTICAL AND LEFT
-2500;SF100000;BOX DRAWINGS LIGHT HORIZONTAL
-2502;SF110000;BOX DRAWINGS LIGHT VERTICAL
-2561;SF190000;BOX DRAWINGS VERTICAL SINGLE AND LEFT DOUBLE
-2562;SF200000;BOX DRAWINGS VERTICAL DOUBLE AND LEFT SINGLE
-2556;SF210000;BOX DRAWINGS DOWN DOUBLE AND LEFT SINGLE
-2555;SF220000;BOX DRAWINGS DOWN SINGLE AND LEFT DOUBLE
-2563;SF230000;BOX DRAWINGS DOUBLE VERTICAL AND LEFT
-2551;SF240000;BOX DRAWINGS DOUBLE VERTICAL
-2557;SF250000;BOX DRAWINGS DOUBLE DOWN AND LEFT
-255D;SF260000;BOX DRAWINGS DOUBLE UP AND LEFT
-255C;SF270000;BOX DRAWINGS UP DOUBLE AND LEFT SINGLE
-255B;SF280000;BOX DRAWINGS UP SINGLE AND LEFT DOUBLE
-255E;SF360000;BOX DRAWINGS VERTICAL SINGLE AND RIGHT DOUBLE
-255F;SF370000;BOX DRAWINGS VERTICAL DOUBLE AND RIGHT SINGLE
-255A;SF380000;BOX DRAWINGS DOUBLE UP AND RIGHT
-2554;SF390000;BOX DRAWINGS DOUBLE DOWN AND RIGHT
-2569;SF400000;BOX DRAWINGS DOUBLE UP AND HORIZONTAL
-2566;SF410000;BOX DRAWINGS DOUBLE DOWN AND HORIZONTAL
-2560;SF420000;BOX DRAWINGS DOUBLE VERTICAL AND RIGHT
-2550;SF430000;BOX DRAWINGS DOUBLE HORIZONTAL
-256C;SF440000;BOX DRAWINGS DOUBLE VERTICAL AND HORIZONTAL
-2567;SF450000;BOX DRAWINGS UP SINGLE AND HORIZONTAL DOUBLE
-2568;SF460000;BOX DRAWINGS UP DOUBLE AND HORIZONTAL SINGLE
-2564;SF470000;BOX DRAWINGS DOWN SINGLE AND HORIZONTAL DOUBLE
-2565;SF480000;BOX DRAWINGS DOWN DOUBLE AND HORIZONTAL SINGLE
-2559;SF490000;BOX DRAWINGS UP DOUBLE AND RIGHT SINGLE
-2558;SF500000;BOX DRAWINGS UP SINGLE AND RIGHT DOUBLE
-2552;SF510000;BOX DRAWINGS DOWN SINGLE AND RIGHT DOUBLE
-2553;SF520000;BOX DRAWINGS DOWN DOUBLE AND RIGHT SINGLE
-256B;SF530000;BOX DRAWINGS VERTICAL DOUBLE AND HORIZONTAL SINGLE
-256A;SF540000;BOX DRAWINGS VERTICAL SINGLE AND HORIZONTAL DOUBLE
-015A;Sacute;LATIN CAPITAL LETTER S WITH ACUTE
-0160;Scaron;LATIN CAPITAL LETTER S WITH CARON
-015E;Scedilla;LATIN CAPITAL LETTER S WITH CEDILLA
-015C;Scircumflex;LATIN CAPITAL LETTER S WITH CIRCUMFLEX
-03A3;Sigma;GREEK CAPITAL LETTER SIGMA
-0054;T;LATIN CAPITAL LETTER T
-03A4;Tau;GREEK CAPITAL LETTER TAU
-0166;Tbar;LATIN CAPITAL LETTER T WITH STROKE
-0164;Tcaron;LATIN CAPITAL LETTER T WITH CARON
-0398;Theta;GREEK CAPITAL LETTER THETA
-00DE;Thorn;LATIN CAPITAL LETTER THORN
-0055;U;LATIN CAPITAL LETTER U
-00DA;Uacute;LATIN CAPITAL LETTER U WITH ACUTE
-016C;Ubreve;LATIN CAPITAL LETTER U WITH BREVE
-00DB;Ucircumflex;LATIN CAPITAL LETTER U WITH CIRCUMFLEX
-00DC;Udieresis;LATIN CAPITAL LETTER U WITH DIAERESIS
-00D9;Ugrave;LATIN CAPITAL LETTER U WITH GRAVE
-01AF;Uhorn;LATIN CAPITAL LETTER U WITH HORN
-0170;Uhungarumlaut;LATIN CAPITAL LETTER U WITH DOUBLE ACUTE
-016A;Umacron;LATIN CAPITAL LETTER U WITH MACRON
-0172;Uogonek;LATIN CAPITAL LETTER U WITH OGONEK
-03A5;Upsilon;GREEK CAPITAL LETTER UPSILON
-03D2;Upsilon1;GREEK UPSILON WITH HOOK SYMBOL
-03AB;Upsilondieresis;GREEK CAPITAL LETTER UPSILON WITH DIALYTIKA
-038E;Upsilontonos;GREEK CAPITAL LETTER UPSILON WITH TONOS
-016E;Uring;LATIN CAPITAL LETTER U WITH RING ABOVE
-0168;Utilde;LATIN CAPITAL LETTER U WITH TILDE
-0056;V;LATIN CAPITAL LETTER V
-0057;W;LATIN CAPITAL LETTER W
-1E82;Wacute;LATIN CAPITAL LETTER W WITH ACUTE
-0174;Wcircumflex;LATIN CAPITAL LETTER W WITH CIRCUMFLEX
-1E84;Wdieresis;LATIN CAPITAL LETTER W WITH DIAERESIS
-1E80;Wgrave;LATIN CAPITAL LETTER W WITH GRAVE
-0058;X;LATIN CAPITAL LETTER X
-039E;Xi;GREEK CAPITAL LETTER XI
-0059;Y;LATIN CAPITAL LETTER Y
-00DD;Yacute;LATIN CAPITAL LETTER Y WITH ACUTE
-0176;Ycircumflex;LATIN CAPITAL LETTER Y WITH CIRCUMFLEX
-0178;Ydieresis;LATIN CAPITAL LETTER Y WITH DIAERESIS
-1EF2;Ygrave;LATIN CAPITAL LETTER Y WITH GRAVE
-005A;Z;LATIN CAPITAL LETTER Z
-0179;Zacute;LATIN CAPITAL LETTER Z WITH ACUTE
-017D;Zcaron;LATIN CAPITAL LETTER Z WITH CARON
-017B;Zdotaccent;LATIN CAPITAL LETTER Z WITH DOT ABOVE
-0396;Zeta;GREEK CAPITAL LETTER ZETA
-0061;a;LATIN SMALL LETTER A
-00E1;aacute;LATIN SMALL LETTER A WITH ACUTE
-0103;abreve;LATIN SMALL LETTER A WITH BREVE
-00E2;acircumflex;LATIN SMALL LETTER A WITH CIRCUMFLEX
-00B4;acute;ACUTE ACCENT
-0301;acutecomb;COMBINING ACUTE ACCENT
-00E4;adieresis;LATIN SMALL LETTER A WITH DIAERESIS
-00E6;ae;LATIN SMALL LETTER AE
-01FD;aeacute;LATIN SMALL LETTER AE WITH ACUTE
-00E0;agrave;LATIN SMALL LETTER A WITH GRAVE
-2135;aleph;ALEF SYMBOL
-03B1;alpha;GREEK SMALL LETTER ALPHA
-03AC;alphatonos;GREEK SMALL LETTER ALPHA WITH TONOS
-0101;amacron;LATIN SMALL LETTER A WITH MACRON
-0026;ampersand;AMPERSAND
-2220;angle;ANGLE
-2329;angleleft;LEFT-POINTING ANGLE BRACKET
-232A;angleright;RIGHT-POINTING ANGLE BRACKET
-0387;anoteleia;GREEK ANO TELEIA
-0105;aogonek;LATIN SMALL LETTER A WITH OGONEK
-2248;approxequal;ALMOST EQUAL TO
-00E5;aring;LATIN SMALL LETTER A WITH RING ABOVE
-01FB;aringacute;LATIN SMALL LETTER A WITH RING ABOVE AND ACUTE
-2194;arrowboth;LEFT RIGHT ARROW
-21D4;arrowdblboth;LEFT RIGHT DOUBLE ARROW
-21D3;arrowdbldown;DOWNWARDS DOUBLE ARROW
-21D0;arrowdblleft;LEFTWARDS DOUBLE ARROW
-21D2;arrowdblright;RIGHTWARDS DOUBLE ARROW
-21D1;arrowdblup;UPWARDS DOUBLE ARROW
-2193;arrowdown;DOWNWARDS ARROW
-2190;arrowleft;LEFTWARDS ARROW
-2192;arrowright;RIGHTWARDS ARROW
-2191;arrowup;UPWARDS ARROW
-2195;arrowupdn;UP DOWN ARROW
-21A8;arrowupdnbse;UP DOWN ARROW WITH BASE
-005E;asciicircum;CIRCUMFLEX ACCENT
-007E;asciitilde;TILDE
-002A;asterisk;ASTERISK
-2217;asteriskmath;ASTERISK OPERATOR
-0040;at;COMMERCIAL AT
-00E3;atilde;LATIN SMALL LETTER A WITH TILDE
-0062;b;LATIN SMALL LETTER B
-005C;backslash;REVERSE SOLIDUS
-007C;bar;VERTICAL LINE
-03B2;beta;GREEK SMALL LETTER BETA
-2588;block;FULL BLOCK
-007B;braceleft;LEFT CURLY BRACKET
-007D;braceright;RIGHT CURLY BRACKET
-005B;bracketleft;LEFT SQUARE BRACKET
-005D;bracketright;RIGHT SQUARE BRACKET
-02D8;breve;BREVE
-00A6;brokenbar;BROKEN BAR
-2022;bullet;BULLET
-0063;c;LATIN SMALL LETTER C
-0107;cacute;LATIN SMALL LETTER C WITH ACUTE
-02C7;caron;CARON
-21B5;carriagereturn;DOWNWARDS ARROW WITH CORNER LEFTWARDS
-010D;ccaron;LATIN SMALL LETTER C WITH CARON
-00E7;ccedilla;LATIN SMALL LETTER C WITH CEDILLA
-0109;ccircumflex;LATIN SMALL LETTER C WITH CIRCUMFLEX
-010B;cdotaccent;LATIN SMALL LETTER C WITH DOT ABOVE
-00B8;cedilla;CEDILLA
-00A2;cent;CENT SIGN
-03C7;chi;GREEK SMALL LETTER CHI
-25CB;circle;WHITE CIRCLE
-2297;circlemultiply;CIRCLED TIMES
-2295;circleplus;CIRCLED PLUS
-02C6;circumflex;MODIFIER LETTER CIRCUMFLEX ACCENT
-2663;club;BLACK CLUB SUIT
-003A;colon;COLON
-20A1;colonmonetary;COLON SIGN
-002C;comma;COMMA
-2245;congruent;APPROXIMATELY EQUAL TO
-00A9;copyright;COPYRIGHT SIGN
-00A4;currency;CURRENCY SIGN
-0064;d;LATIN SMALL LETTER D
-2020;dagger;DAGGER
-2021;daggerdbl;DOUBLE DAGGER
-010F;dcaron;LATIN SMALL LETTER D WITH CARON
-0111;dcroat;LATIN SMALL LETTER D WITH STROKE
-00B0;degree;DEGREE SIGN
-03B4;delta;GREEK SMALL LETTER DELTA
-2666;diamond;BLACK DIAMOND SUIT
-00A8;dieresis;DIAERESIS
-0385;dieresistonos;GREEK DIALYTIKA TONOS
-00F7;divide;DIVISION SIGN
-2593;dkshade;DARK SHADE
-2584;dnblock;LOWER HALF BLOCK
-0024;dollar;DOLLAR SIGN
-20AB;dong;DONG SIGN
-02D9;dotaccent;DOT ABOVE
-0323;dotbelowcomb;COMBINING DOT BELOW
-0131;dotlessi;LATIN SMALL LETTER DOTLESS I
-22C5;dotmath;DOT OPERATOR
-0065;e;LATIN SMALL LETTER E
-00E9;eacute;LATIN SMALL LETTER E WITH ACUTE
-0115;ebreve;LATIN SMALL LETTER E WITH BREVE
-011B;ecaron;LATIN SMALL LETTER E WITH CARON
-00EA;ecircumflex;LATIN SMALL LETTER E WITH CIRCUMFLEX
-00EB;edieresis;LATIN SMALL LETTER E WITH DIAERESIS
-0117;edotaccent;LATIN SMALL LETTER E WITH DOT ABOVE
-00E8;egrave;LATIN SMALL LETTER E WITH GRAVE
-0038;eight;DIGIT EIGHT
-2208;element;ELEMENT OF
-2026;ellipsis;HORIZONTAL ELLIPSIS
-0113;emacron;LATIN SMALL LETTER E WITH MACRON
-2014;emdash;EM DASH
-2205;emptyset;EMPTY SET
-2013;endash;EN DASH
-014B;eng;LATIN SMALL LETTER ENG
-0119;eogonek;LATIN SMALL LETTER E WITH OGONEK
-03B5;epsilon;GREEK SMALL LETTER EPSILON
-03AD;epsilontonos;GREEK SMALL LETTER EPSILON WITH TONOS
-003D;equal;EQUALS SIGN
-2261;equivalence;IDENTICAL TO
-212E;estimated;ESTIMATED SYMBOL
-03B7;eta;GREEK SMALL LETTER ETA
-03AE;etatonos;GREEK SMALL LETTER ETA WITH TONOS
-00F0;eth;LATIN SMALL LETTER ETH
-0021;exclam;EXCLAMATION MARK
-203C;exclamdbl;DOUBLE EXCLAMATION MARK
-00A1;exclamdown;INVERTED EXCLAMATION MARK
-2203;existential;THERE EXISTS
-0066;f;LATIN SMALL LETTER F
-2640;female;FEMALE SIGN
-2012;figuredash;FIGURE DASH
-25A0;filledbox;BLACK SQUARE
-25AC;filledrect;BLACK RECTANGLE
-0035;five;DIGIT FIVE
-215D;fiveeighths;VULGAR FRACTION FIVE EIGHTHS
-0192;florin;LATIN SMALL LETTER F WITH HOOK
-0034;four;DIGIT FOUR
-2044;fraction;FRACTION SLASH
-20A3;franc;FRENCH FRANC SIGN
-0067;g;LATIN SMALL LETTER G
-03B3;gamma;GREEK SMALL LETTER GAMMA
-011F;gbreve;LATIN SMALL LETTER G WITH BREVE
-01E7;gcaron;LATIN SMALL LETTER G WITH CARON
-011D;gcircumflex;LATIN SMALL LETTER G WITH CIRCUMFLEX
-0121;gdotaccent;LATIN SMALL LETTER G WITH DOT ABOVE
-00DF;germandbls;LATIN SMALL LETTER SHARP S
-2207;gradient;NABLA
-0060;grave;GRAVE ACCENT
-0300;gravecomb;COMBINING GRAVE ACCENT
-003E;greater;GREATER-THAN SIGN
-2265;greaterequal;GREATER-THAN OR EQUAL TO
-00AB;guillemotleft;LEFT-POINTING DOUBLE ANGLE QUOTATION MARK
-00BB;guillemotright;RIGHT-POINTING DOUBLE ANGLE QUOTATION MARK
-2039;guilsinglleft;SINGLE LEFT-POINTING ANGLE QUOTATION MARK
-203A;guilsinglright;SINGLE RIGHT-POINTING ANGLE QUOTATION MARK
-0068;h;LATIN SMALL LETTER H
-0127;hbar;LATIN SMALL LETTER H WITH STROKE
-0125;hcircumflex;LATIN SMALL LETTER H WITH CIRCUMFLEX
-2665;heart;BLACK HEART SUIT
-0309;hookabovecomb;COMBINING HOOK ABOVE
-2302;house;HOUSE
-02DD;hungarumlaut;DOUBLE ACUTE ACCENT
-002D;hyphen;HYPHEN-MINUS
-0069;i;LATIN SMALL LETTER I
-00ED;iacute;LATIN SMALL LETTER I WITH ACUTE
-012D;ibreve;LATIN SMALL LETTER I WITH BREVE
-00EE;icircumflex;LATIN SMALL LETTER I WITH CIRCUMFLEX
-00EF;idieresis;LATIN SMALL LETTER I WITH DIAERESIS
-00EC;igrave;LATIN SMALL LETTER I WITH GRAVE
-0133;ij;LATIN SMALL LIGATURE IJ
-012B;imacron;LATIN SMALL LETTER I WITH MACRON
-221E;infinity;INFINITY
-222B;integral;INTEGRAL
-2321;integralbt;BOTTOM HALF INTEGRAL
-2320;integraltp;TOP HALF INTEGRAL
-2229;intersection;INTERSECTION
-25D8;invbullet;INVERSE BULLET
-25D9;invcircle;INVERSE WHITE CIRCLE
-263B;invsmileface;BLACK SMILING FACE
-012F;iogonek;LATIN SMALL LETTER I WITH OGONEK
-03B9;iota;GREEK SMALL LETTER IOTA
-03CA;iotadieresis;GREEK SMALL LETTER IOTA WITH DIALYTIKA
-0390;iotadieresistonos;GREEK SMALL LETTER IOTA WITH DIALYTIKA AND TONOS
-03AF;iotatonos;GREEK SMALL LETTER IOTA WITH TONOS
-0129;itilde;LATIN SMALL LETTER I WITH TILDE
-006A;j;LATIN SMALL LETTER J
-0135;jcircumflex;LATIN SMALL LETTER J WITH CIRCUMFLEX
-006B;k;LATIN SMALL LETTER K
-03BA;kappa;GREEK SMALL LETTER KAPPA
-0138;kgreenlandic;LATIN SMALL LETTER KRA
-006C;l;LATIN SMALL LETTER L
-013A;lacute;LATIN SMALL LETTER L WITH ACUTE
-03BB;lambda;GREEK SMALL LETTER LAMDA
-013E;lcaron;LATIN SMALL LETTER L WITH CARON
-0140;ldot;LATIN SMALL LETTER L WITH MIDDLE DOT
-003C;less;LESS-THAN SIGN
-2264;lessequal;LESS-THAN OR EQUAL TO
-258C;lfblock;LEFT HALF BLOCK
-20A4;lira;LIRA SIGN
-2227;logicaland;LOGICAL AND
-00AC;logicalnot;NOT SIGN
-2228;logicalor;LOGICAL OR
-017F;longs;LATIN SMALL LETTER LONG S
-25CA;lozenge;LOZENGE
-0142;lslash;LATIN SMALL LETTER L WITH STROKE
-2591;ltshade;LIGHT SHADE
-006D;m;LATIN SMALL LETTER M
-00AF;macron;MACRON
-2642;male;MALE SIGN
-2212;minus;MINUS SIGN
-2032;minute;PRIME
-00B5;mu;MICRO SIGN
-00D7;multiply;MULTIPLICATION SIGN
-266A;musicalnote;EIGHTH NOTE
-266B;musicalnotedbl;BEAMED EIGHTH NOTES
-006E;n;LATIN SMALL LETTER N
-0144;nacute;LATIN SMALL LETTER N WITH ACUTE
-0149;napostrophe;LATIN SMALL LETTER N PRECEDED BY APOSTROPHE
-0148;ncaron;LATIN SMALL LETTER N WITH CARON
-0039;nine;DIGIT NINE
-2209;notelement;NOT AN ELEMENT OF
-2260;notequal;NOT EQUAL TO
-2284;notsubset;NOT A SUBSET OF
-00F1;ntilde;LATIN SMALL LETTER N WITH TILDE
-03BD;nu;GREEK SMALL LETTER NU
-0023;numbersign;NUMBER SIGN
-006F;o;LATIN SMALL LETTER O
-00F3;oacute;LATIN SMALL LETTER O WITH ACUTE
-014F;obreve;LATIN SMALL LETTER O WITH BREVE
-00F4;ocircumflex;LATIN SMALL LETTER O WITH CIRCUMFLEX
-00F6;odieresis;LATIN SMALL LETTER O WITH DIAERESIS
-0153;oe;LATIN SMALL LIGATURE OE
-02DB;ogonek;OGONEK
-00F2;ograve;LATIN SMALL LETTER O WITH GRAVE
-01A1;ohorn;LATIN SMALL LETTER O WITH HORN
-0151;ohungarumlaut;LATIN SMALL LETTER O WITH DOUBLE ACUTE
-014D;omacron;LATIN SMALL LETTER O WITH MACRON
-03C9;omega;GREEK SMALL LETTER OMEGA
-03D6;omega1;GREEK PI SYMBOL
-03CE;omegatonos;GREEK SMALL LETTER OMEGA WITH TONOS
-03BF;omicron;GREEK SMALL LETTER OMICRON
-03CC;omicrontonos;GREEK SMALL LETTER OMICRON WITH TONOS
-0031;one;DIGIT ONE
-2024;onedotenleader;ONE DOT LEADER
-215B;oneeighth;VULGAR FRACTION ONE EIGHTH
-00BD;onehalf;VULGAR FRACTION ONE HALF
-00BC;onequarter;VULGAR FRACTION ONE QUARTER
-2153;onethird;VULGAR FRACTION ONE THIRD
-25E6;openbullet;WHITE BULLET
-00AA;ordfeminine;FEMININE ORDINAL INDICATOR
-00BA;ordmasculine;MASCULINE ORDINAL INDICATOR
-221F;orthogonal;RIGHT ANGLE
-00F8;oslash;LATIN SMALL LETTER O WITH STROKE
-01FF;oslashacute;LATIN SMALL LETTER O WITH STROKE AND ACUTE
-00F5;otilde;LATIN SMALL LETTER O WITH TILDE
-0070;p;LATIN SMALL LETTER P
-00B6;paragraph;PILCROW SIGN
-0028;parenleft;LEFT PARENTHESIS
-0029;parenright;RIGHT PARENTHESIS
-2202;partialdiff;PARTIAL DIFFERENTIAL
-0025;percent;PERCENT SIGN
-002E;period;FULL STOP
-00B7;periodcentered;MIDDLE DOT
-22A5;perpendicular;UP TACK
-2030;perthousand;PER MILLE SIGN
-20A7;peseta;PESETA SIGN
-03C6;phi;GREEK SMALL LETTER PHI
-03D5;phi1;GREEK PHI SYMBOL
-03C0;pi;GREEK SMALL LETTER PI
-002B;plus;PLUS SIGN
-00B1;plusminus;PLUS-MINUS SIGN
-211E;prescription;PRESCRIPTION TAKE
-220F;product;N-ARY PRODUCT
-2282;propersubset;SUBSET OF
-2283;propersuperset;SUPERSET OF
-221D;proportional;PROPORTIONAL TO
-03C8;psi;GREEK SMALL LETTER PSI
-0071;q;LATIN SMALL LETTER Q
-003F;question;QUESTION MARK
-00BF;questiondown;INVERTED QUESTION MARK
-0022;quotedbl;QUOTATION MARK
-201E;quotedblbase;DOUBLE LOW-9 QUOTATION MARK
-201C;quotedblleft;LEFT DOUBLE QUOTATION MARK
-201D;quotedblright;RIGHT DOUBLE QUOTATION MARK
-2018;quoteleft;LEFT SINGLE QUOTATION MARK
-201B;quotereversed;SINGLE HIGH-REVERSED-9 QUOTATION MARK
-2019;quoteright;RIGHT SINGLE QUOTATION MARK
-201A;quotesinglbase;SINGLE LOW-9 QUOTATION MARK
-0027;quotesingle;APOSTROPHE
-0072;r;LATIN SMALL LETTER R
-0155;racute;LATIN SMALL LETTER R WITH ACUTE
-221A;radical;SQUARE ROOT
-0159;rcaron;LATIN SMALL LETTER R WITH CARON
-2286;reflexsubset;SUBSET OF OR EQUAL TO
-2287;reflexsuperset;SUPERSET OF OR EQUAL TO
-00AE;registered;REGISTERED SIGN
-2310;revlogicalnot;REVERSED NOT SIGN
-03C1;rho;GREEK SMALL LETTER RHO
-02DA;ring;RING ABOVE
-2590;rtblock;RIGHT HALF BLOCK
-0073;s;LATIN SMALL LETTER S
-015B;sacute;LATIN SMALL LETTER S WITH ACUTE
-0161;scaron;LATIN SMALL LETTER S WITH CARON
-015F;scedilla;LATIN SMALL LETTER S WITH CEDILLA
-015D;scircumflex;LATIN SMALL LETTER S WITH CIRCUMFLEX
-2033;second;DOUBLE PRIME
-00A7;section;SECTION SIGN
-003B;semicolon;SEMICOLON
-0037;seven;DIGIT SEVEN
-215E;seveneighths;VULGAR FRACTION SEVEN EIGHTHS
-2592;shade;MEDIUM SHADE
-03C3;sigma;GREEK SMALL LETTER SIGMA
-03C2;sigma1;GREEK SMALL LETTER FINAL SIGMA
-223C;similar;TILDE OPERATOR
-0036;six;DIGIT SIX
-002F;slash;SOLIDUS
-263A;smileface;WHITE SMILING FACE
-0020;space;SPACE
-2660;spade;BLACK SPADE SUIT
-00A3;sterling;POUND SIGN
-220B;suchthat;CONTAINS AS MEMBER
-2211;summation;N-ARY SUMMATION
-263C;sun;WHITE SUN WITH RAYS
-0074;t;LATIN SMALL LETTER T
-03C4;tau;GREEK SMALL LETTER TAU
-0167;tbar;LATIN SMALL LETTER T WITH STROKE
-0165;tcaron;LATIN SMALL LETTER T WITH CARON
-2234;therefore;THEREFORE
-03B8;theta;GREEK SMALL LETTER THETA
-03D1;theta1;GREEK THETA SYMBOL
-00FE;thorn;LATIN SMALL LETTER THORN
-0033;three;DIGIT THREE
-215C;threeeighths;VULGAR FRACTION THREE EIGHTHS
-00BE;threequarters;VULGAR FRACTION THREE QUARTERS
-02DC;tilde;SMALL TILDE
-0303;tildecomb;COMBINING TILDE
-0384;tonos;GREEK TONOS
-2122;trademark;TRADE MARK SIGN
-25BC;triagdn;BLACK DOWN-POINTING TRIANGLE
-25C4;triaglf;BLACK LEFT-POINTING POINTER
-25BA;triagrt;BLACK RIGHT-POINTING POINTER
-25B2;triagup;BLACK UP-POINTING TRIANGLE
-0032;two;DIGIT TWO
-2025;twodotenleader;TWO DOT LEADER
-2154;twothirds;VULGAR FRACTION TWO THIRDS
-0075;u;LATIN SMALL LETTER U
-00FA;uacute;LATIN SMALL LETTER U WITH ACUTE
-016D;ubreve;LATIN SMALL LETTER U WITH BREVE
-00FB;ucircumflex;LATIN SMALL LETTER U WITH CIRCUMFLEX
-00FC;udieresis;LATIN SMALL LETTER U WITH DIAERESIS
-00F9;ugrave;LATIN SMALL LETTER U WITH GRAVE
-01B0;uhorn;LATIN SMALL LETTER U WITH HORN
-0171;uhungarumlaut;LATIN SMALL LETTER U WITH DOUBLE ACUTE
-016B;umacron;LATIN SMALL LETTER U WITH MACRON
-005F;underscore;LOW LINE
-2017;underscoredbl;DOUBLE LOW LINE
-222A;union;UNION
-2200;universal;FOR ALL
-0173;uogonek;LATIN SMALL LETTER U WITH OGONEK
-2580;upblock;UPPER HALF BLOCK
-03C5;upsilon;GREEK SMALL LETTER UPSILON
-03CB;upsilondieresis;GREEK SMALL LETTER UPSILON WITH DIALYTIKA
-03B0;upsilondieresistonos;GREEK SMALL LETTER UPSILON WITH DIALYTIKA AND TONOS
-03CD;upsilontonos;GREEK SMALL LETTER UPSILON WITH TONOS
-016F;uring;LATIN SMALL LETTER U WITH RING ABOVE
-0169;utilde;LATIN SMALL LETTER U WITH TILDE
-0076;v;LATIN SMALL LETTER V
-0077;w;LATIN SMALL LETTER W
-1E83;wacute;LATIN SMALL LETTER W WITH ACUTE
-0175;wcircumflex;LATIN SMALL LETTER W WITH CIRCUMFLEX
-1E85;wdieresis;LATIN SMALL LETTER W WITH DIAERESIS
-2118;weierstrass;SCRIPT CAPITAL P
-1E81;wgrave;LATIN SMALL LETTER W WITH GRAVE
-0078;x;LATIN SMALL LETTER X
-03BE;xi;GREEK SMALL LETTER XI
-0079;y;LATIN SMALL LETTER Y
-00FD;yacute;LATIN SMALL LETTER Y WITH ACUTE
-0177;ycircumflex;LATIN SMALL LETTER Y WITH CIRCUMFLEX
-00FF;ydieresis;LATIN SMALL LETTER Y WITH DIAERESIS
-00A5;yen;YEN SIGN
-1EF3;ygrave;LATIN SMALL LETTER Y WITH GRAVE
-007A;z;LATIN SMALL LETTER Z
-017A;zacute;LATIN SMALL LETTER Z WITH ACUTE
-017E;zcaron;LATIN SMALL LETTER Z WITH CARON
-017C;zdotaccent;LATIN SMALL LETTER Z WITH DOT ABOVE
-0030;zero;DIGIT ZERO
-03B6;zeta;GREEK SMALL LETTER ZETA
-# END
-"""
-
-
-class AGLError(Exception):
-    pass
-
-
-LEGACY_AGL2UV = {}
-AGL2UV = {}
-UV2AGL = {}
-
-
-def _builddicts():
-    import re
-
-    lines = _aglText.splitlines()
-
-    parseAGL_RE = 
re.compile("([A-Za-z0-9]+);((?:[0-9A-F]{4})(?: (?:[0-9A-F]{4}))*)$")
-
-    for line in lines:
-        if not line or line[:1] == "#":
-            continue
-        m = parseAGL_RE.match(line)
-        if not m:
-            raise AGLError("syntax error in glyphlist.txt: %s" % repr(line[:20]))
-        unicodes = m.group(2)
-        assert len(unicodes) % 5 == 4
-        unicodes = [int(unicode, 16) for unicode in unicodes.split()]
-        glyphName = tostr(m.group(1))
-        LEGACY_AGL2UV[glyphName] = unicodes
-
-    lines = _aglfnText.splitlines()
-
-    parseAGLFN_RE = re.compile("([0-9A-F]{4});([A-Za-z0-9]+);.*?$")
-
-    for line in lines:
-        if not line or line[:1] == "#":
-            continue
-        m = parseAGLFN_RE.match(line)
-        if not m:
-            raise AGLError("syntax error in aglfn.txt: %s" % repr(line[:20]))
-        unicode = m.group(1)
-        assert len(unicode) == 4
-        unicode = int(unicode, 16)
-        glyphName = tostr(m.group(2))
-        AGL2UV[glyphName] = unicode
-        UV2AGL[unicode] = glyphName
-
-
-_builddicts()
-
-
-def toUnicode(glyph, isZapfDingbats=False):
-    """Convert glyph names to Unicode, such as ``'longs_t.oldstyle'`` --> ``u'ΕΏt'``
-
-    If ``isZapfDingbats`` is ``True``, the implementation recognizes additional
-    glyph names (as required by the AGL specification).
-    """
-    # https://github.com/adobe-type-tools/agl-specification#2-the-mapping
-    #
-    # 1. Drop all the characters from the glyph name starting with
-    #    the first occurrence of a period (U+002E; FULL STOP), if any.
-    glyph = glyph.split(".", 1)[0]
-
-    # 2. Split the remaining string into a sequence of components,
-    #    using underscore (U+005F; LOW LINE) as the delimiter.
-    components = glyph.split("_")
-
-    # 3. Map each component to a character string according to the
-    #    procedure below, and concatenate those strings; the result
-    #    is the character string to which the glyph name is mapped.
-    result = [_glyphComponentToUnicode(c, isZapfDingbats) for c in components]
-    return "".join(result)
-
-
-def _glyphComponentToUnicode(component, isZapfDingbats):
-    # If the font is Zapf Dingbats (PostScript FontName: ZapfDingbats),
-    # and the component is in the ITC Zapf Dingbats Glyph List, then
-    # map it to the corresponding character in that list.
-    dingbat = _zapfDingbatsToUnicode(component) if isZapfDingbats else None
-    if dingbat:
-        return dingbat
-
-    # Otherwise, if the component is in AGL, then map it
-    # to the corresponding character in that list.
-    uchars = LEGACY_AGL2UV.get(component)
-    if uchars:
-        return "".join(map(chr, uchars))
-
-    # Otherwise, if the component is of the form "uni" (U+0075,
-    # U+006E, and U+0069) followed by a sequence of uppercase
-    # hexadecimal digits (0–9 and A–F, meaning U+0030 through
-    # U+0039 and U+0041 through U+0046), if the length of that
-    # sequence is a multiple of four, and if each group of four
-    # digits represents a value in the ranges 0000 through D7FF
-    # or E000 through FFFF, then interpret each as a Unicode scalar
-    # value and map the component to the string made of those
-    # scalar values. Note that the range and digit-length
-    # restrictions mean that the "uni" glyph name prefix can be
-    # used only with UVs in the Basic Multilingual Plane (BMP).
-    uni = _uniToUnicode(component)
-    if uni:
-        return uni
-
-    # Otherwise, if the component is of the form "u" (U+0075)
-    # followed by a sequence of four to six uppercase hexadecimal
-    # digits (0–9 and A–F, meaning U+0030 through U+0039 and
-    # U+0041 through U+0046), and those digits represents a value
-    # in the ranges 0000 through D7FF or E000 through 10FFFF, then
-    # interpret it as a Unicode scalar value and map the component
-    # to the string made of this scalar value.
-    uni = _uToUnicode(component)
-    if uni:
-        return uni
-
-    # Otherwise, map the component to an empty string.
-    return ""
-
-
-# https://github.com/adobe-type-tools/agl-aglfn/blob/master/zapfdingbats.txt
-_AGL_ZAPF_DINGBATS = (
-    " βœβœ‚βœ„β˜Žβœ†βœβœžβœŸβœ βœ‘β˜›β˜žβœŒβœβœŽβœβœ‘βœ’βœ“βœ”βœ•βœ–βœ—βœ˜βœ™βœšβœ›βœœβœ’βœ£βœ€βœ₯βœ¦βœ§β˜…βœ©βœͺ✫✬✭βœβœ―βœ°βœ±βœ²βœ³βœ΄βœ΅βœΆβœ·βœΈβœΉβœΊβœ»βœΌβœ½βœΎβœΏβ€"
-    "ββ‚βƒβ„β…β†β‡βˆβ‰βŠβ‹β—ββ– ββ‘β–²β–Όβ—†β– β——β˜β™βšβ―β±β²β³β¨β©β¬β­βͺβ«β΄β΅β›βœββžβ‘β’β£β€βœβ₯❦❧♠β™₯♦♣ βœ‰βœˆβœ‡"
-    "β‘ β‘‘β‘’β‘£β‘€β‘₯β‘¦β‘§β‘¨β‘©βΆβ·βΈβΉβΊβ»βΌβ½βΎβΏβž€βžβž‚βžƒβž„βž…βž†βž‡βžˆβž‰βžŠβž‹βžŒβžβžŽβžβžβž‘βž’βž“βž”β†’βž£β†”"
-    "β†•βž™βž›βžœβžβžžβžŸβž βž‘βž’βž€βž₯➦➧➨➩➫➭➯➲➳➡➸➺➻➼➽➾➚βžͺ➢➹➘➴➷➬βžβž±βœƒββ’ββ°"
-)
-
-
-def _zapfDingbatsToUnicode(glyph):
-    """Helper for toUnicode()."""
-    if len(glyph) < 2 or glyph[0] != "a":
-        return None
-    try:
-        gid = int(glyph[1:])
-    except ValueError:
-        return None
-    if gid < 0 or gid >= len(_AGL_ZAPF_DINGBATS):
-        return None
-    uchar = _AGL_ZAPF_DINGBATS[gid]
-    return uchar if uchar != " " else None
-
-
-_re_uni = re.compile("^uni([0-9A-F]+)$")
-
-
-def _uniToUnicode(component):
-    """Helper for toUnicode() to handle "uniABCD" components."""
-    match = _re_uni.match(component)
-    if match is None:
-        return None
-    digits = match.group(1)
-    if len(digits) % 4 != 0:
-        return None
-    chars = [int(digits[i : i + 4], 16) for i in range(0, len(digits), 4)]
-    if any(c >= 0xD800 and c <= 0xDFFF for c in chars):
-        # The AGL specification explicitly excluded surrogate pairs.
-        return None
-    return "".join([chr(c) for c in chars])
-
-
-_re_u = re.compile("^u([0-9A-F]{4,6})$")
-
-
-def _uToUnicode(component):
-    """Helper for toUnicode() to handle "u1ABCD" components."""
-    match = _re_u.match(component)
-    if match is None:
-        return None
-    digits = match.group(1)
-    try:
-        value = int(digits, 16)
-    except ValueError:
-        return None
-    if (value >= 0x0000 and value <= 0xD7FF) or (value >= 0xE000 and value <= 0x10FFFF):
-        return chr(value)
-    return None
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/annotations.py b/pptx-env/lib/python3.12/site-packages/fontTools/annotations.py
deleted file mode 100644
index 5ff5972e..00000000
--- a/pptx-env/lib/python3.12/site-packages/fontTools/annotations.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from __future__ import annotations
-from typing import TYPE_CHECKING, Iterable, Optional, TypeVar, Union
-from collections.abc import Callable, Sequence
-from fontTools.misc.filesystem._base import FS
-from os import PathLike
-from xml.etree.ElementTree import Element as ElementTreeElement
-
-if TYPE_CHECKING:
-    from fontTools.ufoLib import UFOFormatVersion
-    from fontTools.ufoLib.glifLib import GLIFFormatVersion
-    from lxml.etree import _Element as LxmlElement
-
-
-T = TypeVar("T")  # Generic type
-K = TypeVar("K")  # Generic dict key type
-V = TypeVar("V")  # Generic dict value type
-
-GlyphNameToFileNameFunc = Optional[Callable[[str, set[str]], str]]
-ElementType = Union[ElementTreeElement, "LxmlElement"]
-FormatVersion = Union[int, tuple[int, int]]
-FormatVersions = Optional[Iterable[FormatVersion]]
-GLIFFormatVersionInput = Optional[Union[int, tuple[int, int], "GLIFFormatVersion"]]
-UFOFormatVersionInput = Optional[Union[int, tuple[int, int], "UFOFormatVersion"]]
-IntFloat = Union[int, float]
-KerningPair = tuple[str, str]
-KerningDict = dict[KerningPair, IntFloat]
-KerningGroups = dict[str, Sequence[str]]
-KerningNested = dict[str, dict[str, IntFloat]]
-PathStr = Union[str, PathLike[str]]
-PathOrFS = 
Union[PathStr, FS] diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFF2ToCFF.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFF2ToCFF.py deleted file mode 100644 index e0ec956b..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFF2ToCFF.py +++ /dev/null @@ -1,233 +0,0 @@ -"""CFF2 to CFF converter.""" - -from fontTools.ttLib import TTFont, newTable -from fontTools.misc.cliTools import makeOutputFileName -from fontTools.misc.psCharStrings import T2StackUseExtractor -from fontTools.cffLib import ( - TopDictIndex, - buildOrder, - buildDefaults, - topDictOperators, - privateDictOperators, - FDSelect, -) -from .transforms import desubroutinizeCharString -from .specializer import specializeProgram -from .width import optimizeWidths -from collections import defaultdict -import logging - - -__all__ = ["convertCFF2ToCFF", "main"] - - -log = logging.getLogger("fontTools.cffLib") - - -def _convertCFF2ToCFF(cff, otFont): - """Converts this object from CFF2 format to CFF format. This conversion - is done 'in-place'. The conversion cannot be reversed. - - The CFF2 font cannot be variable. (TODO Accept those and convert to the - default instance?) - - This assumes a decompiled CFF2 table. (i.e. that the object has been - filled via :meth:`decompile` and e.g. 
not loaded from XML.)""" - - cff.major = 1 - - topDictData = TopDictIndex(None) - for item in cff.topDictIndex: - # Iterate over, such that all are decompiled - item.cff2GetGlyphOrder = None - topDictData.append(item) - cff.topDictIndex = topDictData - topDict = topDictData[0] - - if hasattr(topDict, "VarStore"): - raise ValueError("Variable CFF2 font cannot be converted to CFF format.") - - opOrder = buildOrder(topDictOperators) - topDict.order = opOrder - for key in topDict.rawDict.keys(): - if key not in opOrder: - del topDict.rawDict[key] - if hasattr(topDict, key): - delattr(topDict, key) - - charStrings = topDict.CharStrings - - fdArray = topDict.FDArray - if not hasattr(topDict, "FDSelect"): - # FDSelect is optional in CFF2, but required in CFF. - fdSelect = topDict.FDSelect = FDSelect() - fdSelect.gidArray = [0] * len(charStrings.charStrings) - - defaults = buildDefaults(privateDictOperators) - order = buildOrder(privateDictOperators) - for fd in fdArray: - fd.setCFF2(False) - privateDict = fd.Private - privateDict.order = order - for key in order: - if key not in privateDict.rawDict and key in defaults: - privateDict.rawDict[key] = defaults[key] - for key in privateDict.rawDict.keys(): - if key not in order: - del privateDict.rawDict[key] - if hasattr(privateDict, key): - delattr(privateDict, key) - - # Add ending operators - for cs in charStrings.values(): - cs.decompile() - cs.program.append("endchar") - for subrSets in [cff.GlobalSubrs] + [ - getattr(fd.Private, "Subrs", []) for fd in fdArray - ]: - for cs in subrSets: - cs.program.append("return") - - # Add (optimal) width to CharStrings that need it. 
- widths = defaultdict(list) - metrics = otFont["hmtx"].metrics - for glyphName in charStrings.keys(): - cs, fdIndex = charStrings.getItemAndSelector(glyphName) - if fdIndex is None: - fdIndex = 0 - widths[fdIndex].append(metrics[glyphName][0]) - for fdIndex, widthList in widths.items(): - bestDefault, bestNominal = optimizeWidths(widthList) - private = fdArray[fdIndex].Private - private.defaultWidthX = bestDefault - private.nominalWidthX = bestNominal - for glyphName in charStrings.keys(): - cs, fdIndex = charStrings.getItemAndSelector(glyphName) - if fdIndex is None: - fdIndex = 0 - private = fdArray[fdIndex].Private - width = metrics[glyphName][0] - if width != private.defaultWidthX: - cs.program.insert(0, width - private.nominalWidthX) - - # Handle stack use since stack-depth is lower in CFF than in CFF2. - for glyphName in charStrings.keys(): - cs, fdIndex = charStrings.getItemAndSelector(glyphName) - if fdIndex is None: - fdIndex = 0 - private = fdArray[fdIndex].Private - extractor = T2StackUseExtractor( - getattr(private, "Subrs", []), cff.GlobalSubrs, private=private - ) - stackUse = extractor.execute(cs) - if stackUse > 48: # CFF stack depth is 48 - desubroutinizeCharString(cs) - cs.program = specializeProgram(cs.program) - - # Unused subroutines are still in CFF2 (i.e. lacking 'return' operator) - # because they were not decompiled when we added the 'return'. - # Moreover, some used subroutines may have become unused after the - # stack-use fixup. So we remove all unused subroutines now. 
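The width pass above applies the CFF charstring width rule: a glyph whose advance width equals ``defaultWidthX`` encodes no width at all, while any other glyph prepends ``width - nominalWidthX`` to its program. A minimal standalone sketch of that decision (the helper name is illustrative, not a cffLib API):

```python
def encode_charstring_width(width, default_width_x, nominal_width_x):
    """Return the optional width operand for a CFF Type 2 charstring.

    None means the width is omitted (it equals defaultWidthX); otherwise
    the charstring program starts with (width - nominalWidthX).
    """
    if width == default_width_x:
        return None
    return width - nominal_width_x

# With defaultWidthX=500 and nominalWidthX=600: a 500-unit glyph
# encodes nothing; a 650-unit glyph starts with the operand 50.
```

This is also why ``optimizeWidths`` picks the default/nominal pair: it chooses the values so that as many widths as possible encode in zero or few bytes.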
- cff.remove_unused_subroutines() - - mapping = { - name: ("cid" + str(n).zfill(5) if n else ".notdef") - for n, name in enumerate(topDict.charset) - } - topDict.charset = [ - "cid" + str(n).zfill(5) if n else ".notdef" for n in range(len(topDict.charset)) - ] - charStrings.charStrings = { - mapping[name]: v for name, v in charStrings.charStrings.items() - } - - topDict.ROS = ("Adobe", "Identity", 0) - - -def convertCFF2ToCFF(font, *, updatePostTable=True): - if "CFF2" not in font: - raise ValueError("Input font does not contain a CFF2 table.") - cff = font["CFF2"].cff - _convertCFF2ToCFF(cff, font) - del font["CFF2"] - table = font["CFF "] = newTable("CFF ") - table.cff = cff - - if updatePostTable and "post" in font: - # Only version supported for fonts with CFF table is 0x00030000 not 0x20000 - post = font["post"] - if post.formatType == 2.0: - post.formatType = 3.0 - - -def main(args=None): - """Convert CFF2 OTF font to CFF OTF font""" - if args is None: - import sys - - args = sys.argv[1:] - - import argparse - - parser = argparse.ArgumentParser( - "fonttools cffLib.CFF2ToCFF", - description="Convert a non-variable CFF2 font to CFF.", - ) - parser.add_argument( - "input", metavar="INPUT.ttf", help="Input OTF file with CFF table." - ) - parser.add_argument( - "-o", - "--output", - metavar="OUTPUT.ttf", - default=None, - help="Output instance OTF file (default: INPUT-CFF2.ttf).", - ) - parser.add_argument( - "--no-recalc-timestamp", - dest="recalc_timestamp", - action="store_false", - help="Don't set the output font's timestamp to the current time.", - ) - loggingGroup = parser.add_mutually_exclusive_group(required=False) - loggingGroup.add_argument( - "-v", "--verbose", action="store_true", help="Run more verbosely." - ) - loggingGroup.add_argument( - "-q", "--quiet", action="store_true", help="Turn verbosity off." 
- ) - options = parser.parse_args(args) - - from fontTools import configLogger - - configLogger( - level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO") - ) - - import os - - infile = options.input - if not os.path.isfile(infile): - parser.error("No such file '{}'".format(infile)) - - outfile = ( - makeOutputFileName(infile, overWrite=True, suffix="-CFF") - if not options.output - else options.output - ) - - font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False) - - convertCFF2ToCFF(font) - - log.info( - "Saving %s", - outfile, - ) - font.save(outfile) - - -if __name__ == "__main__": - import sys - - sys.exit(main(sys.argv[1:])) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFFToCFF2.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFFToCFF2.py deleted file mode 100644 index 2555f0b2..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/CFFToCFF2.py +++ /dev/null @@ -1,305 +0,0 @@ -"""CFF to CFF2 converter.""" - -from fontTools.ttLib import TTFont, newTable -from fontTools.misc.cliTools import makeOutputFileName -from fontTools.misc.psCharStrings import T2WidthExtractor -from fontTools.cffLib import ( - TopDictIndex, - FDArrayIndex, - FontDict, - buildOrder, - topDictOperators, - privateDictOperators, - topDictOperators2, - privateDictOperators2, -) -from io import BytesIO -import logging - -__all__ = ["convertCFFToCFF2", "main"] - - -log = logging.getLogger("fontTools.cffLib") - - -class _NominalWidthUsedError(Exception): - def __add__(self, other): - raise self - - def __radd__(self, other): - raise self - - -def _convertCFFToCFF2(cff, otFont): - """Converts this object from CFF format to CFF2 format. This conversion - is done 'in-place'. The conversion cannot be reversed. - - This assumes a decompiled CFF table. (i.e. that the object has been - filled via :meth:`decompile` and e.g. 
not loaded from XML.)""" - - # Clean up T2CharStrings - - topDict = cff.topDictIndex[0] - fdArray = topDict.FDArray if hasattr(topDict, "FDArray") else None - charStrings = topDict.CharStrings - globalSubrs = cff.GlobalSubrs - localSubrs = ( - [getattr(fd.Private, "Subrs", []) for fd in fdArray] - if fdArray - else ( - [topDict.Private.Subrs] - if hasattr(topDict, "Private") and hasattr(topDict.Private, "Subrs") - else [] - ) - ) - - for glyphName in charStrings.keys(): - cs, fdIndex = charStrings.getItemAndSelector(glyphName) - cs.decompile() - - # Clean up subroutines first - for subrs in [globalSubrs] + localSubrs: - for subr in subrs: - program = subr.program - i = j = len(program) - try: - i = program.index("return") - except ValueError: - pass - try: - j = program.index("endchar") - except ValueError: - pass - program[min(i, j) :] = [] - - # Clean up glyph charstrings - removeUnusedSubrs = False - nominalWidthXError = _NominalWidthUsedError() - for glyphName in charStrings.keys(): - cs, fdIndex = charStrings.getItemAndSelector(glyphName) - program = cs.program - - thisLocalSubrs = ( - localSubrs[fdIndex] - if fdIndex is not None - else ( - getattr(topDict.Private, "Subrs", []) - if hasattr(topDict, "Private") - else [] - ) - ) - - # Intentionally use custom type for nominalWidthX, such that any - # CharString that has an explicit width encoded will throw back to us. - extractor = T2WidthExtractor( - thisLocalSubrs, - globalSubrs, - nominalWidthXError, - 0, - ) - try: - extractor.execute(cs) - except _NominalWidthUsedError: - # Program has explicit width. We want to drop it, but can't - # just pop the first number since it may be a subroutine call. - # Instead, when seeing that, we embed the subroutine and recurse. - # If this ever happened, we later prune unused subroutines. 
- while len(program) >= 2 and program[1] in ["callsubr", "callgsubr"]: - removeUnusedSubrs = True - subrNumber = program.pop(0) - assert isinstance(subrNumber, int), subrNumber - op = program.pop(0) - bias = extractor.localBias if op == "callsubr" else extractor.globalBias - subrNumber += bias - subrSet = thisLocalSubrs if op == "callsubr" else globalSubrs - subrProgram = subrSet[subrNumber].program - program[:0] = subrProgram - # Now pop the actual width - assert len(program) >= 1, program - program.pop(0) - - if program and program[-1] == "endchar": - program.pop() - - if removeUnusedSubrs: - cff.remove_unused_subroutines() - - # Upconvert TopDict - - cff.major = 2 - cff2GetGlyphOrder = cff.otFont.getGlyphOrder - topDictData = TopDictIndex(None, cff2GetGlyphOrder) - for item in cff.topDictIndex: - # Iterate over, such that all are decompiled - topDictData.append(item) - cff.topDictIndex = topDictData - topDict = topDictData[0] - if hasattr(topDict, "Private"): - privateDict = topDict.Private - else: - privateDict = None - opOrder = buildOrder(topDictOperators2) - topDict.order = opOrder - topDict.cff2GetGlyphOrder = cff2GetGlyphOrder - - if not hasattr(topDict, "FDArray"): - fdArray = topDict.FDArray = FDArrayIndex() - fdArray.strings = None - fdArray.GlobalSubrs = topDict.GlobalSubrs - topDict.GlobalSubrs.fdArray = fdArray - charStrings = topDict.CharStrings - if charStrings.charStringsAreIndexed: - charStrings.charStringsIndex.fdArray = fdArray - else: - charStrings.fdArray = fdArray - fontDict = FontDict() - fontDict.setCFF2(True) - fdArray.append(fontDict) - fontDict.Private = privateDict - privateOpOrder = buildOrder(privateDictOperators2) - if privateDict is not None: - for entry in privateDictOperators: - key = entry[1] - if key not in privateOpOrder: - if key in privateDict.rawDict: - # print "Removing private dict", key - del privateDict.rawDict[key] - if hasattr(privateDict, key): - delattr(privateDict, key) - # print "Removing privateDict attr", key - 
else: - # clean up the PrivateDicts in the fdArray - fdArray = topDict.FDArray - privateOpOrder = buildOrder(privateDictOperators2) - for fontDict in fdArray: - fontDict.setCFF2(True) - for key in list(fontDict.rawDict.keys()): - if key not in fontDict.order: - del fontDict.rawDict[key] - if hasattr(fontDict, key): - delattr(fontDict, key) - - privateDict = fontDict.Private - for entry in privateDictOperators: - key = entry[1] - if key not in privateOpOrder: - if key in list(privateDict.rawDict.keys()): - # print "Removing private dict", key - del privateDict.rawDict[key] - if hasattr(privateDict, key): - delattr(privateDict, key) - # print "Removing privateDict attr", key - - # Now delete the deprecated topDict operators from CFF 1.0 - for entry in topDictOperators: - key = entry[1] - # We seem to need to keep the charset operator for now, - # or we fail to compile with some fonts, like AdditionFont.otf. - # I don't know which kind of CFF font those are. But keeping - # charset seems to work. It will be removed when we save and - # read the font again. - # - # AdditionFont.otf has . - if key == "charset": - continue - if key not in opOrder: - if key in topDict.rawDict: - del topDict.rawDict[key] - if hasattr(topDict, key): - delattr(topDict, key) - - # TODO(behdad): What does the following comment even mean? Both CFF and CFF2 - # use the same T2Charstring class. I *think* what it means is that the CharStrings - # were loaded for CFF1, and we need to reload them for CFF2 to set varstore, etc - # on them. At least that's what I understand. It's probably safe to remove this - # and just set vstore where needed. - # - # See comment above about charset as well. 
- - # At this point, the Subrs and Charstrings are all still T2Charstring class - # easiest to fix this by compiling, then decompiling again - file = BytesIO() - cff.compile(file, otFont, isCFF2=True) - file.seek(0) - cff.decompile(file, otFont, isCFF2=True) - - -def convertCFFToCFF2(font): - cff = font["CFF "].cff - del font["CFF "] - _convertCFFToCFF2(cff, font) - table = font["CFF2"] = newTable("CFF2") - table.cff = cff - - -def main(args=None): - """Convert CFF OTF font to CFF2 OTF font""" - if args is None: - import sys - - args = sys.argv[1:] - - import argparse - - parser = argparse.ArgumentParser( - "fonttools cffLib.CFFToCFF2", - description="Upgrade a CFF font to CFF2.", - ) - parser.add_argument( - "input", metavar="INPUT.ttf", help="Input OTF file with CFF table." - ) - parser.add_argument( - "-o", - "--output", - metavar="OUTPUT.ttf", - default=None, - help="Output instance OTF file (default: INPUT-CFF2.ttf).", - ) - parser.add_argument( - "--no-recalc-timestamp", - dest="recalc_timestamp", - action="store_false", - help="Don't set the output font's timestamp to the current time.", - ) - loggingGroup = parser.add_mutually_exclusive_group(required=False) - loggingGroup.add_argument( - "-v", "--verbose", action="store_true", help="Run more verbosely." - ) - loggingGroup.add_argument( - "-q", "--quiet", action="store_true", help="Turn verbosity off." 
- ) - options = parser.parse_args(args) - - from fontTools import configLogger - - configLogger( - level=("DEBUG" if options.verbose else "ERROR" if options.quiet else "INFO") - ) - - import os - - infile = options.input - if not os.path.isfile(infile): - parser.error("No such file '{}'".format(infile)) - - outfile = ( - makeOutputFileName(infile, overWrite=True, suffix="-CFF2") - if not options.output - else options.output - ) - - font = TTFont(infile, recalcTimestamp=options.recalc_timestamp, recalcBBoxes=False) - - convertCFFToCFF2(font) - - log.info( - "Saving %s", - outfile, - ) - font.save(outfile) - - -if __name__ == "__main__": - import sys - - sys.exit(main(sys.argv[1:])) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__init__.py deleted file mode 100644 index 4ad724a2..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__init__.py +++ /dev/null @@ -1,3694 +0,0 @@ -"""cffLib: read/write Adobe CFF fonts - -OpenType fonts with PostScript outlines embed a completely independent -font file in Adobe's *Compact Font Format*. So dealing with OpenType fonts -requires also dealing with CFF. This module allows you to read and write -fonts written in the CFF format. - -In 2016, OpenType 1.8 introduced the `CFF2 `_ -format which, along with other changes, extended the CFF format to deal with -the demands of variable fonts. This module parses both original CFF and CFF2. 
- -""" - -from fontTools.misc import sstruct -from fontTools.misc import psCharStrings -from fontTools.misc.arrayTools import unionRect, intRect -from fontTools.misc.textTools import ( - bytechr, - byteord, - bytesjoin, - tobytes, - tostr, - safeEval, -) -from fontTools.ttLib import TTFont -from fontTools.ttLib.tables.otBase import OTTableWriter -from fontTools.ttLib.tables.otBase import OTTableReader -from fontTools.ttLib.tables import otTables as ot -from io import BytesIO -import struct -import logging -import re - -# mute cffLib debug messages when running ttx in verbose mode -DEBUG = logging.DEBUG - 1 -log = logging.getLogger(__name__) - -cffHeaderFormat = """ - major: B - minor: B - hdrSize: B -""" - -maxStackLimit = 513 -# maxstack operator has been deprecated. max stack is now always 513. - - -class CFFFontSet(object): - """A CFF font "file" can contain more than one font, although this is - extremely rare (and not allowed within OpenType fonts). - - This class is the entry point for parsing a CFF table. To actually - manipulate the data inside the CFF font, you will want to access the - ``CFFFontSet``'s :class:`TopDict` object. To do this, a ``CFFFontSet`` - object can either be treated as a dictionary (with appropriate - ``keys()`` and ``values()`` methods) mapping font names to :class:`TopDict` - objects, or as a list. - - .. code:: python - - from fontTools import ttLib - tt = ttLib.TTFont("Tests/cffLib/data/LinLibertine_RBI.otf") - tt["CFF "].cff - # - tt["CFF "].cff[0] # Here's your actual font data - # - - """ - - def decompile(self, file, otFont, isCFF2=None): - """Parse a binary CFF file into an internal representation. ``file`` - should be a file handle object. ``otFont`` is the top-level - :py:class:`fontTools.ttLib.ttFont.TTFont` object containing this CFF file. - - If ``isCFF2`` is passed and set to ``True`` or ``False``, then the - library makes an assertion that the CFF header is of the appropriate - version. 
- """ - - self.otFont = otFont - sstruct.unpack(cffHeaderFormat, file.read(3), self) - if isCFF2 is not None: - # called from ttLib: assert 'major' as read from file matches the - # expected version - expected_major = 2 if isCFF2 else 1 - if self.major != expected_major: - raise ValueError( - "Invalid CFF 'major' version: expected %d, found %d" - % (expected_major, self.major) - ) - else: - # use 'major' version from file to determine if isCFF2 - assert self.major in (1, 2), "Unknown CFF format" - isCFF2 = self.major == 2 - if not isCFF2: - self.offSize = struct.unpack("B", file.read(1))[0] - file.seek(self.hdrSize) - self.fontNames = list(tostr(s) for s in Index(file, isCFF2=isCFF2)) - self.topDictIndex = TopDictIndex(file, isCFF2=isCFF2) - self.strings = IndexedStrings(file) - else: # isCFF2 - self.topDictSize = struct.unpack(">H", file.read(2))[0] - file.seek(self.hdrSize) - self.fontNames = ["CFF2Font"] - cff2GetGlyphOrder = otFont.getGlyphOrder - # in CFF2, offsetSize is the size of the TopDict data. - self.topDictIndex = TopDictIndex( - file, cff2GetGlyphOrder, self.topDictSize, isCFF2=isCFF2 - ) - self.strings = None - self.GlobalSubrs = GlobalSubrsIndex(file, isCFF2=isCFF2) - self.topDictIndex.strings = self.strings - self.topDictIndex.GlobalSubrs = self.GlobalSubrs - - def __len__(self): - return len(self.fontNames) - - def keys(self): - return list(self.fontNames) - - def values(self): - return self.topDictIndex - - def __getitem__(self, nameOrIndex): - """Return TopDict instance identified by name (str) or index (int - or any object that implements `__index__`). 
- """ - if hasattr(nameOrIndex, "__index__"): - index = nameOrIndex.__index__() - elif isinstance(nameOrIndex, str): - name = nameOrIndex - try: - index = self.fontNames.index(name) - except ValueError: - raise KeyError(nameOrIndex) - else: - raise TypeError(nameOrIndex) - return self.topDictIndex[index] - - def compile(self, file, otFont, isCFF2=None): - """Write the object back into binary representation onto the given file. - ``file`` should be a file handle object. ``otFont`` is the top-level - :py:class:`fontTools.ttLib.ttFont.TTFont` object containing this CFF file. - - If ``isCFF2`` is passed and set to ``True`` or ``False``, then the - library makes an assertion that the CFF header is of the appropriate - version. - """ - self.otFont = otFont - if isCFF2 is not None: - # called from ttLib: assert 'major' value matches expected version - expected_major = 2 if isCFF2 else 1 - if self.major != expected_major: - raise ValueError( - "Invalid CFF 'major' version: expected %d, found %d" - % (expected_major, self.major) - ) - else: - # use current 'major' value to determine output format - assert self.major in (1, 2), "Unknown CFF format" - isCFF2 = self.major == 2 - - if otFont.recalcBBoxes and not isCFF2: - for topDict in self.topDictIndex: - topDict.recalcFontBBox() - - if not isCFF2: - strings = IndexedStrings() - else: - strings = None - writer = CFFWriter(isCFF2) - topCompiler = self.topDictIndex.getCompiler(strings, self, isCFF2=isCFF2) - if isCFF2: - self.hdrSize = 5 - writer.add(sstruct.pack(cffHeaderFormat, self)) - # Note: topDictSize will most likely change in CFFWriter.toFile(). - self.topDictSize = topCompiler.getDataLength() - writer.add(struct.pack(">H", self.topDictSize)) - else: - self.hdrSize = 4 - self.offSize = 4 # will most likely change in CFFWriter.toFile(). 
- writer.add(sstruct.pack(cffHeaderFormat, self)) - writer.add(struct.pack("B", self.offSize)) - if not isCFF2: - fontNames = Index() - for name in self.fontNames: - fontNames.append(name) - writer.add(fontNames.getCompiler(strings, self, isCFF2=isCFF2)) - writer.add(topCompiler) - if not isCFF2: - writer.add(strings.getCompiler()) - writer.add(self.GlobalSubrs.getCompiler(strings, self, isCFF2=isCFF2)) - - for topDict in self.topDictIndex: - if not hasattr(topDict, "charset") or topDict.charset is None: - charset = otFont.getGlyphOrder() - topDict.charset = charset - children = topCompiler.getChildren(strings) - for child in children: - writer.add(child) - - writer.toFile(file) - - def toXML(self, xmlWriter): - """Write the object into XML representation onto the given - :class:`fontTools.misc.xmlWriter.XMLWriter`. - - .. code:: python - - writer = xmlWriter.XMLWriter(sys.stdout) - tt["CFF "].cff.toXML(writer) - - """ - - xmlWriter.simpletag("major", value=self.major) - xmlWriter.newline() - xmlWriter.simpletag("minor", value=self.minor) - xmlWriter.newline() - for fontName in self.fontNames: - xmlWriter.begintag("CFFFont", name=tostr(fontName)) - xmlWriter.newline() - font = self[fontName] - font.toXML(xmlWriter) - xmlWriter.endtag("CFFFont") - xmlWriter.newline() - xmlWriter.newline() - xmlWriter.begintag("GlobalSubrs") - xmlWriter.newline() - self.GlobalSubrs.toXML(xmlWriter) - xmlWriter.endtag("GlobalSubrs") - xmlWriter.newline() - - def fromXML(self, name, attrs, content, otFont=None): - """Reads data from the XML element into the ``CFFFontSet`` object.""" - self.otFont = otFont - - # set defaults. These will be replaced if there are entries for them - # in the XML file. - if not hasattr(self, "major"): - self.major = 1 - if not hasattr(self, "minor"): - self.minor = 0 - - if name == "CFFFont": - if self.major == 1: - if not hasattr(self, "offSize"): - # this will be recalculated when the cff is compiled. 
- self.offSize = 4 - if not hasattr(self, "hdrSize"): - self.hdrSize = 4 - if not hasattr(self, "GlobalSubrs"): - self.GlobalSubrs = GlobalSubrsIndex() - if not hasattr(self, "fontNames"): - self.fontNames = [] - self.topDictIndex = TopDictIndex() - fontName = attrs["name"] - self.fontNames.append(fontName) - topDict = TopDict(GlobalSubrs=self.GlobalSubrs) - topDict.charset = None # gets filled in later - elif self.major == 2: - if not hasattr(self, "hdrSize"): - self.hdrSize = 5 - if not hasattr(self, "GlobalSubrs"): - self.GlobalSubrs = GlobalSubrsIndex() - if not hasattr(self, "fontNames"): - self.fontNames = ["CFF2Font"] - cff2GetGlyphOrder = self.otFont.getGlyphOrder - topDict = TopDict( - GlobalSubrs=self.GlobalSubrs, cff2GetGlyphOrder=cff2GetGlyphOrder - ) - self.topDictIndex = TopDictIndex(None, cff2GetGlyphOrder) - self.topDictIndex.append(topDict) - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - topDict.fromXML(name, attrs, content) - - if hasattr(topDict, "VarStore") and topDict.FDArray[0].vstore is None: - fdArray = topDict.FDArray - for fontDict in fdArray: - if hasattr(fontDict, "Private"): - fontDict.Private.vstore = topDict.VarStore - - elif name == "GlobalSubrs": - subrCharStringClass = psCharStrings.T2CharString - if not hasattr(self, "GlobalSubrs"): - self.GlobalSubrs = GlobalSubrsIndex() - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - subr = subrCharStringClass() - subr.fromXML(name, attrs, content) - self.GlobalSubrs.append(subr) - elif name == "major": - self.major = int(attrs["value"]) - elif name == "minor": - self.minor = int(attrs["value"]) - - def convertCFFToCFF2(self, otFont): - from .CFFToCFF2 import _convertCFFToCFF2 - - _convertCFFToCFF2(self, otFont) - - def convertCFF2ToCFF(self, otFont): - from .CFF2ToCFF import _convertCFF2ToCFF - - _convertCFF2ToCFF(self, otFont) - - def desubroutinize(self): - from .transforms import 
desubroutinize - - desubroutinize(self) - - def remove_hints(self): - from .transforms import remove_hints - - remove_hints(self) - - def remove_unused_subroutines(self): - from .transforms import remove_unused_subroutines - - remove_unused_subroutines(self) - - -class CFFWriter(object): - """Helper class for serializing CFF data to binary. Used by - :meth:`CFFFontSet.compile`.""" - - def __init__(self, isCFF2): - self.data = [] - self.isCFF2 = isCFF2 - - def add(self, table): - self.data.append(table) - - def toFile(self, file): - lastPosList = None - count = 1 - while True: - log.log(DEBUG, "CFFWriter.toFile() iteration: %d", count) - count = count + 1 - pos = 0 - posList = [pos] - for item in self.data: - if hasattr(item, "getDataLength"): - endPos = pos + item.getDataLength() - if isinstance(item, TopDictIndexCompiler) and item.isCFF2: - self.topDictSize = item.getDataLength() - else: - endPos = pos + len(item) - if hasattr(item, "setPos"): - item.setPos(pos, endPos) - pos = endPos - posList.append(pos) - if posList == lastPosList: - break - lastPosList = posList - log.log(DEBUG, "CFFWriter.toFile() writing to file.") - begin = file.tell() - if self.isCFF2: - self.data[1] = struct.pack(">H", self.topDictSize) - else: - self.offSize = calcOffSize(lastPosList[-1]) - self.data[1] = struct.pack("B", self.offSize) - posList = [0] - for item in self.data: - if hasattr(item, "toFile"): - item.toFile(file) - else: - file.write(item) - posList.append(file.tell() - begin) - assert posList == lastPosList - - -def calcOffSize(largestOffset): - if largestOffset < 0x100: - offSize = 1 - elif largestOffset < 0x10000: - offSize = 2 - elif largestOffset < 0x1000000: - offSize = 3 - else: - offSize = 4 - return offSize - - -class IndexCompiler(object): - """Base class for writing CFF `INDEX data `_ - to binary.""" - - def __init__(self, items, strings, parent, isCFF2=None): - if isCFF2 is None and hasattr(parent, "isCFF2"): - isCFF2 = parent.isCFF2 - assert isCFF2 is not None - 
self.isCFF2 = isCFF2 - self.items = self.getItems(items, strings) - self.parent = parent - - def getItems(self, items, strings): - return items - - def getOffsets(self): - # An empty INDEX contains only the count field. - if self.items: - pos = 1 - offsets = [pos] - for item in self.items: - if hasattr(item, "getDataLength"): - pos = pos + item.getDataLength() - else: - pos = pos + len(item) - offsets.append(pos) - else: - offsets = [] - return offsets - - def getDataLength(self): - if self.isCFF2: - countSize = 4 - else: - countSize = 2 - - if self.items: - lastOffset = self.getOffsets()[-1] - offSize = calcOffSize(lastOffset) - dataLength = ( - countSize - + 1 # count - + (len(self.items) + 1) * offSize # offSize - + lastOffset # the offsets - - 1 # size of object data - ) - else: - # count. For empty INDEX tables, this is the only entry. - dataLength = countSize - - return dataLength - - def toFile(self, file): - offsets = self.getOffsets() - if self.isCFF2: - writeCard32(file, len(self.items)) - else: - writeCard16(file, len(self.items)) - # An empty INDEX contains only the count field. 
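``IndexCompiler.getDataLength`` above sums four pieces: the count field (2 bytes in CFF, 4 in CFF2), the one-byte ``offSize``, ``n + 1`` offsets of ``offSize`` bytes each, and the object data itself. A standalone sketch of the same arithmetic (the helper name is illustrative, not a cffLib API):

```python
def index_data_length(item_lengths, is_cff2=False):
    """Byte size of a CFF INDEX holding items of the given lengths (sketch)."""
    count_size = 4 if is_cff2 else 2
    if not item_lengths:
        return count_size  # an empty INDEX is just the count field
    # Offsets are 1-based relative to the byte before the object data.
    last_offset = 1 + sum(item_lengths)
    if last_offset < 0x100:
        off_size = 1
    elif last_offset < 0x10000:
        off_size = 2
    elif last_offset < 0x1000000:
        off_size = 3
    else:
        off_size = 4
    # count + offSize byte + (n + 1) offsets + object data (last_offset - 1).
    return count_size + 1 + (len(item_lengths) + 1) * off_size + last_offset - 1

# Two items of 3 and 4 bytes: 2 (count) + 1 (offSize) + 3*1 (offsets) + 7 = 13.
```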
- if self.items: - offSize = calcOffSize(offsets[-1]) - writeCard8(file, offSize) - offSize = -offSize - pack = struct.pack - for offset in offsets: - binOffset = pack(">l", offset)[offSize:] - assert len(binOffset) == -offSize - file.write(binOffset) - for item in self.items: - if hasattr(item, "toFile"): - item.toFile(file) - else: - data = tobytes(item, encoding="latin1") - file.write(data) - - -class IndexedStringsCompiler(IndexCompiler): - def getItems(self, items, strings): - return items.strings - - -class TopDictIndexCompiler(IndexCompiler): - """Helper class for writing the TopDict to binary.""" - - def getItems(self, items, strings): - out = [] - for item in items: - out.append(item.getCompiler(strings, self)) - return out - - def getChildren(self, strings): - children = [] - for topDict in self.items: - children.extend(topDict.getChildren(strings)) - return children - - def getOffsets(self): - if self.isCFF2: - offsets = [0, self.items[0].getDataLength()] - return offsets - else: - return super(TopDictIndexCompiler, self).getOffsets() - - def getDataLength(self): - if self.isCFF2: - dataLength = self.items[0].getDataLength() - return dataLength - else: - return super(TopDictIndexCompiler, self).getDataLength() - - def toFile(self, file): - if self.isCFF2: - self.items[0].toFile(file) - else: - super(TopDictIndexCompiler, self).toFile(file) - - -class FDArrayIndexCompiler(IndexCompiler): - """Helper class for writing the - `Font DICT INDEX `_ - to binary.""" - - def getItems(self, items, strings): - out = [] - for item in items: - out.append(item.getCompiler(strings, self)) - return out - - def getChildren(self, strings): - children = [] - for fontDict in self.items: - children.extend(fontDict.getChildren(strings)) - return children - - def toFile(self, file): - offsets = self.getOffsets() - if self.isCFF2: - writeCard32(file, len(self.items)) - else: - writeCard16(file, len(self.items)) - offSize = calcOffSize(offsets[-1]) - writeCard8(file, offSize) - 
offSize = -offSize - pack = struct.pack - for offset in offsets: - binOffset = pack(">l", offset)[offSize:] - assert len(binOffset) == -offSize - file.write(binOffset) - for item in self.items: - if hasattr(item, "toFile"): - item.toFile(file) - else: - file.write(item) - - def setPos(self, pos, endPos): - self.parent.rawDict["FDArray"] = pos - - -class GlobalSubrsCompiler(IndexCompiler): - """Helper class for writing the `global subroutine INDEX `_ - to binary.""" - - def getItems(self, items, strings): - out = [] - for cs in items: - cs.compile(self.isCFF2) - out.append(cs.bytecode) - return out - - -class SubrsCompiler(GlobalSubrsCompiler): - """Helper class for writing the `local subroutine INDEX `_ - to binary.""" - - def setPos(self, pos, endPos): - offset = pos - self.parent.pos - self.parent.rawDict["Subrs"] = offset - - -class CharStringsCompiler(GlobalSubrsCompiler): - """Helper class for writing the `CharStrings INDEX `_ - to binary.""" - - def getItems(self, items, strings): - out = [] - for cs in items: - cs.compile(self.isCFF2) - out.append(cs.bytecode) - return out - - def setPos(self, pos, endPos): - self.parent.rawDict["CharStrings"] = pos - - -class Index(object): - """This class represents what the CFF spec calls an INDEX (an array of - variable-sized objects). 
`Index` items can be addressed and set using - Python list indexing.""" - - compilerClass = IndexCompiler - - def __init__(self, file=None, isCFF2=None): - self.items = [] - self.offsets = offsets = [] - name = self.__class__.__name__ - if file is None: - return - self._isCFF2 = isCFF2 - log.log(DEBUG, "loading %s at %s", name, file.tell()) - self.file = file - if isCFF2: - count = readCard32(file) - else: - count = readCard16(file) - if count == 0: - return - self.items = [None] * count - offSize = readCard8(file) - log.log(DEBUG, " index count: %s offSize: %s", count, offSize) - assert offSize <= 4, "offSize too large: %s" % offSize - pad = b"\0" * (4 - offSize) - for index in range(count + 1): - chunk = file.read(offSize) - chunk = pad + chunk - (offset,) = struct.unpack(">L", chunk) - offsets.append(int(offset)) - self.offsetBase = file.tell() - 1 - file.seek(self.offsetBase + offsets[-1]) # pretend we've read the whole lot - log.log(DEBUG, " end of %s at %s", name, file.tell()) - - def __len__(self): - return len(self.items) - - def __getitem__(self, index): - item = self.items[index] - if item is not None: - return item - offset = self.offsets[index] + self.offsetBase - size = self.offsets[index + 1] - self.offsets[index] - file = self.file - file.seek(offset) - data = file.read(size) - assert len(data) == size - item = self.produceItem(index, data, file, offset) - self.items[index] = item - return item - - def __setitem__(self, index, item): - self.items[index] = item - - def produceItem(self, index, data, file, offset): - return data - - def append(self, item): - """Add an item to an INDEX.""" - self.items.append(item) - - def getCompiler(self, strings, parent, isCFF2=None): - return self.compilerClass(self, strings, parent, isCFF2=isCFF2) - - def clear(self): - """Empty the INDEX.""" - del self.items[:] - - -class GlobalSubrsIndex(Index): - """This index contains all the global subroutines in the font. 
A global - subroutine is a set of ``CharString`` data which is accessible to any - glyph in the font, and is used to store repeated instructions - for - example, components may be encoded as global subroutines, but so could - hinting instructions. - - Remember that when interpreting a ``callgsubr`` instruction (or indeed - a ``callsubr`` instruction) you will need to add the "subroutine - number bias" to the number given: - - .. code:: python - - tt = ttLib.TTFont("Almendra-Bold.otf") - u = tt["CFF "].cff[0].CharStrings["udieresis"] - u.decompile() - - u.toXML(XMLWriter(sys.stdout)) - # - # -64 callgsubr <-- Subroutine which implements the dieresis mark - # - - tt["CFF "].cff[0].GlobalSubrs[-64] # <-- WRONG - # - - tt["CFF "].cff[0].GlobalSubrs[-64 + 107] # <-- RIGHT - # - - ("The bias applied depends on the number of subrs (gsubrs). If the number of - subrs (gsubrs) is less than 1240, the bias is 107. Otherwise if it is less - than 33900, it is 1131; otherwise it is 32768.", - `Subroutine Operators `) - """ - - compilerClass = GlobalSubrsCompiler - subrClass = psCharStrings.T2CharString - charStringClass = psCharStrings.T2CharString - - def __init__( - self, - file=None, - globalSubrs=None, - private=None, - fdSelect=None, - fdArray=None, - isCFF2=None, - ): - super(GlobalSubrsIndex, self).__init__(file, isCFF2=isCFF2) - self.globalSubrs = globalSubrs - self.private = private - if fdSelect: - self.fdSelect = fdSelect - if fdArray: - self.fdArray = fdArray - - def produceItem(self, index, data, file, offset): - if self.private is not None: - private = self.private - elif hasattr(self, "fdArray") and self.fdArray is not None: - if hasattr(self, "fdSelect") and self.fdSelect is not None: - fdIndex = self.fdSelect[index] - else: - fdIndex = 0 - private = self.fdArray[fdIndex].Private - else: - private = None - return self.subrClass(data, private=private, globalSubrs=self.globalSubrs) - - def toXML(self, xmlWriter): - """Write the subroutines index into XML 
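The bias rule quoted in the docstring above is compact enough to state directly. This sketch mirrors the spec text (fontTools ships an equivalent helper in ``psCharStrings``; treat this as an illustration, not the library API):

```python
def calc_subr_bias(subrs) -> int:
    # Type 2 charstring spec: the callsubr/callgsubr operand is biased so
    # that small subroutine indices fit in single-byte operands.
    count = len(subrs)
    if count < 1240:
        return 107
    elif count < 33900:
        return 1131
    return 32768

# For the Almendra example above: the font has fewer than 1240 gsubrs,
# so "-64 callgsubr" refers to GlobalSubrs[-64 + 107].
```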
representation onto the given - :class:`fontTools.misc.xmlWriter.XMLWriter`. - - .. code:: python - - writer = xmlWriter.XMLWriter(sys.stdout) - tt["CFF "].cff[0].GlobalSubrs.toXML(writer) - - """ - xmlWriter.comment( - "The 'index' attribute is only for humans; " "it is ignored when parsed." - ) - xmlWriter.newline() - for i in range(len(self)): - subr = self[i] - if subr.needsDecompilation(): - xmlWriter.begintag("CharString", index=i, raw=1) - else: - xmlWriter.begintag("CharString", index=i) - xmlWriter.newline() - subr.toXML(xmlWriter) - xmlWriter.endtag("CharString") - xmlWriter.newline() - - def fromXML(self, name, attrs, content): - if name != "CharString": - return - subr = self.subrClass() - subr.fromXML(name, attrs, content) - self.append(subr) - - def getItemAndSelector(self, index): - sel = None - if hasattr(self, "fdSelect"): - sel = self.fdSelect[index] - return self[index], sel - - -class SubrsIndex(GlobalSubrsIndex): - """This index contains a glyph's local subroutines. A local subroutine is a - private set of ``CharString`` data which is accessible only to the glyph to - which the index is attached.""" - - compilerClass = SubrsCompiler - - -class TopDictIndex(Index): - """This index represents the array of ``TopDict`` structures in the font - (again, usually only one entry is present). Hence the following calls are - equivalent: - - .. 
code:: python - - tt["CFF "].cff[0] - # - tt["CFF "].cff.topDictIndex[0] - # - - """ - - compilerClass = TopDictIndexCompiler - - def __init__(self, file=None, cff2GetGlyphOrder=None, topSize=0, isCFF2=None): - self.cff2GetGlyphOrder = cff2GetGlyphOrder - if file is not None and isCFF2: - self._isCFF2 = isCFF2 - self.items = [] - name = self.__class__.__name__ - log.log(DEBUG, "loading %s at %s", name, file.tell()) - self.file = file - count = 1 - self.items = [None] * count - self.offsets = [0, topSize] - self.offsetBase = file.tell() - # pretend we've read the whole lot - file.seek(self.offsetBase + topSize) - log.log(DEBUG, " end of %s at %s", name, file.tell()) - else: - super(TopDictIndex, self).__init__(file, isCFF2=isCFF2) - - def produceItem(self, index, data, file, offset): - top = TopDict( - self.strings, - file, - offset, - self.GlobalSubrs, - self.cff2GetGlyphOrder, - isCFF2=self._isCFF2, - ) - top.decompile(data) - return top - - def toXML(self, xmlWriter): - for i in range(len(self)): - xmlWriter.begintag("FontDict", index=i) - xmlWriter.newline() - self[i].toXML(xmlWriter) - xmlWriter.endtag("FontDict") - xmlWriter.newline() - - -class FDArrayIndex(Index): - compilerClass = FDArrayIndexCompiler - - def toXML(self, xmlWriter): - for i in range(len(self)): - xmlWriter.begintag("FontDict", index=i) - xmlWriter.newline() - self[i].toXML(xmlWriter) - xmlWriter.endtag("FontDict") - xmlWriter.newline() - - def produceItem(self, index, data, file, offset): - fontDict = FontDict( - self.strings, - file, - offset, - self.GlobalSubrs, - isCFF2=self._isCFF2, - vstore=self.vstore, - ) - fontDict.decompile(data) - return fontDict - - def fromXML(self, name, attrs, content): - if name != "FontDict": - return - fontDict = FontDict() - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - fontDict.fromXML(name, attrs, content) - self.append(fontDict) - - -class VarStoreData(object): - def __init__(self, file=None, 
otVarStore=None): - self.file = file - self.data = None - self.otVarStore = otVarStore - self.font = TTFont() # dummy font for the decompile function. - - def decompile(self): - if self.file: - # read data in from file. Assume position is correct. - length = readCard16(self.file) - # https://github.com/fonttools/fonttools/issues/3673 - if length == 65535: - self.data = self.file.read() - else: - self.data = self.file.read(length) - globalState = {} - reader = OTTableReader(self.data, globalState) - self.otVarStore = ot.VarStore() - self.otVarStore.decompile(reader, self.font) - self.data = None - return self - - def compile(self): - writer = OTTableWriter() - self.otVarStore.compile(writer, self.font) - # Note that this omits the initial Card16 length from the CFF2 - # VarStore data block - self.data = writer.getAllData() - - def writeXML(self, xmlWriter, name): - self.otVarStore.toXML(xmlWriter, self.font) - - def xmlRead(self, name, attrs, content, parent): - self.otVarStore = ot.VarStore() - for element in content: - if isinstance(element, tuple): - name, attrs, content = element - self.otVarStore.fromXML(name, attrs, content, self.font) - else: - pass - return None - - def __len__(self): - return len(self.data) - - def getNumRegions(self, vsIndex): - if vsIndex is None: - vsIndex = 0 - varData = self.otVarStore.VarData[vsIndex] - numRegions = varData.VarRegionCount - return numRegions - - -class FDSelect(object): - def __init__(self, file=None, numGlyphs=None, format=None): - if file: - # read data in from file - self.format = readCard8(file) - if self.format == 0: - from array import array - - self.gidArray = array("B", file.read(numGlyphs)).tolist() - elif self.format == 3: - gidArray = [None] * numGlyphs - nRanges = readCard16(file) - fd = None - prev = None - for i in range(nRanges): - first = readCard16(file) - if prev is not None: - for glyphID in range(prev, first): - gidArray[glyphID] = fd - prev = first - fd = readCard8(file) - if prev is not None: - 
first = readCard16(file) - for glyphID in range(prev, first): - gidArray[glyphID] = fd - self.gidArray = gidArray - elif self.format == 4: - gidArray = [None] * numGlyphs - nRanges = readCard32(file) - fd = None - prev = None - for i in range(nRanges): - first = readCard32(file) - if prev is not None: - for glyphID in range(prev, first): - gidArray[glyphID] = fd - prev = first - fd = readCard16(file) - if prev is not None: - first = readCard32(file) - for glyphID in range(prev, first): - gidArray[glyphID] = fd - self.gidArray = gidArray - else: - assert False, "unsupported FDSelect format: %s" % format - else: - # reading from XML. Make empty gidArray, and leave format as passed in. - # format is None will result in the smallest representation being used. - self.format = format - self.gidArray = [] - - def __len__(self): - return len(self.gidArray) - - def __getitem__(self, index): - return self.gidArray[index] - - def __setitem__(self, index, fdSelectValue): - self.gidArray[index] = fdSelectValue - - def append(self, fdSelectValue): - self.gidArray.append(fdSelectValue) - - -class CharStrings(object): - """The ``CharStrings`` in the font represent the instructions for drawing - each glyph. This object presents a dictionary interface to the font's - CharStrings, indexed by glyph name: - - .. code:: python - - tt["CFF "].cff[0].CharStrings["a"] - # - - See :class:`fontTools.misc.psCharStrings.T1CharString` and - :class:`fontTools.misc.psCharStrings.T2CharString` for how to decompile, - compile and interpret the glyph drawing instructions in the returned objects. 
- - """ - - def __init__( - self, - file, - charset, - globalSubrs, - private, - fdSelect, - fdArray, - isCFF2=None, - varStore=None, - ): - self.globalSubrs = globalSubrs - self.varStore = varStore - if file is not None: - self.charStringsIndex = SubrsIndex( - file, globalSubrs, private, fdSelect, fdArray, isCFF2=isCFF2 - ) - self.charStrings = charStrings = {} - for i in range(len(charset)): - charStrings[charset[i]] = i - # read from OTF file: charStrings.values() are indices into - # charStringsIndex. - self.charStringsAreIndexed = 1 - else: - self.charStrings = {} - # read from ttx file: charStrings.values() are actual charstrings - self.charStringsAreIndexed = 0 - self.private = private - if fdSelect is not None: - self.fdSelect = fdSelect - if fdArray is not None: - self.fdArray = fdArray - - def keys(self): - return list(self.charStrings.keys()) - - def values(self): - if self.charStringsAreIndexed: - return self.charStringsIndex - else: - return list(self.charStrings.values()) - - def has_key(self, name): - return name in self.charStrings - - __contains__ = has_key - - def __len__(self): - return len(self.charStrings) - - def __getitem__(self, name): - charString = self.charStrings[name] - if self.charStringsAreIndexed: - charString = self.charStringsIndex[charString] - return charString - - def __setitem__(self, name, charString): - if self.charStringsAreIndexed: - index = self.charStrings[name] - self.charStringsIndex[index] = charString - else: - self.charStrings[name] = charString - - def getItemAndSelector(self, name): - if self.charStringsAreIndexed: - index = self.charStrings[name] - return self.charStringsIndex.getItemAndSelector(index) - else: - if hasattr(self, "fdArray"): - if hasattr(self, "fdSelect"): - sel = self.charStrings[name].fdSelectIndex - else: - sel = 0 - else: - sel = None - return self.charStrings[name], sel - - def toXML(self, xmlWriter): - names = sorted(self.keys()) - for name in names: - charStr, fdSelectIndex = 
self.getItemAndSelector(name) - if charStr.needsDecompilation(): - raw = [("raw", 1)] - else: - raw = [] - if fdSelectIndex is None: - xmlWriter.begintag("CharString", [("name", name)] + raw) - else: - xmlWriter.begintag( - "CharString", - [("name", name), ("fdSelectIndex", fdSelectIndex)] + raw, - ) - xmlWriter.newline() - charStr.toXML(xmlWriter) - xmlWriter.endtag("CharString") - xmlWriter.newline() - - def fromXML(self, name, attrs, content): - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - if name != "CharString": - continue - fdID = -1 - if hasattr(self, "fdArray"): - try: - fdID = safeEval(attrs["fdSelectIndex"]) - except KeyError: - fdID = 0 - private = self.fdArray[fdID].Private - else: - private = self.private - - glyphName = attrs["name"] - charStringClass = psCharStrings.T2CharString - charString = charStringClass(private=private, globalSubrs=self.globalSubrs) - charString.fromXML(name, attrs, content) - if fdID >= 0: - charString.fdSelectIndex = fdID - self[glyphName] = charString - - -def readCard8(file): - return byteord(file.read(1)) - - -def readCard16(file): - (value,) = struct.unpack(">H", file.read(2)) - return value - - -def readCard32(file): - (value,) = struct.unpack(">L", file.read(4)) - return value - - -def writeCard8(file, value): - file.write(bytechr(value)) - - -def writeCard16(file, value): - file.write(struct.pack(">H", value)) - - -def writeCard32(file, value): - file.write(struct.pack(">L", value)) - - -def packCard8(value): - return bytechr(value) - - -def packCard16(value): - return struct.pack(">H", value) - - -def packCard32(value): - return struct.pack(">L", value) - - -def buildOperatorDict(table): - d = {} - for op, name, arg, default, conv in table: - d[op] = (name, arg) - return d - - -def buildOpcodeDict(table): - d = {} - for op, name, arg, default, conv in table: - if isinstance(op, tuple): - op = bytechr(op[0]) + bytechr(op[1]) - else: - op = bytechr(op) - d[name] = 
(op, arg) - return d - - -def buildOrder(table): - l = [] - for op, name, arg, default, conv in table: - l.append(name) - return l - - -def buildDefaults(table): - d = {} - for op, name, arg, default, conv in table: - if default is not None: - d[name] = default - return d - - -def buildConverters(table): - d = {} - for op, name, arg, default, conv in table: - d[name] = conv - return d - - -class SimpleConverter(object): - def read(self, parent, value): - if not hasattr(parent, "file"): - return self._read(parent, value) - file = parent.file - pos = file.tell() - try: - return self._read(parent, value) - finally: - file.seek(pos) - - def _read(self, parent, value): - return value - - def write(self, parent, value): - return value - - def xmlWrite(self, xmlWriter, name, value): - xmlWriter.simpletag(name, value=value) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - return attrs["value"] - - -class ASCIIConverter(SimpleConverter): - def _read(self, parent, value): - return tostr(value, encoding="ascii") - - def write(self, parent, value): - return tobytes(value, encoding="ascii") - - def xmlWrite(self, xmlWriter, name, value): - xmlWriter.simpletag(name, value=tostr(value, encoding="ascii")) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - return tobytes(attrs["value"], encoding=("ascii")) - - -class Latin1Converter(SimpleConverter): - def _read(self, parent, value): - return tostr(value, encoding="latin1") - - def write(self, parent, value): - return tobytes(value, encoding="latin1") - - def xmlWrite(self, xmlWriter, name, value): - value = tostr(value, encoding="latin1") - if name in ["Notice", "Copyright"]: - value = re.sub(r"[\r\n]\s+", " ", value) - xmlWriter.simpletag(name, value=value) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - return tobytes(attrs["value"], encoding=("latin1")) - - -def parseNum(s): - try: - value = int(s) - except ValueError: # not an integer literal; fall back to float - value = float(s) - return value - - 
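The subroutine-number bias rule quoted in the ``GlobalSubrsIndex`` docstring above (bias 107 below 1240 subrs, 1131 below 33900, else 32768) can be sketched as a standalone helper. ``calc_subr_bias`` is an illustrative name, not part of this module (fontTools has its own internal equivalent):

```python
def calc_subr_bias(subrs):
    """Bias to add to a callsubr/callgsubr operand before indexing
    into the local or global subroutine INDEX, per the charstring spec."""
    count = len(subrs)
    if count < 1240:
        return 107
    elif count < 33900:
        return 1131
    else:
        return 32768

# An operand of -64 against a small GlobalSubrs index therefore refers to
# entry -64 + 107 == 43, matching the WRONG/RIGHT example in the docstring.
```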
-def parseBlendList(s): - valueList = [] - for element in s: - if isinstance(element, str): - continue - name, attrs, content = element - blendList = attrs["value"].split() - blendList = [eval(val) for val in blendList] - valueList.append(blendList) - if len(valueList) == 1: - valueList = valueList[0] - return valueList - - -class NumberConverter(SimpleConverter): - def xmlWrite(self, xmlWriter, name, value): - if isinstance(value, list): - xmlWriter.begintag(name) - xmlWriter.newline() - xmlWriter.indent() - blendValue = " ".join([str(val) for val in value]) - xmlWriter.simpletag(kBlendDictOpName, value=blendValue) - xmlWriter.newline() - xmlWriter.dedent() - xmlWriter.endtag(name) - xmlWriter.newline() - else: - xmlWriter.simpletag(name, value=value) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - valueString = attrs.get("value", None) - if valueString is None: - value = parseBlendList(content) - else: - value = parseNum(attrs["value"]) - return value - - -class ArrayConverter(SimpleConverter): - def xmlWrite(self, xmlWriter, name, value): - if value and isinstance(value[0], list): - xmlWriter.begintag(name) - xmlWriter.newline() - xmlWriter.indent() - for valueList in value: - blendValue = " ".join([str(val) for val in valueList]) - xmlWriter.simpletag(kBlendDictOpName, value=blendValue) - xmlWriter.newline() - xmlWriter.dedent() - xmlWriter.endtag(name) - xmlWriter.newline() - else: - value = " ".join([str(val) for val in value]) - xmlWriter.simpletag(name, value=value) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - valueString = attrs.get("value", None) - if valueString is None: - valueList = parseBlendList(content) - else: - values = valueString.split() - valueList = [parseNum(value) for value in values] - return valueList - - -class TableConverter(SimpleConverter): - def xmlWrite(self, xmlWriter, name, value): - xmlWriter.begintag(name) - xmlWriter.newline() - value.toXML(xmlWriter) - 
xmlWriter.endtag(name) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - ob = self.getClass()() - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - ob.fromXML(name, attrs, content) - return ob - - -class PrivateDictConverter(TableConverter): - def getClass(self): - return PrivateDict - - def _read(self, parent, value): - size, offset = value - file = parent.file - isCFF2 = parent._isCFF2 - try: - vstore = parent.vstore - except AttributeError: - vstore = None - priv = PrivateDict(parent.strings, file, offset, isCFF2=isCFF2, vstore=vstore) - file.seek(offset) - data = file.read(size) - assert len(data) == size - priv.decompile(data) - return priv - - def write(self, parent, value): - return (0, 0) # dummy value - - -class SubrsConverter(TableConverter): - def getClass(self): - return SubrsIndex - - def _read(self, parent, value): - file = parent.file - isCFF2 = parent._isCFF2 - file.seek(parent.offset + value) # Offset(self) - return SubrsIndex(file, isCFF2=isCFF2) - - def write(self, parent, value): - return 0 # dummy value - - -class CharStringsConverter(TableConverter): - def _read(self, parent, value): - file = parent.file - isCFF2 = parent._isCFF2 - charset = parent.charset - varStore = getattr(parent, "VarStore", None) - globalSubrs = parent.GlobalSubrs - if hasattr(parent, "FDArray"): - fdArray = parent.FDArray - if hasattr(parent, "FDSelect"): - fdSelect = parent.FDSelect - else: - fdSelect = None - private = None - else: - fdSelect, fdArray = None, None - private = parent.Private - file.seek(value) # Offset(0) - charStrings = CharStrings( - file, - charset, - globalSubrs, - private, - fdSelect, - fdArray, - isCFF2=isCFF2, - varStore=varStore, - ) - return charStrings - - def write(self, parent, value): - return 0 # dummy value - - def xmlRead(self, name, attrs, content, parent): - if hasattr(parent, "FDArray"): - # if it is a CID-keyed font, then the private Dict is extracted from 
the - # parent.FDArray - fdArray = parent.FDArray - if hasattr(parent, "FDSelect"): - fdSelect = parent.FDSelect - else: - fdSelect = None - private = None - else: - # if it is a name-keyed font, then the private dict is in the top dict, - # and - # there is no fdArray. - private, fdSelect, fdArray = parent.Private, None, None - charStrings = CharStrings( - None, - None, - parent.GlobalSubrs, - private, - fdSelect, - fdArray, - varStore=getattr(parent, "VarStore", None), - ) - charStrings.fromXML(name, attrs, content) - return charStrings - - -class CharsetConverter(SimpleConverter): - def _read(self, parent, value): - isCID = hasattr(parent, "ROS") - if value > 2: - numGlyphs = parent.numGlyphs - file = parent.file - file.seek(value) - log.log(DEBUG, "loading charset at %s", value) - format = readCard8(file) - if format == 0: - charset = parseCharset0(numGlyphs, file, parent.strings, isCID) - elif format == 1 or format == 2: - charset = parseCharset(numGlyphs, file, parent.strings, isCID, format) - else: - raise NotImplementedError - assert len(charset) == numGlyphs - log.log(DEBUG, " charset end at %s", file.tell()) - # make sure glyph names are unique - allNames = {} - newCharset = [] - for glyphName in charset: - if glyphName in allNames: - # make up a new glyphName that's unique - n = allNames[glyphName] - names = set(allNames) | set(charset) - while (glyphName + "." + str(n)) in names: - n += 1 - allNames[glyphName] = n + 1 - glyphName = glyphName + "." + str(n) - allNames[glyphName] = 1 - newCharset.append(glyphName) - charset = newCharset - else: # offset == 0 -> no charset data. - if isCID or "CharStrings" not in parent.rawDict: - # We get here only when processing fontDicts from the FDArray of - # CFF-CID fonts. Only the real topDict references the charset. 
- assert value == 0 - charset = None - elif value == 0: - charset = cffISOAdobeStrings - elif value == 1: - charset = cffIExpertStrings - elif value == 2: - charset = cffExpertSubsetStrings - if charset and (len(charset) != parent.numGlyphs): - charset = charset[: parent.numGlyphs] - return charset - - def write(self, parent, value): - return 0 # dummy value - - def xmlWrite(self, xmlWriter, name, value): - # XXX only write charset when not in OT/TTX context, where we - # dump charset as a separate "GlyphOrder" table. - # # xmlWriter.simpletag("charset") - xmlWriter.comment("charset is dumped separately as the 'GlyphOrder' element") - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - pass - - -class CharsetCompiler(object): - def __init__(self, strings, charset, parent): - assert charset[0] == ".notdef" - isCID = hasattr(parent.dictObj, "ROS") - data0 = packCharset0(charset, isCID, strings) - data = packCharset(charset, isCID, strings) - if len(data) < len(data0): - self.data = data - else: - self.data = data0 - self.parent = parent - - def setPos(self, pos, endPos): - self.parent.rawDict["charset"] = pos - - def getDataLength(self): - return len(self.data) - - def toFile(self, file): - file.write(self.data) - - -def getStdCharSet(charset): - # check to see if we can use a predefined charset value. 
- predefinedCharSetVal = None - predefinedCharSets = [ - (cffISOAdobeStringCount, cffISOAdobeStrings, 0), - (cffExpertStringCount, cffIExpertStrings, 1), - (cffExpertSubsetStringCount, cffExpertSubsetStrings, 2), - ] - lcs = len(charset) - for cnt, pcs, csv in predefinedCharSets: - if predefinedCharSetVal is not None: - break - if lcs > cnt: - continue - predefinedCharSetVal = csv - for i in range(lcs): - if charset[i] != pcs[i]: - predefinedCharSetVal = None - break - return predefinedCharSetVal - - -def getCIDfromName(name, strings): - return int(name[3:]) - - -def getSIDfromName(name, strings): - return strings.getSID(name) - - -def packCharset0(charset, isCID, strings): - fmt = 0 - data = [packCard8(fmt)] - if isCID: - getNameID = getCIDfromName - else: - getNameID = getSIDfromName - - for name in charset[1:]: - data.append(packCard16(getNameID(name, strings))) - return bytesjoin(data) - - -def packCharset(charset, isCID, strings): - fmt = 1 - ranges = [] - first = None - end = 0 - if isCID: - getNameID = getCIDfromName - else: - getNameID = getSIDfromName - - for name in charset[1:]: - SID = getNameID(name, strings) - if first is None: - first = SID - elif end + 1 != SID: - nLeft = end - first - if nLeft > 255: - fmt = 2 - ranges.append((first, nLeft)) - first = SID - end = SID - if end: - nLeft = end - first - if nLeft > 255: - fmt = 2 - ranges.append((first, nLeft)) - - data = [packCard8(fmt)] - if fmt == 1: - nLeftFunc = packCard8 - else: - nLeftFunc = packCard16 - for first, nLeft in ranges: - data.append(packCard16(first) + nLeftFunc(nLeft)) - return bytesjoin(data) - - -def parseCharset0(numGlyphs, file, strings, isCID): - charset = [".notdef"] - if isCID: - for i in range(numGlyphs - 1): - CID = readCard16(file) - charset.append("cid" + str(CID).zfill(5)) - else: - for i in range(numGlyphs - 1): - SID = readCard16(file) - charset.append(strings[SID]) - return charset - - -def parseCharset(numGlyphs, file, strings, isCID, fmt): - charset = [".notdef"] - 
count = 1 - if fmt == 1: - nLeftFunc = readCard8 - else: - nLeftFunc = readCard16 - while count < numGlyphs: - first = readCard16(file) - nLeft = nLeftFunc(file) - if isCID: - for CID in range(first, first + nLeft + 1): - charset.append("cid" + str(CID).zfill(5)) - else: - for SID in range(first, first + nLeft + 1): - charset.append(strings[SID]) - count = count + nLeft + 1 - return charset - - -class EncodingCompiler(object): - def __init__(self, strings, encoding, parent): - assert not isinstance(encoding, str) - data0 = packEncoding0(parent.dictObj.charset, encoding, parent.strings) - data1 = packEncoding1(parent.dictObj.charset, encoding, parent.strings) - if len(data0) < len(data1): - self.data = data0 - else: - self.data = data1 - self.parent = parent - - def setPos(self, pos, endPos): - self.parent.rawDict["Encoding"] = pos - - def getDataLength(self): - return len(self.data) - - def toFile(self, file): - file.write(self.data) - - -class EncodingConverter(SimpleConverter): - def _read(self, parent, value): - if value == 0: - return "StandardEncoding" - elif value == 1: - return "ExpertEncoding" - # custom encoding at offset `value` - assert value > 1 - file = parent.file - file.seek(value) - log.log(DEBUG, "loading Encoding at %s", value) - fmt = readCard8(file) - haveSupplement = bool(fmt & 0x80) - fmt = fmt & 0x7F - - if fmt == 0: - encoding = parseEncoding0(parent.charset, file) - elif fmt == 1: - encoding = parseEncoding1(parent.charset, file) - else: - raise ValueError(f"Unknown Encoding format: {fmt}") - - if haveSupplement: - parseEncodingSupplement(file, encoding, parent.strings) - - return encoding - - def write(self, parent, value): - if value == "StandardEncoding": - return 0 - elif value == "ExpertEncoding": - return 1 - return 0 # dummy value - - def xmlWrite(self, xmlWriter, name, value): - if value in ("StandardEncoding", "ExpertEncoding"): - xmlWriter.simpletag(name, name=value) - xmlWriter.newline() - return - xmlWriter.begintag(name) - 
xmlWriter.newline() - for code in range(len(value)): - glyphName = value[code] - if glyphName != ".notdef": - xmlWriter.simpletag("map", code=hex(code), name=glyphName) - xmlWriter.newline() - xmlWriter.endtag(name) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - if "name" in attrs: - return attrs["name"] - encoding = [".notdef"] * 256 - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - code = safeEval(attrs["code"]) - glyphName = attrs["name"] - encoding[code] = glyphName - return encoding - - -def readSID(file): - """Read a String ID (SID): a 2-byte big-endian unsigned integer.""" - data = file.read(2) - if len(data) != 2: - raise EOFError("Unexpected end of file while reading SID") - return struct.unpack(">H", data)[0] # big-endian uint16 - - -def parseEncodingSupplement(file, encoding, strings): - """ - Parse the CFF Encoding supplement data: - - nSups: number of supplementary mappings - - each mapping: (code, SID) pair - and apply them to the `encoding` list in place. - """ - nSups = readCard8(file) - for _ in range(nSups): - code = readCard8(file) - sid = readSID(file) - name = strings[sid] - encoding[code] = name - - -def parseEncoding0(charset, file): - """ - Format 0: simple list of codes. - The supplement, if present, is parsed separately by the caller via - parseEncodingSupplement(). - """ - nCodes = readCard8(file) - encoding = [".notdef"] * 256 - for glyphID in range(1, nCodes + 1): - code = readCard8(file) - if code != 0: - encoding[code] = charset[glyphID] - - return encoding - - -def parseEncoding1(charset, file): - """ - Format 1: range-based encoding. - The supplement, if present, is parsed separately by the caller via - parseEncodingSupplement(). 
- """ - nRanges = readCard8(file) - encoding = [".notdef"] * 256 - glyphID = 1 - for _ in range(nRanges): - code = readCard8(file) - nLeft = readCard8(file) - for _ in range(nLeft + 1): - encoding[code] = charset[glyphID] - code += 1 - glyphID += 1 - - return encoding - - -def packEncoding0(charset, encoding, strings): - fmt = 0 - m = {} - for code in range(len(encoding)): - name = encoding[code] - if name != ".notdef": - m[name] = code - codes = [] - for name in charset[1:]: - code = m.get(name) - codes.append(code) - - while codes and codes[-1] is None: - codes.pop() - - data = [packCard8(fmt), packCard8(len(codes))] - for code in codes: - if code is None: - code = 0 - data.append(packCard8(code)) - return bytesjoin(data) - - -def packEncoding1(charset, encoding, strings): - fmt = 1 - m = {} - for code in range(len(encoding)): - name = encoding[code] - if name != ".notdef": - m[name] = code - ranges = [] - first = None - end = 0 - for name in charset[1:]: - code = m.get(name, -1) - if first is None: - first = code - elif end + 1 != code: - nLeft = end - first - ranges.append((first, nLeft)) - first = code - end = code - nLeft = end - first - ranges.append((first, nLeft)) - - # remove unencoded glyphs at the end. 
- while ranges and ranges[-1][0] == -1: - ranges.pop() - - data = [packCard8(fmt), packCard8(len(ranges))] - for first, nLeft in ranges: - if first == -1: # unencoded - first = 0 - data.append(packCard8(first) + packCard8(nLeft)) - return bytesjoin(data) - - -class FDArrayConverter(TableConverter): - def _read(self, parent, value): - try: - vstore = parent.VarStore - except AttributeError: - vstore = None - file = parent.file - isCFF2 = parent._isCFF2 - file.seek(value) - fdArray = FDArrayIndex(file, isCFF2=isCFF2) - fdArray.vstore = vstore - fdArray.strings = parent.strings - fdArray.GlobalSubrs = parent.GlobalSubrs - return fdArray - - def write(self, parent, value): - return 0 # dummy value - - def xmlRead(self, name, attrs, content, parent): - fdArray = FDArrayIndex() - for element in content: - if isinstance(element, str): - continue - name, attrs, content = element - fdArray.fromXML(name, attrs, content) - return fdArray - - -class FDSelectConverter(SimpleConverter): - def _read(self, parent, value): - file = parent.file - file.seek(value) - fdSelect = FDSelect(file, parent.numGlyphs) - return fdSelect - - def write(self, parent, value): - return 0 # dummy value - - # The FDSelect glyph data is written out to XML in the charstring keys, - # so we write out only the format selector - def xmlWrite(self, xmlWriter, name, value): - xmlWriter.simpletag(name, [("format", value.format)]) - xmlWriter.newline() - - def xmlRead(self, name, attrs, content, parent): - fmt = safeEval(attrs["format"]) - file = None - numGlyphs = None - fdSelect = FDSelect(file, numGlyphs, fmt) - return fdSelect - - -class VarStoreConverter(SimpleConverter): - def _read(self, parent, value): - file = parent.file - file.seek(value) - varStore = VarStoreData(file) - varStore.decompile() - return varStore - - def write(self, parent, value): - return 0 # dummy value - - def xmlWrite(self, xmlWriter, name, value): - value.writeXML(xmlWriter, name) - - def xmlRead(self, name, attrs, content, 
parent): - varStore = VarStoreData() - varStore.xmlRead(name, attrs, content, parent) - return varStore - - -def packFDSelect0(fdSelectArray): - fmt = 0 - data = [packCard8(fmt)] - for index in fdSelectArray: - data.append(packCard8(index)) - return bytesjoin(data) - - -def packFDSelect3(fdSelectArray): - fmt = 3 - fdRanges = [] - lenArray = len(fdSelectArray) - lastFDIndex = -1 - for i in range(lenArray): - fdIndex = fdSelectArray[i] - if lastFDIndex != fdIndex: - fdRanges.append([i, fdIndex]) - lastFDIndex = fdIndex - sentinelGID = i + 1 - - data = [packCard8(fmt)] - data.append(packCard16(len(fdRanges))) - for fdRange in fdRanges: - data.append(packCard16(fdRange[0])) - data.append(packCard8(fdRange[1])) - data.append(packCard16(sentinelGID)) - return bytesjoin(data) - - -def packFDSelect4(fdSelectArray): - fmt = 4 - fdRanges = [] - lenArray = len(fdSelectArray) - lastFDIndex = -1 - for i in range(lenArray): - fdIndex = fdSelectArray[i] - if lastFDIndex != fdIndex: - fdRanges.append([i, fdIndex]) - lastFDIndex = fdIndex - sentinelGID = i + 1 - - data = [packCard8(fmt)] - data.append(packCard32(len(fdRanges))) - for fdRange in fdRanges: - data.append(packCard32(fdRange[0])) - data.append(packCard16(fdRange[1])) - data.append(packCard32(sentinelGID)) - return bytesjoin(data) - - -class FDSelectCompiler(object): - def __init__(self, fdSelect, parent): - fmt = fdSelect.format - fdSelectArray = fdSelect.gidArray - if fmt == 0: - self.data = packFDSelect0(fdSelectArray) - elif fmt == 3: - self.data = packFDSelect3(fdSelectArray) - elif fmt == 4: - self.data = packFDSelect4(fdSelectArray) - else: - # choose smaller of the two formats - data0 = packFDSelect0(fdSelectArray) - data3 = packFDSelect3(fdSelectArray) - if len(data0) < len(data3): - self.data = data0 - fdSelect.format = 0 - else: - self.data = data3 - fdSelect.format = 3 - - self.parent = parent - - def setPos(self, pos, endPos): - self.parent.rawDict["FDSelect"] = pos - - def getDataLength(self): - return 
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFF2ToCFF.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFF2ToCFF.cpython-312.pyc
deleted file mode 100644
index d54c0344..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFF2ToCFF.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFFToCFF2.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFFToCFF2.cpython-312.pyc
deleted file mode 100644
index ca3909ff..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/CFFToCFF2.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/__init__.cpython-312.pyc
b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 37333b85..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/specializer.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/specializer.cpython-312.pyc deleted file mode 100644 index c9027972..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/specializer.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/transforms.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/transforms.cpython-312.pyc deleted file mode 100644 index 05fc3950..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/transforms.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/width.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/width.cpython-312.pyc deleted file mode 100644 index 0c12a992..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/__pycache__/width.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/specializer.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/specializer.py deleted file mode 100644 index 974060c4..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/specializer.py +++ /dev/null @@ -1,927 +0,0 @@ -# -*- coding: utf-8 -*- - -"""T2CharString operator specializer and generalizer. - -PostScript glyph drawing operations can be expressed in multiple different -ways. 
For example, as well as the ``lineto`` operator, there is also a -``hlineto`` operator which draws a horizontal line, removing the need to -specify a ``dx`` coordinate, and a ``vlineto`` operator which draws a -vertical line, removing the need to specify a ``dy`` coordinate. As well -as decompiling :class:`fontTools.misc.psCharStrings.T2CharString` objects -into lists of operations, this module allows for conversion between general -and specific forms of the operation. - -""" - -from fontTools.cffLib import maxStackLimit - - -def stringToProgram(string): - if isinstance(string, str): - string = string.split() - program = [] - for token in string: - try: - token = int(token) - except ValueError: - try: - token = float(token) - except ValueError: - pass - program.append(token) - return program - - -def programToString(program): - return " ".join(str(x) for x in program) - - -def programToCommands(program, getNumRegions=None): - """Takes a T2CharString program list and returns list of commands. - Each command is a two-tuple of commandname,arg-list. The commandname might - be empty string if no commandname shall be emitted (used for glyph width, - hintmask/cntrmask argument, as well as stray arguments at the end of the - program (🀷). - 'getNumRegions' may be None, or a callable object. It must return the - number of regions. 'getNumRegions' takes a single argument, vsindex. It - returns the numRegions for the vsindex. - The Charstring may or may not start with a width value. If the first - non-blend operator has an odd number of arguments, then the first argument is - a width, and is popped off. This is complicated with blend operators, as - there may be more than one before the first hint or moveto operator, and each - one reduces several arguments to just one list argument. We have to sum the - number of arguments that are not part of the blend arguments, and all the - 'numBlends' values. 
We could instead have said that by definition, if there - is a blend operator, there is no width value, since CFF2 Charstrings don't - have width values. I discussed this with Behdad, and we are allowing for an - initial width value in this case because developers may assemble a CFF2 - charstring from CFF Charstrings, which could have width values. - """ - - seenWidthOp = False - vsIndex = 0 - lenBlendStack = 0 - lastBlendIndex = 0 - commands = [] - stack = [] - it = iter(program) - - for token in it: - if not isinstance(token, str): - stack.append(token) - continue - - if token == "blend": - assert getNumRegions is not None - numSourceFonts = 1 + getNumRegions(vsIndex) - # replace the blend op args on the stack with a single list - # containing all the blend op args. - numBlends = stack[-1] - numBlendArgs = numBlends * numSourceFonts + 1 - # replace first blend op by a list of the blend ops. - stack[-numBlendArgs:] = [stack[-numBlendArgs:]] - lenStack = len(stack) - lenBlendStack += numBlends + lenStack - 1 - lastBlendIndex = lenStack - # if a blend op exists, this is or will be a CFF2 charstring. - continue - - elif token == "vsindex": - vsIndex = stack[-1] - assert type(vsIndex) is int - - elif (not seenWidthOp) and token in { - "hstem", - "hstemhm", - "vstem", - "vstemhm", - "cntrmask", - "hintmask", - "hmoveto", - "vmoveto", - "rmoveto", - "endchar", - }: - seenWidthOp = True - parity = token in {"hmoveto", "vmoveto"} - if lenBlendStack: - # lenBlendStack has the number of args represented by the last blend - # arg and all the preceding args. We need to now add the number of - # args following the last blend arg. 
- numArgs = lenBlendStack + len(stack[lastBlendIndex:]) - else: - numArgs = len(stack) - if numArgs and (numArgs % 2) ^ parity: - width = stack.pop(0) - commands.append(("", [width])) - - if token in {"hintmask", "cntrmask"}: - if stack: - commands.append(("", stack)) - commands.append((token, [])) - commands.append(("", [next(it)])) - else: - commands.append((token, stack)) - stack = [] - if stack: - commands.append(("", stack)) - return commands - - -def _flattenBlendArgs(args): - token_list = [] - for arg in args: - if isinstance(arg, list): - token_list.extend(arg) - token_list.append("blend") - else: - token_list.append(arg) - return token_list - - -def commandsToProgram(commands): - """Takes a commands list as returned by programToCommands() and converts - it back to a T2CharString program list.""" - program = [] - for op, args in commands: - if any(isinstance(arg, list) for arg in args): - args = _flattenBlendArgs(args) - program.extend(args) - if op: - program.append(op) - return program - - -def _everyN(el, n): - """Group the list el into groups of size n""" - l = len(el) - if l % n != 0: - raise ValueError(el) - for i in range(0, l, n): - yield el[i : i + n] - - -class _GeneralizerDecombinerCommandsMap(object): - @staticmethod - def rmoveto(args): - if len(args) != 2: - raise ValueError(args) - yield ("rmoveto", args) - - @staticmethod - def hmoveto(args): - if len(args) != 1: - raise ValueError(args) - yield ("rmoveto", [args[0], 0]) - - @staticmethod - def vmoveto(args): - if len(args) != 1: - raise ValueError(args) - yield ("rmoveto", [0, args[0]]) - - @staticmethod - def rlineto(args): - if not args: - raise ValueError(args) - for args in _everyN(args, 2): - yield ("rlineto", args) - - @staticmethod - def hlineto(args): - if not args: - raise ValueError(args) - it = iter(args) - try: - while True: - yield ("rlineto", [next(it), 0]) - yield ("rlineto", [0, next(it)]) - except StopIteration: - pass - - @staticmethod - def vlineto(args): - if not args: - 
raise ValueError(args) - it = iter(args) - try: - while True: - yield ("rlineto", [0, next(it)]) - yield ("rlineto", [next(it), 0]) - except StopIteration: - pass - - @staticmethod - def rrcurveto(args): - if not args: - raise ValueError(args) - for args in _everyN(args, 6): - yield ("rrcurveto", args) - - @staticmethod - def hhcurveto(args): - l = len(args) - if l < 4 or l % 4 > 1: - raise ValueError(args) - if l % 2 == 1: - yield ("rrcurveto", [args[1], args[0], args[2], args[3], args[4], 0]) - args = args[5:] - for args in _everyN(args, 4): - yield ("rrcurveto", [args[0], 0, args[1], args[2], args[3], 0]) - - @staticmethod - def vvcurveto(args): - l = len(args) - if l < 4 or l % 4 > 1: - raise ValueError(args) - if l % 2 == 1: - yield ("rrcurveto", [args[0], args[1], args[2], args[3], 0, args[4]]) - args = args[5:] - for args in _everyN(args, 4): - yield ("rrcurveto", [0, args[0], args[1], args[2], 0, args[3]]) - - @staticmethod - def hvcurveto(args): - l = len(args) - if l < 4 or l % 8 not in {0, 1, 4, 5}: - raise ValueError(args) - last_args = None - if l % 2 == 1: - lastStraight = l % 8 == 5 - args, last_args = args[:-5], args[-5:] - it = _everyN(args, 4) - try: - while True: - args = next(it) - yield ("rrcurveto", [args[0], 0, args[1], args[2], 0, args[3]]) - args = next(it) - yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], 0]) - except StopIteration: - pass - if last_args: - args = last_args - if lastStraight: - yield ("rrcurveto", [args[0], 0, args[1], args[2], args[4], args[3]]) - else: - yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], args[4]]) - - @staticmethod - def vhcurveto(args): - l = len(args) - if l < 4 or l % 8 not in {0, 1, 4, 5}: - raise ValueError(args) - last_args = None - if l % 2 == 1: - lastStraight = l % 8 == 5 - args, last_args = args[:-5], args[-5:] - it = _everyN(args, 4) - try: - while True: - args = next(it) - yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], 0]) - args = next(it) - yield 
("rrcurveto", [args[0], 0, args[1], args[2], 0, args[3]]) - except StopIteration: - pass - if last_args: - args = last_args - if lastStraight: - yield ("rrcurveto", [0, args[0], args[1], args[2], args[3], args[4]]) - else: - yield ("rrcurveto", [args[0], 0, args[1], args[2], args[4], args[3]]) - - @staticmethod - def rcurveline(args): - l = len(args) - if l < 8 or l % 6 != 2: - raise ValueError(args) - args, last_args = args[:-2], args[-2:] - for args in _everyN(args, 6): - yield ("rrcurveto", args) - yield ("rlineto", last_args) - - @staticmethod - def rlinecurve(args): - l = len(args) - if l < 8 or l % 2 != 0: - raise ValueError(args) - args, last_args = args[:-6], args[-6:] - for args in _everyN(args, 2): - yield ("rlineto", args) - yield ("rrcurveto", last_args) - - -def _convertBlendOpToArgs(blendList): - # args is list of blend op args. Since we are supporting - # recursive blend op calls, some of these args may also - # be a list of blend op args, and need to be converted before - # we convert the current list. - if any([isinstance(arg, list) for arg in blendList]): - args = [ - i - for e in blendList - for i in (_convertBlendOpToArgs(e) if isinstance(e, list) else [e]) - ] - else: - args = blendList - - # We now know that blendList contains a blend op argument list, even if - # some of the args are lists that each contain a blend op argument list. - # Convert from: - # [default font arg sequence x0,...,xn] + [delta tuple for x0] + ... + [delta tuple for xn] - # to: - # [ [x0] + [delta tuple for x0], - # ..., - # [xn] + [delta tuple for xn] ] - numBlends = args[-1] - # Can't use args.pop() when the args are being used in a nested list - # comprehension. 
See calling context - args = args[:-1] - - l = len(args) - numRegions = l // numBlends - 1 - if not (numBlends * (numRegions + 1) == l): - raise ValueError(blendList) - - defaultArgs = [[arg] for arg in args[:numBlends]] - deltaArgs = args[numBlends:] - numDeltaValues = len(deltaArgs) - deltaList = [ - deltaArgs[i : i + numRegions] for i in range(0, numDeltaValues, numRegions) - ] - blend_args = [a + b + [1] for a, b in zip(defaultArgs, deltaList)] - return blend_args - - -def generalizeCommands(commands, ignoreErrors=False): - result = [] - mapping = _GeneralizerDecombinerCommandsMap - for op, args in commands: - # First, generalize any blend args in the arg list. - if any([isinstance(arg, list) for arg in args]): - try: - args = [ - n - for arg in args - for n in ( - _convertBlendOpToArgs(arg) if isinstance(arg, list) else [arg] - ) - ] - except ValueError: - if ignoreErrors: - # Store op as data, such that consumers of commands do not have to - # deal with incorrect number of arguments. - result.append(("", args)) - result.append(("", [op])) - else: - raise - - func = getattr(mapping, op, None) - if func is None: - result.append((op, args)) - continue - try: - for command in func(args): - result.append(command) - except ValueError: - if ignoreErrors: - # Store op as data, such that consumers of commands do not have to - # deal with incorrect number of arguments. - result.append(("", args)) - result.append(("", [op])) - else: - raise - return result - - -def generalizeProgram(program, getNumRegions=None, **kwargs): - return commandsToProgram( - generalizeCommands(programToCommands(program, getNumRegions), **kwargs) - ) - - -def _categorizeVector(v): - """ - Takes X,Y vector v and returns one of r, h, v, or 0 depending on which - of X and/or Y are zero, plus tuple of nonzero ones. If both are zero, - it returns a single zero still. 
- - >>> _categorizeVector((0,0)) - ('0', (0,)) - >>> _categorizeVector((1,0)) - ('h', (1,)) - >>> _categorizeVector((0,2)) - ('v', (2,)) - >>> _categorizeVector((1,2)) - ('r', (1, 2)) - """ - if not v[0]: - if not v[1]: - return "0", v[:1] - else: - return "v", v[1:] - else: - if not v[1]: - return "h", v[:1] - else: - return "r", v - - -def _mergeCategories(a, b): - if a == "0": - return b - if b == "0": - return a - if a == b: - return a - return None - - -def _negateCategory(a): - if a == "h": - return "v" - if a == "v": - return "h" - assert a in "0r" - return a - - -def _convertToBlendCmds(args): - # return a list of blend commands, and - # the remaining non-blended args, if any. - num_args = len(args) - stack_use = 0 - new_args = [] - i = 0 - while i < num_args: - arg = args[i] - i += 1 - if not isinstance(arg, list): - new_args.append(arg) - stack_use += 1 - else: - prev_stack_use = stack_use - # The arg is a tuple of blend values. - # These are each (master 0,delta 1..delta n, 1) - # Combine as many successive tuples as we can, - # up to the max stack limit. - num_sources = len(arg) - 1 - blendlist = [arg] - stack_use += 1 + num_sources # 1 for the num_blends arg - - # if we are here, max stack is the CFF2 max stack. - # I use the CFF2 max stack limit here rather than - # the 'maxstack' chosen by the client, as the default - # maxstack may have been used unintentionally. For all - # the other operators, this just produces a little less - # optimization, but here it puts a hard (and low) limit - # on the number of source fonts that can be used. - # - # Make sure the stack depth does not exceed (maxstack - 1), so - # that subroutinizer can insert subroutine calls at any point. 
- while ( - (i < num_args) - and isinstance(args[i], list) - and stack_use + num_sources < maxStackLimit - ): - blendlist.append(args[i]) - i += 1 - stack_use += num_sources - # blendList now contains as many single blend tuples as can be - # combined without exceeding the CFF2 stack limit. - num_blends = len(blendlist) - # append the 'num_blends' default font values - blend_args = [] - for arg in blendlist: - blend_args.append(arg[0]) - for arg in blendlist: - assert arg[-1] == 1 - blend_args.extend(arg[1:-1]) - blend_args.append(num_blends) - new_args.append(blend_args) - stack_use = prev_stack_use + num_blends - - return new_args - - -def _addArgs(a, b): - if isinstance(b, list): - if isinstance(a, list): - if len(a) != len(b) or a[-1] != b[-1]: - raise ValueError() - return [_addArgs(va, vb) for va, vb in zip(a[:-1], b[:-1])] + [a[-1]] - else: - a, b = b, a - if isinstance(a, list): - assert a[-1] == 1 - return [_addArgs(a[0], b)] + a[1:] - return a + b - - -def _argsStackUse(args): - stackLen = 0 - maxLen = 0 - for arg in args: - if type(arg) is list: - # Blended arg - maxLen = max(maxLen, stackLen + _argsStackUse(arg)) - stackLen += arg[-1] - else: - stackLen += 1 - return max(stackLen, maxLen) - - -def specializeCommands( - commands, - ignoreErrors=False, - generalizeFirst=True, - preserveTopology=False, - maxstack=48, -): - # We perform several rounds of optimizations. They are carefully ordered and are: - # - # 0. Generalize commands. - # This ensures that they are in our expected simple form, with each line/curve only - # having arguments for one segment, and using the generic form (rlineto/rrcurveto). - # If caller is sure the input is in this form, they can turn off generalization to - # save time. - # - # 1. Combine successive rmoveto operations. - # - # 2. Specialize rmoveto/rlineto/rrcurveto operators into horizontal/vertical variants. - # We specialize into some, made-up, variants as well, which simplifies following - # passes. - # - # 3. 
Merge or delete redundant operations, to the extent requested. - # OpenType spec declares point numbers in CFF undefined. As such, we happily - # change topology. If client relies on point numbers (in GPOS anchors, or for - # hinting purposes(what?)) they can turn this off. - # - # 4. Peephole optimization to revert back some of the h/v variants back into their - # original "relative" operator (rline/rrcurveto) if that saves a byte. - # - # 5. Combine adjacent operators when possible, minding not to go over max stack size. - # - # 6. Resolve any remaining made-up operators into real operators. - # - # I have convinced myself that this produces optimal bytecode (except for, possibly - # one byte each time maxstack size prohibits combining.) YMMV, but you'd be wrong. :-) - # A dynamic-programming approach can do the same but would be significantly slower. - # - # 7. For any args which are blend lists, convert them to a blend command. - - # 0. Generalize commands. - if generalizeFirst: - commands = generalizeCommands(commands, ignoreErrors=ignoreErrors) - else: - commands = list(commands) # Make copy since we modify in-place later. - - # 1. Combine successive rmoveto operations. - for i in range(len(commands) - 1, 0, -1): - if "rmoveto" == commands[i][0] == commands[i - 1][0]: - v1, v2 = commands[i - 1][1], commands[i][1] - commands[i - 1] = ( - "rmoveto", - [_addArgs(v1[0], v2[0]), _addArgs(v1[1], v2[1])], - ) - del commands[i] - - # 2. Specialize rmoveto/rlineto/rrcurveto operators into horizontal/vertical variants. - # - # We, in fact, specialize into more, made-up, variants that special-case when both - # X and Y components are zero. This simplifies the following optimization passes. - # This case is rare, but OCD does not let me skip it. - # - # After this round, we will have four variants that use the following mnemonics: - # - # - 'r' for relative, ie. non-zero X and non-zero Y, - # - 'h' for horizontal, ie. zero X and non-zero Y, - # - 'v' for vertical, ie. 
non-zero X and zero Y, - # - '0' for zeros, ie. zero X and zero Y. - # - # The '0' pseudo-operators are not part of the spec, but help simplify the following - # optimization rounds. We resolve them at the end. So, after this, we will have four - # moveto and four lineto variants: - # - # - 0moveto, 0lineto - # - hmoveto, hlineto - # - vmoveto, vlineto - # - rmoveto, rlineto - # - # and sixteen curveto variants. For example, a '0hcurveto' operator means a curve - # dx0,dy0,dx1,dy1,dx2,dy2,dx3,dy3 where dx0, dx1, and dy3 are zero but not dx3. - # An 'rvcurveto' means dx3 is zero but not dx0,dy0,dy3. - # - # There are nine different variants of curves without the '0'. Those nine map exactly - # to the existing curve variants in the spec: rrcurveto, and the four variants hhcurveto, - # vvcurveto, hvcurveto, and vhcurveto each cover two cases, one with an odd number of - # arguments and one without. Eg. an hhcurveto with an extra argument (odd number of - # arguments) is in fact an rhcurveto. The operators in the spec are designed such that - # all four of rhcurveto, rvcurveto, hrcurveto, and vrcurveto are encodable for one curve. - # - # Of the curve types with '0', the 00curveto is equivalent to a lineto variant. The rest - # of the curve types with a 0 need to be encoded as a h or v variant. Ie. a '0' can be - # thought of a "don't care" and can be used as either an 'h' or a 'v'. As such, we always - # encode a number 0 as argument when we use a '0' variant. Later on, we can just substitute - # the '0' with either 'h' or 'v' and it works. - # - # When we get to curve splines however, things become more complicated... XXX finish this. - # There's one more complexity with splines. If one side of the spline is not horizontal or - # vertical (or zero), ie. if it's 'r', then it limits which spline types we can encode. 
- # Only hhcurveto and vvcurveto operators can encode a spline starting with 'r', and - # only hvcurveto and vhcurveto operators can encode a spline ending with 'r'. - # This limits our merge opportunities later. - # - for i in range(len(commands)): - op, args = commands[i] - - if op in {"rmoveto", "rlineto"}: - c, args = _categorizeVector(args) - commands[i] = c + op[1:], args - continue - - if op == "rrcurveto": - c1, args1 = _categorizeVector(args[:2]) - c2, args2 = _categorizeVector(args[-2:]) - commands[i] = c1 + c2 + "curveto", args1 + args[2:4] + args2 - continue - - # 3. Merge or delete redundant operations, to the extent requested. - # - # TODO - # A 0moveto that comes before all other path operations can be removed. - # though I find conflicting evidence for this. - # - # TODO - # "If hstem and vstem hints are both declared at the beginning of a - # CharString, and this sequence is followed directly by the hintmask or - # cntrmask operators, then the vstem hint operator (or, if applicable, - # the vstemhm operator) need not be included." - # - # "The sequence and form of a CFF2 CharString program may be represented as: - # {hs* vs* cm* hm* mt subpath}? {mt subpath}*" - # - # https://www.microsoft.com/typography/otspec/cff2charstr.htm#section3.1 - # - # For Type2 CharStrings the sequence is: - # w? {hs* vs* cm* hm* mt subpath}? {mt subpath}* endchar" - - # Some other redundancies change topology (point numbers). - if not preserveTopology: - for i in range(len(commands) - 1, -1, -1): - op, args = commands[i] - - # A 00curveto is demoted to a (specialized) lineto. - if op == "00curveto": - assert len(args) == 4 - c, args = _categorizeVector(args[1:3]) - op = c + "lineto" - commands[i] = op, args - # and then... - - # A 0lineto can be deleted. - if op == "0lineto": - del commands[i] - continue - - # Merge adjacent hlineto's and vlineto's. 
- # In CFF2 charstrings from variable fonts, each - # arg item may be a list of blendable values, one from - # each source font. - if i and op in {"hlineto", "vlineto"} and (op == commands[i - 1][0]): - _, other_args = commands[i - 1] - assert len(args) == 1 and len(other_args) == 1 - try: - new_args = [_addArgs(args[0], other_args[0])] - except ValueError: - continue - commands[i - 1] = (op, new_args) - del commands[i] - continue - - # 4. Peephole optimization to revert back some of the h/v variants back into their - # original "relative" operator (rline/rrcurveto) if that saves a byte. - for i in range(1, len(commands) - 1): - op, args = commands[i] - prv, nxt = commands[i - 1][0], commands[i + 1][0] - - if op in {"0lineto", "hlineto", "vlineto"} and prv == nxt == "rlineto": - assert len(args) == 1 - args = [0, args[0]] if op[0] == "v" else [args[0], 0] - commands[i] = ("rlineto", args) - continue - - if op[2:] == "curveto" and len(args) == 5 and prv == nxt == "rrcurveto": - assert (op[0] == "r") ^ (op[1] == "r") - if op[0] == "v": - pos = 0 - elif op[0] != "r": - pos = 1 - elif op[1] == "v": - pos = 4 - else: - pos = 5 - # Insert, while maintaining the type of args (can be tuple or list). - args = args[:pos] + type(args)((0,)) + args[pos:] - commands[i] = ("rrcurveto", args) - continue - - # 5. Combine adjacent operators when possible, minding not to go over max stack size. - stackUse = _argsStackUse(commands[-1][1]) if commands else 0 - for i in range(len(commands) - 1, 0, -1): - op1, args1 = commands[i - 1] - op2, args2 = commands[i] - new_op = None - - # Merge logic... 
- if {op1, op2} <= {"rlineto", "rrcurveto"}: - if op1 == op2: - new_op = op1 - else: - l = len(args2) - if op2 == "rrcurveto" and l == 6: - new_op = "rlinecurve" - elif l == 2: - new_op = "rcurveline" - - elif (op1, op2) in {("rlineto", "rlinecurve"), ("rrcurveto", "rcurveline")}: - new_op = op2 - - elif {op1, op2} == {"vlineto", "hlineto"}: - new_op = op1 - - elif "curveto" == op1[2:] == op2[2:]: - d0, d1 = op1[:2] - d2, d3 = op2[:2] - - if d1 == "r" or d2 == "r" or d0 == d3 == "r": - continue - - d = _mergeCategories(d1, d2) - if d is None: - continue - if d0 == "r": - d = _mergeCategories(d, d3) - if d is None: - continue - new_op = "r" + d + "curveto" - elif d3 == "r": - d0 = _mergeCategories(d0, _negateCategory(d)) - if d0 is None: - continue - new_op = d0 + "r" + "curveto" - else: - d0 = _mergeCategories(d0, d3) - if d0 is None: - continue - new_op = d0 + d + "curveto" - - # Make sure the stack depth does not exceed (maxstack - 1), so - # that subroutinizer can insert subroutine calls at any point. - args1StackUse = _argsStackUse(args1) - combinedStackUse = max(args1StackUse, len(args1) + stackUse) - if new_op and combinedStackUse < maxstack: - commands[i - 1] = (new_op, args1 + args2) - del commands[i] - stackUse = combinedStackUse - else: - stackUse = args1StackUse - - # 6. Resolve any remaining made-up operators into real operators. 
- for i in range(len(commands)): - op, args = commands[i] - - if op in {"0moveto", "0lineto"}: - commands[i] = "h" + op[1:], args - continue - - if op[2:] == "curveto" and op[:2] not in {"rr", "hh", "vv", "vh", "hv"}: - l = len(args) - - op0, op1 = op[:2] - if (op0 == "r") ^ (op1 == "r"): - assert l % 2 == 1 - if op0 == "0": - op0 = "h" - if op1 == "0": - op1 = "h" - if op0 == "r": - op0 = op1 - if op1 == "r": - op1 = _negateCategory(op0) - assert {op0, op1} <= {"h", "v"}, (op0, op1) - - if l % 2: - if op0 != op1: # vhcurveto / hvcurveto - if (op0 == "h") ^ (l % 8 == 1): - # Swap last two args order - args = args[:-2] + args[-1:] + args[-2:-1] - else: # hhcurveto / vvcurveto - if op0 == "h": # hhcurveto - # Swap first two args order - args = args[1:2] + args[:1] + args[2:] - - commands[i] = op0 + op1 + "curveto", args - continue - - # 7. For any series of args which are blend lists, convert the series to a single blend arg. - for i in range(len(commands)): - op, args = commands[i] - if any(isinstance(arg, list) for arg in args): - commands[i] = op, _convertToBlendCmds(args) - - return commands - - -def specializeProgram(program, getNumRegions=None, **kwargs): - return commandsToProgram( - specializeCommands(programToCommands(program, getNumRegions), **kwargs) - ) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) == 1: - import doctest - - sys.exit(doctest.testmod().failed) - - import argparse - - parser = argparse.ArgumentParser( - "fonttools cffLib.specializer", - description="CFF CharString generalizer/specializer", - ) - parser.add_argument("program", metavar="command", nargs="*", help="Commands.") - parser.add_argument( - "--num-regions", - metavar="NumRegions", - nargs="*", - default=None, - help="Number of variable-font regions for blend opertaions.", - ) - parser.add_argument( - "--font", - metavar="FONTFILE", - default=None, - help="CFF2 font to specialize.", - ) - parser.add_argument( - "-o", - "--output-file", - type=str, - help="Output 
font file name.", - ) - - options = parser.parse_args(sys.argv[1:]) - - if options.program: - getNumRegions = ( - None - if options.num_regions is None - else lambda vsIndex: int( - options.num_regions[0 if vsIndex is None else vsIndex] - ) - ) - - program = stringToProgram(options.program) - print("Program:") - print(programToString(program)) - commands = programToCommands(program, getNumRegions) - print("Commands:") - print(commands) - program2 = commandsToProgram(commands) - print("Program from commands:") - print(programToString(program2)) - assert program == program2 - print("Generalized program:") - print(programToString(generalizeProgram(program, getNumRegions))) - print("Specialized program:") - print(programToString(specializeProgram(program, getNumRegions))) - - if options.font: - from fontTools.ttLib import TTFont - - font = TTFont(options.font) - cff2 = font["CFF2"].cff.topDictIndex[0] - charstrings = cff2.CharStrings - for glyphName in charstrings.keys(): - charstring = charstrings[glyphName] - charstring.decompile() - getNumRegions = charstring.private.getNumRegions - charstring.program = specializeProgram( - charstring.program, getNumRegions, maxstack=maxStackLimit - ) - - if options.output_file is None: - from fontTools.misc.cliTools import makeOutputFileName - - outfile = makeOutputFileName( - options.font, overWrite=True, suffix=".specialized" - ) - else: - outfile = options.output_file - if outfile: - print("Saving", outfile) - font.save(outfile) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/transforms.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/transforms.py deleted file mode 100644 index b9b7c86c..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/transforms.py +++ /dev/null @@ -1,495 +0,0 @@ -from fontTools.misc.psCharStrings import ( - SimpleT2Decompiler, - T2WidthExtractor, - calcSubrBias, -) - - -def _uniq_sort(l): - return sorted(set(l)) - - -class StopHintCountEvent(Exception): - 
pass - - -class _DesubroutinizingT2Decompiler(SimpleT2Decompiler): - stop_hintcount_ops = ( - "op_hintmask", - "op_cntrmask", - "op_rmoveto", - "op_hmoveto", - "op_vmoveto", - ) - - def __init__(self, localSubrs, globalSubrs, private=None): - SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private) - - def execute(self, charString): - self.need_hintcount = True # until proven otherwise - for op_name in self.stop_hintcount_ops: - setattr(self, op_name, self.stop_hint_count) - - if hasattr(charString, "_desubroutinized"): - # If a charstring has already been desubroutinized, we will still - # need to execute it if we need to count hints in order to - # compute the byte length for mask arguments, and haven't finished - # counting hints pairs. - if self.need_hintcount and self.callingStack: - try: - SimpleT2Decompiler.execute(self, charString) - except StopHintCountEvent: - del self.callingStack[-1] - return - - charString._patches = [] - SimpleT2Decompiler.execute(self, charString) - desubroutinized = charString.program[:] - for idx, expansion in reversed(charString._patches): - assert idx >= 2 - assert desubroutinized[idx - 1] in [ - "callsubr", - "callgsubr", - ], desubroutinized[idx - 1] - assert type(desubroutinized[idx - 2]) == int - if expansion[-1] == "return": - expansion = expansion[:-1] - desubroutinized[idx - 2 : idx] = expansion - if not self.private.in_cff2: - if "endchar" in desubroutinized: - # Cut off after first endchar - desubroutinized = desubroutinized[ - : desubroutinized.index("endchar") + 1 - ] - - charString._desubroutinized = desubroutinized - del charString._patches - - def op_callsubr(self, index): - subr = self.localSubrs[self.operandStack[-1] + self.localBias] - SimpleT2Decompiler.op_callsubr(self, index) - self.processSubr(index, subr) - - def op_callgsubr(self, index): - subr = self.globalSubrs[self.operandStack[-1] + self.globalBias] - SimpleT2Decompiler.op_callgsubr(self, index) - self.processSubr(index, subr) - - def 
stop_hint_count(self, *args): - self.need_hintcount = False - for op_name in self.stop_hintcount_ops: - setattr(self, op_name, None) - cs = self.callingStack[-1] - if hasattr(cs, "_desubroutinized"): - raise StopHintCountEvent() - - def op_hintmask(self, index): - SimpleT2Decompiler.op_hintmask(self, index) - if self.need_hintcount: - self.stop_hint_count() - - def processSubr(self, index, subr): - cs = self.callingStack[-1] - if not hasattr(cs, "_desubroutinized"): - cs._patches.append((index, subr._desubroutinized)) - - -def desubroutinizeCharString(cs): - """Desubroutinize a charstring in-place.""" - cs.decompile() - subrs = getattr(cs.private, "Subrs", []) - decompiler = _DesubroutinizingT2Decompiler(subrs, cs.globalSubrs, cs.private) - decompiler.execute(cs) - cs.program = cs._desubroutinized - del cs._desubroutinized - - -def desubroutinize(cff): - for fontName in cff.fontNames: - font = cff[fontName] - cs = font.CharStrings - for c in cs.values(): - desubroutinizeCharString(c) - # Delete all the local subrs - if hasattr(font, "FDArray"): - for fd in font.FDArray: - pd = fd.Private - if hasattr(pd, "Subrs"): - del pd.Subrs - if "Subrs" in pd.rawDict: - del pd.rawDict["Subrs"] - else: - pd = font.Private - if hasattr(pd, "Subrs"): - del pd.Subrs - if "Subrs" in pd.rawDict: - del pd.rawDict["Subrs"] - # as well as the global subrs - cff.GlobalSubrs.clear() - - -class _MarkingT2Decompiler(SimpleT2Decompiler): - def __init__(self, localSubrs, globalSubrs, private): - SimpleT2Decompiler.__init__(self, localSubrs, globalSubrs, private) - for subrs in [localSubrs, globalSubrs]: - if subrs and not hasattr(subrs, "_used"): - subrs._used = set() - - def op_callsubr(self, index): - self.localSubrs._used.add(self.operandStack[-1] + self.localBias) - SimpleT2Decompiler.op_callsubr(self, index) - - def op_callgsubr(self, index): - self.globalSubrs._used.add(self.operandStack[-1] + self.globalBias) - SimpleT2Decompiler.op_callgsubr(self, index) - - -class 
_DehintingT2Decompiler(T2WidthExtractor): - class Hints(object): - def __init__(self): - # Whether calling this charstring produces any hint stems - # Note that if a charstring starts with hintmask, it will - # have has_hint set to True, because it *might* produce an - # implicit vstem if called under certain conditions. - self.has_hint = False - # Index to start at to drop all hints - self.last_hint = 0 - # Index up to which we know more hints are possible. - # Only relevant if status is 0 or 1. - self.last_checked = 0 - # The status means: - # 0: after dropping hints, this charstring is empty - # 1: after dropping hints, there may be more hints - # continuing after this, or there might be - # other things. Not clear yet. - # 2: no more hints possible after this charstring - self.status = 0 - # Has hintmask instructions; not recursive - self.has_hintmask = False - # List of indices of calls to empty subroutines to remove. - self.deletions = [] - - pass - - def __init__( - self, css, localSubrs, globalSubrs, nominalWidthX, defaultWidthX, private=None - ): - self._css = css - T2WidthExtractor.__init__( - self, localSubrs, globalSubrs, nominalWidthX, defaultWidthX - ) - self.private = private - - def execute(self, charString): - old_hints = charString._hints if hasattr(charString, "_hints") else None - charString._hints = self.Hints() - - T2WidthExtractor.execute(self, charString) - - hints = charString._hints - - if hints.has_hint or hints.has_hintmask: - self._css.add(charString) - - if hints.status != 2: - # Check from last_check, make sure we didn't have any operators. 
- for i in range(hints.last_checked, len(charString.program) - 1): - if isinstance(charString.program[i], str): - hints.status = 2 - break - else: - hints.status = 1 # There's *something* here - hints.last_checked = len(charString.program) - - if old_hints: - assert hints.__dict__ == old_hints.__dict__ - - def op_callsubr(self, index): - subr = self.localSubrs[self.operandStack[-1] + self.localBias] - T2WidthExtractor.op_callsubr(self, index) - self.processSubr(index, subr) - - def op_callgsubr(self, index): - subr = self.globalSubrs[self.operandStack[-1] + self.globalBias] - T2WidthExtractor.op_callgsubr(self, index) - self.processSubr(index, subr) - - def op_hstem(self, index): - T2WidthExtractor.op_hstem(self, index) - self.processHint(index) - - def op_vstem(self, index): - T2WidthExtractor.op_vstem(self, index) - self.processHint(index) - - def op_hstemhm(self, index): - T2WidthExtractor.op_hstemhm(self, index) - self.processHint(index) - - def op_vstemhm(self, index): - T2WidthExtractor.op_vstemhm(self, index) - self.processHint(index) - - def op_hintmask(self, index): - rv = T2WidthExtractor.op_hintmask(self, index) - self.processHintmask(index) - return rv - - def op_cntrmask(self, index): - rv = T2WidthExtractor.op_cntrmask(self, index) - self.processHintmask(index) - return rv - - def processHintmask(self, index): - cs = self.callingStack[-1] - hints = cs._hints - hints.has_hintmask = True - if hints.status != 2: - # Check from last_check, see if we may be an implicit vstem - for i in range(hints.last_checked, index - 1): - if isinstance(cs.program[i], str): - hints.status = 2 - break - else: - # We are an implicit vstem - hints.has_hint = True - hints.last_hint = index + 1 - hints.status = 0 - hints.last_checked = index + 1 - - def processHint(self, index): - cs = self.callingStack[-1] - hints = cs._hints - hints.has_hint = True - hints.last_hint = index - hints.last_checked = index - - def processSubr(self, index, subr): - cs = self.callingStack[-1] - 
hints = cs._hints - subr_hints = subr._hints - - # Check from last_check, make sure we didn't have - # any operators. - if hints.status != 2: - for i in range(hints.last_checked, index - 1): - if isinstance(cs.program[i], str): - hints.status = 2 - break - hints.last_checked = index - - if hints.status != 2: - if subr_hints.has_hint: - hints.has_hint = True - - # Decide where to chop off from - if subr_hints.status == 0: - hints.last_hint = index - else: - hints.last_hint = index - 2 # Leave the subr call in - - elif subr_hints.status == 0: - hints.deletions.append(index) - - hints.status = max(hints.status, subr_hints.status) - - -def _cs_subset_subroutines(charstring, subrs, gsubrs): - p = charstring.program - for i in range(1, len(p)): - if p[i] == "callsubr": - assert isinstance(p[i - 1], int) - p[i - 1] = subrs._used.index(p[i - 1] + subrs._old_bias) - subrs._new_bias - elif p[i] == "callgsubr": - assert isinstance(p[i - 1], int) - p[i - 1] = ( - gsubrs._used.index(p[i - 1] + gsubrs._old_bias) - gsubrs._new_bias - ) - - -def _cs_drop_hints(charstring): - hints = charstring._hints - - if hints.deletions: - p = charstring.program - for idx in reversed(hints.deletions): - del p[idx - 2 : idx] - - if hints.has_hint: - assert not hints.deletions or hints.last_hint <= hints.deletions[0] - charstring.program = charstring.program[hints.last_hint :] - if not charstring.program: - # TODO CFF2 no need for endchar. 
- charstring.program.append("endchar") - if hasattr(charstring, "width"): - # Insert width back if needed - if charstring.width != charstring.private.defaultWidthX: - # For CFF2 charstrings, this should never happen - assert ( - charstring.private.defaultWidthX is not None - ), "CFF2 CharStrings must not have an initial width value" - charstring.program.insert( - 0, charstring.width - charstring.private.nominalWidthX - ) - - if hints.has_hintmask: - i = 0 - p = charstring.program - while i < len(p): - if p[i] in ["hintmask", "cntrmask"]: - assert i + 1 <= len(p) - del p[i : i + 2] - continue - i += 1 - - assert len(charstring.program) - - del charstring._hints - - -def remove_hints(cff, *, removeUnusedSubrs: bool = True): - for fontname in cff.keys(): - font = cff[fontname] - cs = font.CharStrings - # This can be tricky, but doesn't have to. What we do is: - # - # - Run all used glyph charstrings and recurse into subroutines, - # - For each charstring (including subroutines), if it has any - # of the hint stem operators, we mark it as such. - # Upon returning, for each charstring we note all the - # subroutine calls it makes that (recursively) contain a stem, - # - Dropping hinting then consists of the following two ops: - # * Drop the piece of the program in each charstring before the - # last call to a stem op or a stem-calling subroutine, - # * Drop all hintmask operations. - # - It's trickier... A hintmask right after hints and a few numbers - # will act as an implicit vstemhm. As such, we track whether - # we have seen any non-hint operators so far and do the right - # thing, recursively... 
Good luck understanding that :( - css = set() - for c in cs.values(): - c.decompile() - subrs = getattr(c.private, "Subrs", []) - decompiler = _DehintingT2Decompiler( - css, - subrs, - c.globalSubrs, - c.private.nominalWidthX, - c.private.defaultWidthX, - c.private, - ) - decompiler.execute(c) - c.width = decompiler.width - for charstring in css: - _cs_drop_hints(charstring) - del css - - # Drop font-wide hinting values - all_privs = [] - if hasattr(font, "FDArray"): - all_privs.extend(fd.Private for fd in font.FDArray) - else: - all_privs.append(font.Private) - for priv in all_privs: - for k in [ - "BlueValues", - "OtherBlues", - "FamilyBlues", - "FamilyOtherBlues", - "BlueScale", - "BlueShift", - "BlueFuzz", - "StemSnapH", - "StemSnapV", - "StdHW", - "StdVW", - "ForceBold", - "LanguageGroup", - "ExpansionFactor", - ]: - if hasattr(priv, k): - setattr(priv, k, None) - if removeUnusedSubrs: - remove_unused_subroutines(cff) - - -def _pd_delete_empty_subrs(private_dict): - if hasattr(private_dict, "Subrs") and not private_dict.Subrs: - if "Subrs" in private_dict.rawDict: - del private_dict.rawDict["Subrs"] - del private_dict.Subrs - - -def remove_unused_subroutines(cff): - for fontname in cff.keys(): - font = cff[fontname] - cs = font.CharStrings - # Renumber subroutines to remove unused ones - - # Mark all used subroutines - for c in cs.values(): - subrs = getattr(c.private, "Subrs", []) - decompiler = _MarkingT2Decompiler(subrs, c.globalSubrs, c.private) - decompiler.execute(c) - - all_subrs = [font.GlobalSubrs] - if hasattr(font, "FDArray"): - all_subrs.extend( - fd.Private.Subrs - for fd in font.FDArray - if hasattr(fd.Private, "Subrs") and fd.Private.Subrs - ) - elif hasattr(font.Private, "Subrs") and font.Private.Subrs: - all_subrs.append(font.Private.Subrs) - - subrs = set(subrs) # Remove duplicates - - # Prepare - for subrs in all_subrs: - if not hasattr(subrs, "_used"): - subrs._used = set() - subrs._used = _uniq_sort(subrs._used) - subrs._old_bias = 
calcSubrBias(subrs) - subrs._new_bias = calcSubrBias(subrs._used) - - # Renumber glyph charstrings - for c in cs.values(): - subrs = getattr(c.private, "Subrs", None) - _cs_subset_subroutines(c, subrs, font.GlobalSubrs) - - # Renumber subroutines themselves - for subrs in all_subrs: - if subrs == font.GlobalSubrs: - if not hasattr(font, "FDArray") and hasattr(font.Private, "Subrs"): - local_subrs = font.Private.Subrs - elif ( - hasattr(font, "FDArray") - and len(font.FDArray) == 1 - and hasattr(font.FDArray[0].Private, "Subrs") - ): - # Technically we shouldn't do this. But I've run into fonts that do it. - local_subrs = font.FDArray[0].Private.Subrs - else: - local_subrs = None - else: - local_subrs = subrs - - subrs.items = [subrs.items[i] for i in subrs._used] - if hasattr(subrs, "file"): - del subrs.file - if hasattr(subrs, "offsets"): - del subrs.offsets - - for subr in subrs.items: - _cs_subset_subroutines(subr, local_subrs, font.GlobalSubrs) - - # Delete local SubrsIndex if empty - if hasattr(font, "FDArray"): - for fd in font.FDArray: - _pd_delete_empty_subrs(fd.Private) - else: - _pd_delete_empty_subrs(font.Private) - - # Cleanup - for subrs in all_subrs: - del subrs._used, subrs._old_bias, subrs._new_bias diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/width.py b/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/width.py deleted file mode 100644 index 78ff27e4..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cffLib/width.py +++ /dev/null @@ -1,210 +0,0 @@ -# -*- coding: utf-8 -*- - -"""T2CharString glyph width optimizer. - -CFF glyphs whose width equals the CFF Private dictionary's ``defaultWidthX`` -value do not need to specify their width in their charstring, saving bytes. 
-This module determines the optimum ``defaultWidthX`` and ``nominalWidthX`` -values for a font, when provided with a list of glyph widths.""" - -from fontTools.ttLib import TTFont -from collections import defaultdict -from operator import add -from functools import reduce - - -__all__ = ["optimizeWidths", "main"] - - -class missingdict(dict): - def __init__(self, missing_func): - self.missing_func = missing_func - - def __missing__(self, v): - return self.missing_func(v) - - -def cumSum(f, op=add, start=0, decreasing=False): - keys = sorted(f.keys()) - minx, maxx = keys[0], keys[-1] - - total = reduce(op, f.values(), start) - - if decreasing: - missing = lambda x: start if x > maxx else total - domain = range(maxx, minx - 1, -1) - else: - missing = lambda x: start if x < minx else total - domain = range(minx, maxx + 1) - - out = missingdict(missing) - - v = start - for x in domain: - v = op(v, f[x]) - out[x] = v - - return out - - -def byteCost(widths, default, nominal): - if not hasattr(widths, "items"): - d = defaultdict(int) - for w in widths: - d[w] += 1 - widths = d - - cost = 0 - for w, freq in widths.items(): - if w == default: - continue - diff = abs(w - nominal) - if diff <= 107: - cost += freq - elif diff <= 1131: - cost += freq * 2 - else: - cost += freq * 5 - return cost - - -def optimizeWidthsBruteforce(widths): - """Bruteforce version. Veeeeeeeeeeeeeeeeery slow. 
Only works for smallests of fonts.""" - - d = defaultdict(int) - for w in widths: - d[w] += 1 - - # Maximum number of bytes using default can possibly save - maxDefaultAdvantage = 5 * max(d.values()) - - minw, maxw = min(widths), max(widths) - domain = list(range(minw, maxw + 1)) - - bestCostWithoutDefault = min(byteCost(widths, None, nominal) for nominal in domain) - - bestCost = len(widths) * 5 + 1 - for nominal in domain: - if byteCost(widths, None, nominal) > bestCost + maxDefaultAdvantage: - continue - for default in domain: - cost = byteCost(widths, default, nominal) - if cost < bestCost: - bestCost = cost - bestDefault = default - bestNominal = nominal - - return bestDefault, bestNominal - - -def optimizeWidths(widths): - """Given a list of glyph widths, or dictionary mapping glyph width to number of - glyphs having that, returns a tuple of best CFF default and nominal glyph widths. - - This algorithm is linear in UPEM+numGlyphs.""" - - if not hasattr(widths, "items"): - d = defaultdict(int) - for w in widths: - d[w] += 1 - widths = d - - keys = sorted(widths.keys()) - minw, maxw = keys[0], keys[-1] - domain = list(range(minw, maxw + 1)) - - # Cumulative sum/max forward/backward. - cumFrqU = cumSum(widths, op=add) - cumMaxU = cumSum(widths, op=max) - cumFrqD = cumSum(widths, op=add, decreasing=True) - cumMaxD = cumSum(widths, op=max, decreasing=True) - - # Cost per nominal choice, without default consideration. - nomnCostU = missingdict( - lambda x: cumFrqU[x] + cumFrqU[x - 108] + cumFrqU[x - 1132] * 3 - ) - nomnCostD = missingdict( - lambda x: cumFrqD[x] + cumFrqD[x + 108] + cumFrqD[x + 1132] * 3 - ) - nomnCost = missingdict(lambda x: nomnCostU[x] + nomnCostD[x] - widths[x]) - - # Cost-saving per nominal choice, by best default choice. 
- dfltCostU = missingdict( - lambda x: max(cumMaxU[x], cumMaxU[x - 108] * 2, cumMaxU[x - 1132] * 5) - ) - dfltCostD = missingdict( - lambda x: max(cumMaxD[x], cumMaxD[x + 108] * 2, cumMaxD[x + 1132] * 5) - ) - dfltCost = missingdict(lambda x: max(dfltCostU[x], dfltCostD[x])) - - # Combined cost per nominal choice. - bestCost = missingdict(lambda x: nomnCost[x] - dfltCost[x]) - - # Best nominal. - nominal = min(domain, key=lambda x: bestCost[x]) - - # Work back the best default. - bestC = bestCost[nominal] - dfltC = nomnCost[nominal] - bestCost[nominal] - ends = [] - if dfltC == dfltCostU[nominal]: - starts = [nominal, nominal - 108, nominal - 1132] - for start in starts: - while cumMaxU[start] and cumMaxU[start] == cumMaxU[start - 1]: - start -= 1 - ends.append(start) - else: - starts = [nominal, nominal + 108, nominal + 1132] - for start in starts: - while cumMaxD[start] and cumMaxD[start] == cumMaxD[start + 1]: - start += 1 - ends.append(start) - default = min(ends, key=lambda default: byteCost(widths, default, nominal)) - - return default, nominal - - -def main(args=None): - """Calculate optimum defaultWidthX/nominalWidthX values""" - - import argparse - - parser = argparse.ArgumentParser( - "fonttools cffLib.width", - description=main.__doc__, - ) - parser.add_argument( - "inputs", metavar="FILE", type=str, nargs="+", help="Input TTF files" - ) - parser.add_argument( - "-b", - "--brute-force", - dest="brute", - action="store_true", - help="Use brute-force approach (VERY slow)", - ) - - args = parser.parse_args(args) - - for fontfile in args.inputs: - font = TTFont(fontfile) - hmtx = font["hmtx"] - widths = [m[0] for m in hmtx.metrics.values()] - if args.brute: - default, nominal = optimizeWidthsBruteforce(widths) - else: - default, nominal = optimizeWidths(widths) - print( - "glyphs=%d default=%d nominal=%d byteCost=%d" - % (len(widths), default, nominal, byteCost(widths, default, nominal)) - ) - - -if __name__ == "__main__": - import sys - - if len(sys.argv) 
== 1: - import doctest - - sys.exit(doctest.testmod().failed) - main() diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index 9c8c508f..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/builder.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/builder.cpython-312.pyc deleted file mode 100644 index e72c457a..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/builder.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/errors.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/errors.cpython-312.pyc deleted file mode 100644 index 798e84b1..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/errors.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/geometry.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/geometry.cpython-312.pyc deleted file mode 100644 index 4e54fd07..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/geometry.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/table_builder.cpython-312.pyc 
b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/table_builder.cpython-312.pyc deleted file mode 100644 index 12d3d872..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/table_builder.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/unbuilder.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/unbuilder.cpython-312.pyc deleted file mode 100644 index 665da4e7..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/__pycache__/unbuilder.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/builder.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/builder.py deleted file mode 100644 index 6e45e7a8..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/builder.py +++ /dev/null @@ -1,664 +0,0 @@ -""" -colorLib.builder: Build COLR/CPAL tables from scratch - -""" - -import collections -import copy -import enum -from functools import partial -from math import ceil, log -from typing import ( - Any, - Dict, - Generator, - Iterable, - List, - Mapping, - Optional, - Sequence, - Tuple, - Type, - TypeVar, - Union, -) -from fontTools.misc.arrayTools import intRect -from fontTools.misc.fixedTools import fixedToFloat -from fontTools.misc.treeTools import build_n_ary_tree -from fontTools.ttLib.tables import C_O_L_R_ -from fontTools.ttLib.tables import C_P_A_L_ -from fontTools.ttLib.tables import _n_a_m_e -from fontTools.ttLib.tables import otTables as ot -from fontTools.ttLib.tables.otTables import ExtendMode, CompositeMode -from .errors import ColorLibError -from .geometry import round_start_circle_stable_containment -from .table_builder import BuildCallback, TableBuilder - - -# TODO move type aliases to colorLib.types? 
-T = TypeVar("T") -_Kwargs = Mapping[str, Any] -_PaintInput = Union[int, _Kwargs, ot.Paint, Tuple[str, "_PaintInput"]] -_PaintInputList = Sequence[_PaintInput] -_ColorGlyphsDict = Dict[str, Union[_PaintInputList, _PaintInput]] -_ColorGlyphsV0Dict = Dict[str, Sequence[Tuple[str, int]]] -_ClipBoxInput = Union[ - Tuple[int, int, int, int, int], # format 1, variable - Tuple[int, int, int, int], # format 0, non-variable - ot.ClipBox, -] - - -MAX_PAINT_COLR_LAYER_COUNT = 255 -_DEFAULT_ALPHA = 1.0 -_MAX_REUSE_LEN = 32 - - -def _beforeBuildPaintRadialGradient(paint, source): - x0 = source["x0"] - y0 = source["y0"] - r0 = source["r0"] - x1 = source["x1"] - y1 = source["y1"] - r1 = source["r1"] - - # TODO apparently no builder_test confirms this works (?) - - # avoid abrupt change after rounding when c0 is near c1's perimeter - c = round_start_circle_stable_containment((x0, y0), r0, (x1, y1), r1) - x0, y0 = c.centre - r0 = c.radius - - # update source to ensure paint is built with corrected values - source["x0"] = x0 - source["y0"] = y0 - source["r0"] = r0 - source["x1"] = x1 - source["y1"] = y1 - source["r1"] = r1 - - return paint, source - - -def _defaultColorStop(): - colorStop = ot.ColorStop() - colorStop.Alpha = _DEFAULT_ALPHA - return colorStop - - -def _defaultVarColorStop(): - colorStop = ot.VarColorStop() - colorStop.Alpha = _DEFAULT_ALPHA - return colorStop - - -def _defaultColorLine(): - colorLine = ot.ColorLine() - colorLine.Extend = ExtendMode.PAD - return colorLine - - -def _defaultVarColorLine(): - colorLine = ot.VarColorLine() - colorLine.Extend = ExtendMode.PAD - return colorLine - - -def _defaultPaintSolid(): - paint = ot.Paint() - paint.Alpha = _DEFAULT_ALPHA - return paint - - -def _buildPaintCallbacks(): - return { - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintRadialGradient, - ): _beforeBuildPaintRadialGradient, - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintVarRadialGradient, - ): 
_beforeBuildPaintRadialGradient, - (BuildCallback.CREATE_DEFAULT, ot.ColorStop): _defaultColorStop, - (BuildCallback.CREATE_DEFAULT, ot.VarColorStop): _defaultVarColorStop, - (BuildCallback.CREATE_DEFAULT, ot.ColorLine): _defaultColorLine, - (BuildCallback.CREATE_DEFAULT, ot.VarColorLine): _defaultVarColorLine, - ( - BuildCallback.CREATE_DEFAULT, - ot.Paint, - ot.PaintFormat.PaintSolid, - ): _defaultPaintSolid, - ( - BuildCallback.CREATE_DEFAULT, - ot.Paint, - ot.PaintFormat.PaintVarSolid, - ): _defaultPaintSolid, - } - - -def populateCOLRv0( - table: ot.COLR, - colorGlyphsV0: _ColorGlyphsV0Dict, - glyphMap: Optional[Mapping[str, int]] = None, -): - """Build v0 color layers and add to existing COLR table. - - Args: - table: a raw ``otTables.COLR()`` object (not ttLib's ``table_C_O_L_R_``). - colorGlyphsV0: map of base glyph names to lists of (layer glyph names, - color palette index) tuples. Can be empty. - glyphMap: a map from glyph names to glyph indices, as returned from - ``TTFont.getReverseGlyphMap()``, to optionally sort base records by GID. 
- """ - if glyphMap is not None: - colorGlyphItems = sorted( - colorGlyphsV0.items(), key=lambda item: glyphMap[item[0]] - ) - else: - colorGlyphItems = colorGlyphsV0.items() - baseGlyphRecords = [] - layerRecords = [] - for baseGlyph, layers in colorGlyphItems: - baseRec = ot.BaseGlyphRecord() - baseRec.BaseGlyph = baseGlyph - baseRec.FirstLayerIndex = len(layerRecords) - baseRec.NumLayers = len(layers) - baseGlyphRecords.append(baseRec) - - for layerGlyph, paletteIndex in layers: - layerRec = ot.LayerRecord() - layerRec.LayerGlyph = layerGlyph - layerRec.PaletteIndex = paletteIndex - layerRecords.append(layerRec) - - table.BaseGlyphRecordArray = table.LayerRecordArray = None - if baseGlyphRecords: - table.BaseGlyphRecordArray = ot.BaseGlyphRecordArray() - table.BaseGlyphRecordArray.BaseGlyphRecord = baseGlyphRecords - if layerRecords: - table.LayerRecordArray = ot.LayerRecordArray() - table.LayerRecordArray.LayerRecord = layerRecords - table.BaseGlyphRecordCount = len(baseGlyphRecords) - table.LayerRecordCount = len(layerRecords) - - -def buildCOLR( - colorGlyphs: _ColorGlyphsDict, - version: Optional[int] = None, - *, - glyphMap: Optional[Mapping[str, int]] = None, - varStore: Optional[ot.VarStore] = None, - varIndexMap: Optional[ot.DeltaSetIndexMap] = None, - clipBoxes: Optional[Dict[str, _ClipBoxInput]] = None, - allowLayerReuse: bool = True, -) -> C_O_L_R_.table_C_O_L_R_: - """Build COLR table from color layers mapping. - - Args: - - colorGlyphs: map of base glyph name to, either list of (layer glyph name, - color palette index) tuples for COLRv0; or a single ``Paint`` (dict) or - list of ``Paint`` for COLRv1. - version: the version of COLR table. If None, the version is determined - by the presence of COLRv1 paints or variation data (varStore), which - require version 1; otherwise, if all base glyphs use only simple color - layers, version 0 is used. 
- glyphMap: a map from glyph names to glyph indices, as returned from - TTFont.getReverseGlyphMap(), to optionally sort base records by GID. - varStore: Optional ItemVarationStore for deltas associated with v1 layer. - varIndexMap: Optional DeltaSetIndexMap for deltas associated with v1 layer. - clipBoxes: Optional map of base glyph name to clip box 4- or 5-tuples: - (xMin, yMin, xMax, yMax) or (xMin, yMin, xMax, yMax, varIndexBase). - - Returns: - A new COLR table. - """ - self = C_O_L_R_.table_C_O_L_R_() - - if varStore is not None and version == 0: - raise ValueError("Can't add VarStore to COLRv0") - - if version in (None, 0) and not varStore: - # split color glyphs into v0 and v1 and encode separately - colorGlyphsV0, colorGlyphsV1 = _split_color_glyphs_by_version(colorGlyphs) - if version == 0 and colorGlyphsV1: - raise ValueError("Can't encode COLRv1 glyphs in COLRv0") - else: - # unless explicitly requested for v1 or have variations, in which case - # we encode all color glyph as v1 - colorGlyphsV0, colorGlyphsV1 = {}, colorGlyphs - - colr = ot.COLR() - - populateCOLRv0(colr, colorGlyphsV0, glyphMap) - - colr.LayerList, colr.BaseGlyphList = buildColrV1( - colorGlyphsV1, - glyphMap, - allowLayerReuse=allowLayerReuse, - ) - - if version is None: - version = 1 if (varStore or colorGlyphsV1) else 0 - elif version not in (0, 1): - raise NotImplementedError(version) - self.version = colr.Version = version - - if version == 0: - self.ColorLayers = self._decompileColorLayersV0(colr) - else: - colr.ClipList = buildClipList(clipBoxes) if clipBoxes else None - colr.VarIndexMap = varIndexMap - colr.VarStore = varStore - self.table = colr - - return self - - -def buildClipList(clipBoxes: Dict[str, _ClipBoxInput]) -> ot.ClipList: - clipList = ot.ClipList() - clipList.Format = 1 - clipList.clips = {name: buildClipBox(box) for name, box in clipBoxes.items()} - return clipList - - -def buildClipBox(clipBox: _ClipBoxInput) -> ot.ClipBox: - if isinstance(clipBox, ot.ClipBox): 
- return clipBox - n = len(clipBox) - clip = ot.ClipBox() - if n not in (4, 5): - raise ValueError(f"Invalid ClipBox: expected 4 or 5 values, found {n}") - clip.xMin, clip.yMin, clip.xMax, clip.yMax = intRect(clipBox[:4]) - clip.Format = int(n == 5) + 1 - if n == 5: - clip.VarIndexBase = int(clipBox[4]) - return clip - - -class ColorPaletteType(enum.IntFlag): - USABLE_WITH_LIGHT_BACKGROUND = 0x0001 - USABLE_WITH_DARK_BACKGROUND = 0x0002 - - @classmethod - def _missing_(cls, value): - # enforce reserved bits - if isinstance(value, int) and (value < 0 or value & 0xFFFC != 0): - raise ValueError(f"{value} is not a valid {cls.__name__}") - return super()._missing_(value) - - -# None, 'abc' or {'en': 'abc', 'de': 'xyz'} -_OptionalLocalizedString = Union[None, str, Dict[str, str]] - - -def buildPaletteLabels( - labels: Iterable[_OptionalLocalizedString], nameTable: _n_a_m_e.table__n_a_m_e -) -> List[Optional[int]]: - return [ - ( - nameTable.addMultilingualName(l, mac=False) - if isinstance(l, dict) - else ( - C_P_A_L_.table_C_P_A_L_.NO_NAME_ID - if l is None - else nameTable.addMultilingualName({"en": l}, mac=False) - ) - ) - for l in labels - ] - - -def buildCPAL( - palettes: Sequence[Sequence[Tuple[float, float, float, float]]], - paletteTypes: Optional[Sequence[ColorPaletteType]] = None, - paletteLabels: Optional[Sequence[_OptionalLocalizedString]] = None, - paletteEntryLabels: Optional[Sequence[_OptionalLocalizedString]] = None, - nameTable: Optional[_n_a_m_e.table__n_a_m_e] = None, -) -> C_P_A_L_.table_C_P_A_L_: - """Build CPAL table from list of color palettes. - - Args: - palettes: list of lists of colors encoded as tuples of (R, G, B, A) floats - in the range [0..1]. - paletteTypes: optional list of ColorPaletteType, one for each palette. - paletteLabels: optional list of palette labels. Each lable can be either: - None (no label), a string (for for default English labels), or a - localized string (as a dict keyed with BCP47 language codes). 
- paletteEntryLabels: optional list of palette entry labels, one for each - palette entry (see paletteLabels). - nameTable: optional name table where to store palette and palette entry - labels. Required if either paletteLabels or paletteEntryLabels is set. - - Return: - A new CPAL v0 or v1 table, if custom palette types or labels are specified. - """ - if len({len(p) for p in palettes}) != 1: - raise ColorLibError("color palettes have different lengths") - - if (paletteLabels or paletteEntryLabels) and not nameTable: - raise TypeError( - "nameTable is required if palette or palette entries have labels" - ) - - cpal = C_P_A_L_.table_C_P_A_L_() - cpal.numPaletteEntries = len(palettes[0]) - - cpal.palettes = [] - for i, palette in enumerate(palettes): - colors = [] - for j, color in enumerate(palette): - if not isinstance(color, tuple) or len(color) != 4: - raise ColorLibError( - f"In palette[{i}][{j}]: expected (R, G, B, A) tuple, got {color!r}" - ) - if any(v > 1 or v < 0 for v in color): - raise ColorLibError( - f"palette[{i}][{j}] has invalid out-of-range [0..1] color: {color!r}" - ) - # input colors are RGBA, CPAL encodes them as BGRA - red, green, blue, alpha = color - colors.append( - C_P_A_L_.Color(*(round(v * 255) for v in (blue, green, red, alpha))) - ) - cpal.palettes.append(colors) - - if any(v is not None for v in (paletteTypes, paletteLabels, paletteEntryLabels)): - cpal.version = 1 - - if paletteTypes is not None: - if len(paletteTypes) != len(palettes): - raise ColorLibError( - f"Expected {len(palettes)} paletteTypes, got {len(paletteTypes)}" - ) - cpal.paletteTypes = [ColorPaletteType(t).value for t in paletteTypes] - else: - cpal.paletteTypes = [C_P_A_L_.table_C_P_A_L_.DEFAULT_PALETTE_TYPE] * len( - palettes - ) - - if paletteLabels is not None: - if len(paletteLabels) != len(palettes): - raise ColorLibError( - f"Expected {len(palettes)} paletteLabels, got {len(paletteLabels)}" - ) - cpal.paletteLabels = buildPaletteLabels(paletteLabels, nameTable) 
- else: - cpal.paletteLabels = [C_P_A_L_.table_C_P_A_L_.NO_NAME_ID] * len(palettes) - - if paletteEntryLabels is not None: - if len(paletteEntryLabels) != cpal.numPaletteEntries: - raise ColorLibError( - f"Expected {cpal.numPaletteEntries} paletteEntryLabels, " - f"got {len(paletteEntryLabels)}" - ) - cpal.paletteEntryLabels = buildPaletteLabels(paletteEntryLabels, nameTable) - else: - cpal.paletteEntryLabels = [ - C_P_A_L_.table_C_P_A_L_.NO_NAME_ID - ] * cpal.numPaletteEntries - else: - cpal.version = 0 - - return cpal - - -# COLR v1 tables -# See draft proposal at: https://github.com/googlefonts/colr-gradients-spec - - -def _is_colrv0_layer(layer: Any) -> bool: - # Consider as COLRv0 layer any sequence of length 2 (be it tuple or list) in which - # the first element is a str (the layerGlyph) and the second element is an int - # (CPAL paletteIndex). - # https://github.com/googlefonts/ufo2ft/issues/426 - try: - layerGlyph, paletteIndex = layer - except (TypeError, ValueError): - return False - else: - return isinstance(layerGlyph, str) and isinstance(paletteIndex, int) - - -def _split_color_glyphs_by_version( - colorGlyphs: _ColorGlyphsDict, -) -> Tuple[_ColorGlyphsV0Dict, _ColorGlyphsDict]: - colorGlyphsV0 = {} - colorGlyphsV1 = {} - for baseGlyph, layers in colorGlyphs.items(): - if all(_is_colrv0_layer(l) for l in layers): - colorGlyphsV0[baseGlyph] = layers - else: - colorGlyphsV1[baseGlyph] = layers - - # sanity check - assert set(colorGlyphs) == (set(colorGlyphsV0) | set(colorGlyphsV1)) - - return colorGlyphsV0, colorGlyphsV1 - - -def _reuse_ranges(num_layers: int) -> Generator[Tuple[int, int], None, None]: - # TODO feels like something itertools might have already - for lbound in range(num_layers): - # Reuse of very large #s of layers is relatively unlikely - # +2: we want sequences of at least 2 - # otData handles single-record duplication - for ubound in range( - lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN) - ): - yield (lbound, ubound) - - 
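The `_reuse_ranges` generator deleted just above enumerates every candidate layer slice of length at least two, never extending more than `_MAX_REUSE_LEN` past that minimum. A standalone sketch of the same enumeration (the cap of 3 below is purely illustrative; the vendored module defines its own `_MAX_REUSE_LEN` constant):

```python
# Sketch of the deleted _reuse_ranges helper. The real module defines
# its own _MAX_REUSE_LEN; 3 here is an illustrative assumption.
_MAX_REUSE_LEN = 3


def reuse_ranges(num_layers):
    # Yield (lbound, ubound) for every slice of at least 2 layers,
    # capped at _MAX_REUSE_LEN values of ubound per lbound.
    for lbound in range(num_layers):
        for ubound in range(
            lbound + 2, min(num_layers + 1, lbound + 2 + _MAX_REUSE_LEN)
        ):
            yield (lbound, ubound)


print(list(reuse_ranges(3)))  # [(0, 2), (0, 3), (1, 3)]
```

Single-record slices never appear: per the deleted code's own comment, otData handles single-record duplication, so the minimum reusable slice length is two.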
-class LayerReuseCache: - reusePool: Mapping[Tuple[Any, ...], int] - tuples: Mapping[int, Tuple[Any, ...]] - keepAlive: List[ot.Paint] # we need id to remain valid - - def __init__(self): - self.reusePool = {} - self.tuples = {} - self.keepAlive = [] - - def _paint_tuple(self, paint: ot.Paint): - # start simple, who even cares about cyclic graphs or interesting field types - def _tuple_safe(value): - if isinstance(value, enum.Enum): - return value - elif hasattr(value, "__dict__"): - return tuple( - (k, _tuple_safe(v)) for k, v in sorted(value.__dict__.items()) - ) - elif isinstance(value, collections.abc.MutableSequence): - return tuple(_tuple_safe(e) for e in value) - return value - - # Cache the tuples for individual Paint instead of the whole sequence - # because the seq could be a transient slice - result = self.tuples.get(id(paint), None) - if result is None: - result = _tuple_safe(paint) - self.tuples[id(paint)] = result - self.keepAlive.append(paint) - return result - - def _as_tuple(self, paints: Sequence[ot.Paint]) -> Tuple[Any, ...]: - return tuple(self._paint_tuple(p) for p in paints) - - def try_reuse(self, layers: List[ot.Paint]) -> List[ot.Paint]: - found_reuse = True - while found_reuse: - found_reuse = False - - ranges = sorted( - _reuse_ranges(len(layers)), - key=lambda t: (t[1] - t[0], t[1], t[0]), - reverse=True, - ) - for lbound, ubound in ranges: - reuse_lbound = self.reusePool.get( - self._as_tuple(layers[lbound:ubound]), -1 - ) - if reuse_lbound == -1: - continue - new_slice = ot.Paint() - new_slice.Format = int(ot.PaintFormat.PaintColrLayers) - new_slice.NumLayers = ubound - lbound - new_slice.FirstLayerIndex = reuse_lbound - layers = layers[:lbound] + [new_slice] + layers[ubound:] - found_reuse = True - break - return layers - - def add(self, layers: List[ot.Paint], first_layer_index: int): - for lbound, ubound in _reuse_ranges(len(layers)): - self.reusePool[self._as_tuple(layers[lbound:ubound])] = ( - lbound + first_layer_index - ) - - 
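The `LayerReuseCache` deleted just above keys a pool on hashable tuples derived from `ot.Paint` slices, so a later glyph's identical run of layers can be replaced by a single record pointing back into the shared layer list. A much-simplified analogue — plain strings standing in for `ot.Paint` objects, and a hypothetical `('reuse', start, count)` tuple standing in for a real `PaintColrLayers` record — sketches the mechanism:

```python
# Simplified analogue of the deleted LayerReuseCache: the pool maps a
# tuple of layers (length >= 2) to its first index in the shared list.
class TinyReuseCache:
    def __init__(self):
        self.pool = {}

    def add(self, layers, first_index):
        # Register every slice of length >= 2 for later reuse.
        for i in range(len(layers)):
            for j in range(i + 2, len(layers) + 1):
                self.pool[tuple(layers[i:j])] = first_index + i

    def try_reuse(self, layers):
        # Replace a previously seen slice with a back-reference,
        # preferring the longest match at each position.
        for i in range(len(layers)):
            for j in range(len(layers), i + 1, -1):
                idx = self.pool.get(tuple(layers[i:j]))
                if idx is not None:
                    return layers[:i] + [("reuse", idx, j - i)] + layers[j:]
        return layers


cache = TinyReuseCache()
cache.add(["a", "b", "c"], first_index=0)
print(cache.try_reuse(["x", "a", "b", "c"]))  # ['x', ('reuse', 0, 3)]
```

The real cache loops `try_reuse` until no further match is found and ranks candidate slices by length before position; this sketch stops after the first hit, which is enough to show why the pool keys must be hashable snapshots of the paint graph.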
-class LayerListBuilder: - layers: List[ot.Paint] - cache: LayerReuseCache - allowLayerReuse: bool - - def __init__(self, *, allowLayerReuse=True): - self.layers = [] - if allowLayerReuse: - self.cache = LayerReuseCache() - else: - self.cache = None - - # We need to intercept construction of PaintColrLayers - callbacks = _buildPaintCallbacks() - callbacks[ - ( - BuildCallback.BEFORE_BUILD, - ot.Paint, - ot.PaintFormat.PaintColrLayers, - ) - ] = self._beforeBuildPaintColrLayers - self.tableBuilder = TableBuilder(callbacks) - - # COLR layers is unusual in that it modifies shared state - # so we need a callback into an object - def _beforeBuildPaintColrLayers(self, dest, source): - # Sketchy gymnastics: a sequence input will have dropped it's layers - # into NumLayers; get it back - if isinstance(source.get("NumLayers", None), collections.abc.Sequence): - layers = source["NumLayers"] - else: - layers = source["Layers"] - - # Convert maps seqs or whatever into typed objects - layers = [self.buildPaint(l) for l in layers] - - # No reason to have a colr layers with just one entry - if len(layers) == 1: - return layers[0], {} - - if self.cache is not None: - # Look for reuse, with preference to longer sequences - # This may make the layer list smaller - layers = self.cache.try_reuse(layers) - - # The layer list is now final; if it's too big we need to tree it - is_tree = len(layers) > MAX_PAINT_COLR_LAYER_COUNT - layers = build_n_ary_tree(layers, n=MAX_PAINT_COLR_LAYER_COUNT) - - # We now have a tree of sequences with Paint leaves. - # Convert the sequences into PaintColrLayers. 
- def listToColrLayers(layer): - if isinstance(layer, collections.abc.Sequence): - return self.buildPaint( - { - "Format": ot.PaintFormat.PaintColrLayers, - "Layers": [listToColrLayers(l) for l in layer], - } - ) - return layer - - layers = [listToColrLayers(l) for l in layers] - - # No reason to have a colr layers with just one entry - if len(layers) == 1: - return layers[0], {} - - paint = ot.Paint() - paint.Format = int(ot.PaintFormat.PaintColrLayers) - paint.NumLayers = len(layers) - paint.FirstLayerIndex = len(self.layers) - self.layers.extend(layers) - - # Register our parts for reuse provided we aren't a tree - # If we are a tree the leaves registered for reuse and that will suffice - if self.cache is not None and not is_tree: - self.cache.add(layers, paint.FirstLayerIndex) - - # we've fully built dest; empty source prevents generalized build from kicking in - return paint, {} - - def buildPaint(self, paint: _PaintInput) -> ot.Paint: - return self.tableBuilder.build(ot.Paint, paint) - - def build(self) -> Optional[ot.LayerList]: - if not self.layers: - return None - layers = ot.LayerList() - layers.LayerCount = len(self.layers) - layers.Paint = self.layers - return layers - - -def buildBaseGlyphPaintRecord( - baseGlyph: str, layerBuilder: LayerListBuilder, paint: _PaintInput -) -> ot.BaseGlyphList: - self = ot.BaseGlyphPaintRecord() - self.BaseGlyph = baseGlyph - self.Paint = layerBuilder.buildPaint(paint) - return self - - -def _format_glyph_errors(errors: Mapping[str, Exception]) -> str: - lines = [] - for baseGlyph, error in sorted(errors.items()): - lines.append(f" {baseGlyph} => {type(error).__name__}: {error}") - return "\n".join(lines) - - -def buildColrV1( - colorGlyphs: _ColorGlyphsDict, - glyphMap: Optional[Mapping[str, int]] = None, - *, - allowLayerReuse: bool = True, -) -> Tuple[Optional[ot.LayerList], ot.BaseGlyphList]: - if glyphMap is not None: - colorGlyphItems = sorted( - colorGlyphs.items(), key=lambda item: glyphMap[item[0]] - ) - else: - 
colorGlyphItems = colorGlyphs.items() - - errors = {} - baseGlyphs = [] - layerBuilder = LayerListBuilder(allowLayerReuse=allowLayerReuse) - for baseGlyph, paint in colorGlyphItems: - try: - baseGlyphs.append(buildBaseGlyphPaintRecord(baseGlyph, layerBuilder, paint)) - - except (ColorLibError, OverflowError, ValueError, TypeError) as e: - errors[baseGlyph] = e - - if errors: - failed_glyphs = _format_glyph_errors(errors) - exc = ColorLibError(f"Failed to build BaseGlyphList:\n{failed_glyphs}") - exc.errors = errors - raise exc from next(iter(errors.values())) - - layers = layerBuilder.build() - glyphs = ot.BaseGlyphList() - glyphs.BaseGlyphCount = len(baseGlyphs) - glyphs.BaseGlyphPaintRecord = baseGlyphs - return (layers, glyphs) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/errors.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/errors.py deleted file mode 100644 index 18cbebba..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/errors.py +++ /dev/null @@ -1,2 +0,0 @@ -class ColorLibError(Exception): - pass diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/geometry.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/geometry.py deleted file mode 100644 index 1ce161bf..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/geometry.py +++ /dev/null @@ -1,143 +0,0 @@ -"""Helpers for manipulating 2D points and vectors in COLR table.""" - -from math import copysign, cos, hypot, isclose, pi -from fontTools.misc.roundTools import otRound - - -def _vector_between(origin, target): - return (target[0] - origin[0], target[1] - origin[1]) - - -def _round_point(pt): - return (otRound(pt[0]), otRound(pt[1])) - - -def _unit_vector(vec): - length = hypot(*vec) - if length == 0: - return None - return (vec[0] / length, vec[1] / length) - - -_CIRCLE_INSIDE_TOLERANCE = 1e-4 - - -# The unit vector's X and Y components are respectively -# U = (cos(α), sin(α)) -# where α is the
angle between the unit vector and the positive x axis. -_UNIT_VECTOR_THRESHOLD = cos(3 / 8 * pi) # == sin(1/8 * pi) == 0.38268343236508984 - - -def _rounding_offset(direction): - # Return 2-tuple of -/+ 1.0 or 0.0 approximately based on the direction vector. - # We divide the unit circle in 8 equal slices oriented towards the cardinal - # (N, E, S, W) and intermediate (NE, SE, SW, NW) directions. To each slice we - # map one of the possible cases: -1, 0, +1 for either X and Y coordinate. - # E.g. Return (+1.0, -1.0) if unit vector is oriented towards SE, or - # (-1.0, 0.0) if it's pointing West, etc. - uv = _unit_vector(direction) - if not uv: - return (0, 0) - - result = [] - for uv_component in uv: - if -_UNIT_VECTOR_THRESHOLD <= uv_component < _UNIT_VECTOR_THRESHOLD: - # unit vector component near 0: direction almost orthogonal to the - # direction of the current axis, thus keep coordinate unchanged - result.append(0) - else: - # nudge coord by +/- 1.0 in direction of unit vector - result.append(copysign(1.0, uv_component)) - return tuple(result) - - -class Circle: - def __init__(self, centre, radius): - self.centre = centre - self.radius = radius - - def __repr__(self): - return f"Circle(centre={self.centre}, radius={self.radius})" - - def round(self): - return Circle(_round_point(self.centre), otRound(self.radius)) - - def inside(self, outer_circle, tolerance=_CIRCLE_INSIDE_TOLERANCE): - dist = self.radius + hypot(*_vector_between(self.centre, outer_circle.centre)) - return ( - isclose(outer_circle.radius, dist, rel_tol=_CIRCLE_INSIDE_TOLERANCE) - or outer_circle.radius > dist - ) - - def concentric(self, other): - return self.centre == other.centre - - def move(self, dx, dy): - self.centre = (self.centre[0] + dx, self.centre[1] + dy) - - -def round_start_circle_stable_containment(c0, r0, c1, r1): - """Round start circle so that it stays inside/outside end circle after rounding. 
- - The rounding of circle coordinates to integers may cause an abrupt change - if the start circle c0 is so close to the end circle c1's perimiter that - it ends up falling outside (or inside) as a result of the rounding. - To keep the gradient unchanged, we nudge it in the right direction. - - See: - https://github.com/googlefonts/colr-gradients-spec/issues/204 - https://github.com/googlefonts/picosvg/issues/158 - """ - start, end = Circle(c0, r0), Circle(c1, r1) - - inside_before_round = start.inside(end) - - round_start = start.round() - round_end = end.round() - inside_after_round = round_start.inside(round_end) - - if inside_before_round == inside_after_round: - return round_start - elif inside_after_round: - # start was outside before rounding: we need to push start away from end - direction = _vector_between(round_end.centre, round_start.centre) - radius_delta = +1.0 - else: - # start was inside before rounding: we need to push start towards end - direction = _vector_between(round_start.centre, round_end.centre) - radius_delta = -1.0 - dx, dy = _rounding_offset(direction) - - # At most 2 iterations ought to be enough to converge. Before the loop, we - # know the start circle didn't keep containment after normal rounding; thus - # we continue adjusting by -/+ 1.0 until containment is restored. - # Normal rounding can at most move each coordinates -/+0.5; in the worst case - # both the start and end circle's centres and radii will be rounded in opposite - # directions, e.g. when they move along a 45 degree diagonal: - # c0 = (1.5, 1.5) ===> (2.0, 2.0) - # r0 = 0.5 ===> 1.0 - # c1 = (0.499, 0.499) ===> (0.0, 0.0) - # r1 = 2.499 ===> 2.0 - # In this example, the relative distance between the circles, calculated - # as r1 - (r0 + distance(c0, c1)) is initially 0.57437 (c0 is inside c1), and - # -1.82842 after rounding (c0 is now outside c1). Nudging c0 by -1.0 on both - # x and y axes moves it towards c1 by hypot(-1.0, -1.0) = 1.41421. 
Two of these - # moves cover twice that distance, which is enough to restore containment. - max_attempts = 2 - for _ in range(max_attempts): - if round_start.concentric(round_end): - # can't move c0 towards c1 (they are the same), so we change the radius - round_start.radius += radius_delta - assert round_start.radius >= 0 - else: - round_start.move(dx, dy) - if inside_before_round == round_start.inside(round_end): - break - else: # likely a bug - raise AssertionError( - f"Rounding circle {start} " - f"{'inside' if inside_before_round else 'outside'} " - f"{end} failed after {max_attempts} attempts!" - ) - - return round_start diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/table_builder.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/table_builder.py deleted file mode 100644 index f1e182c4..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/table_builder.py +++ /dev/null @@ -1,223 +0,0 @@ -""" -colorLib.table_builder: Generic helper for filling in BaseTable derivatives from tuples and maps and such. - -""" - -import collections -import enum -from fontTools.ttLib.tables.otBase import ( - BaseTable, - FormatSwitchingBaseTable, - UInt8FormatSwitchingBaseTable, -) -from fontTools.ttLib.tables.otConverters import ( - ComputedInt, - SimpleValue, - Struct, - Short, - UInt8, - UShort, - IntValue, - FloatValue, - OptionalValue, -) -from fontTools.misc.roundTools import otRound - - -class BuildCallback(enum.Enum): - """Keyed on (BEFORE_BUILD, class[, Format if available]). - Receives (dest, source). - Should return (dest, source), which can be new objects. - """ - - BEFORE_BUILD = enum.auto() - - """Keyed on (AFTER_BUILD, class[, Format if available]). - Receives (dest). - Should return dest, which can be a new object. - """ - AFTER_BUILD = enum.auto() - - """Keyed on (CREATE_DEFAULT, class[, Format if available]). - Receives no arguments. - Should return a new instance of class. 
- """ - CREATE_DEFAULT = enum.auto() - - -def _assignable(convertersByName): - return {k: v for k, v in convertersByName.items() if not isinstance(v, ComputedInt)} - - -def _isNonStrSequence(value): - return isinstance(value, collections.abc.Sequence) and not isinstance(value, str) - - -def _split_format(cls, source): - if _isNonStrSequence(source): - assert len(source) > 0, f"{cls} needs at least format from {source}" - fmt, remainder = source[0], source[1:] - elif isinstance(source, collections.abc.Mapping): - assert "Format" in source, f"{cls} needs at least Format from {source}" - remainder = source.copy() - fmt = remainder.pop("Format") - else: - raise ValueError(f"Not sure how to populate {cls} from {source}") - - assert isinstance( - fmt, collections.abc.Hashable - ), f"{cls} Format is not hashable: {fmt!r}" - assert fmt in cls.convertersByName, f"{cls} invalid Format: {fmt!r}" - - return fmt, remainder - - -class TableBuilder: - """ - Helps to populate things derived from BaseTable from maps, tuples, etc. - - A table of lifecycle callbacks may be provided to add logic beyond what is possible - based on otData info for the target class. See BuildCallbacks. 
- """ - - def __init__(self, callbackTable=None): - if callbackTable is None: - callbackTable = {} - self._callbackTable = callbackTable - - def _convert(self, dest, field, converter, value): - enumClass = getattr(converter, "enumClass", None) - - if enumClass: - if isinstance(value, enumClass): - pass - elif isinstance(value, str): - try: - value = getattr(enumClass, value.upper()) - except AttributeError: - raise ValueError(f"{value} is not a valid {enumClass}") - else: - value = enumClass(value) - - elif isinstance(converter, IntValue): - value = otRound(value) - elif isinstance(converter, FloatValue): - value = float(value) - - elif isinstance(converter, Struct): - if converter.repeat: - if _isNonStrSequence(value): - value = [self.build(converter.tableClass, v) for v in value] - else: - value = [self.build(converter.tableClass, value)] - setattr(dest, converter.repeat, len(value)) - else: - value = self.build(converter.tableClass, value) - elif callable(converter): - value = converter(value) - - setattr(dest, field, value) - - def build(self, cls, source): - assert issubclass(cls, BaseTable) - - if isinstance(source, cls): - return source - - callbackKey = (cls,) - fmt = None - if issubclass(cls, FormatSwitchingBaseTable): - fmt, source = _split_format(cls, source) - callbackKey = (cls, fmt) - - dest = self._callbackTable.get( - (BuildCallback.CREATE_DEFAULT,) + callbackKey, lambda: cls() - )() - assert isinstance(dest, cls) - - convByName = _assignable(cls.convertersByName) - skippedFields = set() - - # For format switchers we need to resolve converters based on format - if issubclass(cls, FormatSwitchingBaseTable): - dest.Format = fmt - convByName = _assignable(convByName[dest.Format]) - skippedFields.add("Format") - - # Convert sequence => mapping so before thunk only has to handle one format - if _isNonStrSequence(source): - # Sequence (typically list or tuple) assumed to match fields in declaration order - assert len(source) <= len( - convByName - ), 
f"Sequence of {len(source)} too long for {cls}; expected <= {len(convByName)} values" - source = dict(zip(convByName.keys(), source)) - - dest, source = self._callbackTable.get( - (BuildCallback.BEFORE_BUILD,) + callbackKey, lambda d, s: (d, s) - )(dest, source) - - if isinstance(source, collections.abc.Mapping): - for field, value in source.items(): - if field in skippedFields: - continue - converter = convByName.get(field, None) - if not converter: - raise ValueError( - f"Unrecognized field {field} for {cls}; expected one of {sorted(convByName.keys())}" - ) - self._convert(dest, field, converter, value) - else: - # let's try as a 1-tuple - dest = self.build(cls, (source,)) - - for field, conv in convByName.items(): - if not hasattr(dest, field) and isinstance(conv, OptionalValue): - setattr(dest, field, conv.DEFAULT) - - dest = self._callbackTable.get( - (BuildCallback.AFTER_BUILD,) + callbackKey, lambda d: d - )(dest) - - return dest - - -class TableUnbuilder: - def __init__(self, callbackTable=None): - if callbackTable is None: - callbackTable = {} - self._callbackTable = callbackTable - - def unbuild(self, table): - assert isinstance(table, BaseTable) - - source = {} - - callbackKey = (type(table),) - if isinstance(table, FormatSwitchingBaseTable): - source["Format"] = int(table.Format) - callbackKey += (table.Format,) - - for converter in table.getConverters(): - if isinstance(converter, ComputedInt): - continue - value = getattr(table, converter.name) - - enumClass = getattr(converter, "enumClass", None) - if enumClass: - source[converter.name] = value.name.lower() - elif isinstance(converter, Struct): - if converter.repeat: - source[converter.name] = [self.unbuild(v) for v in value] - else: - source[converter.name] = self.unbuild(value) - elif isinstance(converter, SimpleValue): - # "simple" values (e.g. 
int, float, str) need no further un-building - source[converter.name] = value - else: - raise NotImplementedError( - "Don't know how unbuild {value!r} with {converter!r}" - ) - - source = self._callbackTable.get(callbackKey, lambda s: s)(source) - - return source diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/unbuilder.py b/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/unbuilder.py deleted file mode 100644 index ac243550..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/colorLib/unbuilder.py +++ /dev/null @@ -1,81 +0,0 @@ -from fontTools.ttLib.tables import otTables as ot -from .table_builder import TableUnbuilder - - -def unbuildColrV1(layerList, baseGlyphList): - layers = [] - if layerList: - layers = layerList.Paint - unbuilder = LayerListUnbuilder(layers) - return { - rec.BaseGlyph: unbuilder.unbuildPaint(rec.Paint) - for rec in baseGlyphList.BaseGlyphPaintRecord - } - - -def _flatten_layers(lst): - for paint in lst: - if paint["Format"] == ot.PaintFormat.PaintColrLayers: - yield from _flatten_layers(paint["Layers"]) - else: - yield paint - - -class LayerListUnbuilder: - def __init__(self, layers): - self.layers = layers - - callbacks = { - ( - ot.Paint, - ot.PaintFormat.PaintColrLayers, - ): self._unbuildPaintColrLayers, - } - self.tableUnbuilder = TableUnbuilder(callbacks) - - def unbuildPaint(self, paint): - assert isinstance(paint, ot.Paint) - return self.tableUnbuilder.unbuild(paint) - - def _unbuildPaintColrLayers(self, source): - assert source["Format"] == ot.PaintFormat.PaintColrLayers - - layers = list( - _flatten_layers( - [ - self.unbuildPaint(childPaint) - for childPaint in self.layers[ - source["FirstLayerIndex"] : source["FirstLayerIndex"] - + source["NumLayers"] - ] - ] - ) - ) - - if len(layers) == 1: - return layers[0] - - return {"Format": source["Format"], "Layers": layers} - - -if __name__ == "__main__": - from pprint import pprint - import sys - from fontTools.ttLib import TTFont - - try: - 
fontfile = sys.argv[1] - except IndexError: - sys.exit("usage: fonttools colorLib.unbuilder FONTFILE") - - font = TTFont(fontfile) - colr = font["COLR"] - if colr.version < 1: - sys.exit(f"error: No COLR table version=1 found in {fontfile}") - - colorGlyphs = unbuildColrV1( - colr.table.LayerList, - colr.table.BaseGlyphList, - ) - - pprint(colorGlyphs) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/config/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/config/__init__.py deleted file mode 100644 index ff0328a3..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/config/__init__.py +++ /dev/null @@ -1,90 +0,0 @@ -""" -Define all configuration options that can affect the working of fontTools -modules. E.g. optimization levels of varLib IUP, otlLib GPOS compression level, -etc. If this file gets too big, split it into smaller files per-module. - -An instance of the Config class can be attached to a TTFont object, so that -the various modules can access their configuration options from it. -""" - -from textwrap import dedent - -from fontTools.misc.configTools import * - - -class Config(AbstractConfig): - options = Options() - - -OPTIONS = Config.options - - -Config.register_option( - name="fontTools.otlLib.optimize.gpos:COMPRESSION_LEVEL", - help=dedent( - """\ - GPOS Lookup type 2 (PairPos) compression level: - 0 = do not attempt to compact PairPos lookups; - 1 to 8 = create at most 1 to 8 new subtables for each existing - subtable, provided that it would yield a 50%% file size saving; - 9 = create as many new subtables as needed to yield a file size saving. - Default: 0. - - This compaction aims to save file size, by splitting large class - kerning subtables (Format 2) that contain many zero values into - smaller and denser subtables. It's a trade-off between the overhead - of several subtables versus the sparseness of one big subtable. 
- - See the pull request: https://github.com/fonttools/fonttools/pull/2326 - """ - ), - default=0, - parse=int, - validate=lambda v: v in range(10), -) - -Config.register_option( - name="fontTools.ttLib.tables.otBase:USE_HARFBUZZ_REPACKER", - help=dedent( - """\ - FontTools tries to use the HarfBuzz Repacker to serialize GPOS/GSUB tables - if the uharfbuzz python bindings are importable, otherwise falls back to its - slower, less efficient serializer. Set to False to always use the latter. - Set to True to explicitly request the HarfBuzz Repacker (will raise an - error if uharfbuzz cannot be imported). - """ - ), - default=None, - parse=Option.parse_optional_bool, - validate=Option.validate_optional_bool, -) - -Config.register_option( - name="fontTools.otlLib.builder:WRITE_GPOS7", - help=dedent( - """\ - macOS before 13.2 didn't support GPOS LookupType 7 (non-chaining - ContextPos lookups), so FontTools.otlLib.builder disables a file size - optimisation that would use LookupType 7 instead of 8 when there is no - chaining (no prefix or suffix). Set to True to enable the optimization. - """ - ), - default=False, - parse=Option.parse_optional_bool, - validate=Option.validate_optional_bool, -) - -Config.register_option( - name="fontTools.ttLib:OPTIMIZE_FONT_SPEED", - help=dedent( - """\ - Enable optimizations that prioritize speed over file size. This - mainly affects how glyf table and gvar / VARC tables are compiled. - The produced fonts will be larger, but rendering performance will - be improved with HarfBuzz and other text layout engines.
- """ - ), - default=False, - parse=Option.parse_optional_bool, - validate=Option.validate_optional_bool, -) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/config/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/config/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index e456b3e4..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/config/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__init__.py deleted file mode 100644 index 4ae6356e..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from .cu2qu import * diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__main__.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__main__.py deleted file mode 100644 index 5205ffee..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__main__.py +++ /dev/null @@ -1,6 +0,0 @@ -import sys -from .cli import _main as main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__init__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__init__.cpython-312.pyc deleted file mode 100644 index afee069f..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__init__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__main__.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__main__.cpython-312.pyc deleted file mode 100644 index 33030749..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/__main__.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/benchmark.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/benchmark.cpython-312.pyc deleted file mode 100644 index c0f1992b..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/benchmark.cpython-312.pyc and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cli.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cli.cpython-312.pyc deleted file mode 100644 index 2d52b603..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cli.cpython-312.pyc and /dev/null differ diff --git 
a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cu2qu.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cu2qu.cpython-312.pyc
deleted file mode 100644
index c881213e..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/cu2qu.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/errors.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/errors.cpython-312.pyc
deleted file mode 100644
index c43b8c26..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/errors.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/ufo.cpython-312.pyc b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/ufo.cpython-312.pyc
deleted file mode 100644
index e365088a..00000000
Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/__pycache__/ufo.cpython-312.pyc and /dev/null differ
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/benchmark.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/benchmark.py
deleted file mode 100644
index 007f75d8..00000000
--- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/benchmark.py
+++ /dev/null
@@ -1,54 +0,0 @@
-"""Benchmark the cu2qu algorithm performance."""
-
-from .cu2qu import *
-import random
-import timeit
-
-MAX_ERR = 0.05
-
-
-def generate_curve():
-    return [
-        tuple(float(random.randint(0, 2048)) for coord in range(2))
-        for point in range(4)
-    ]
-
-
-def setup_curve_to_quadratic():
-    return generate_curve(), MAX_ERR
-
-
-def setup_curves_to_quadratic():
-    num_curves = 3
-    return ([generate_curve() for curve in range(num_curves)], [MAX_ERR] * num_curves)
-
-
-def run_benchmark(module, function, setup_suffix="", repeat=5, number=1000):
-    setup_func = "setup_" + function
-    if setup_suffix:
-        print("%s with %s:" % (function, setup_suffix), end="")
-        setup_func += "_" + setup_suffix
-    else:
-        print("%s:" % function, end="")
-
-    def wrapper(function, setup_func):
-        function = globals()[function]
-        setup_func = globals()[setup_func]
-
-        def wrapped():
-            return function(*setup_func())
-
-        return wrapped
-
-    results = timeit.repeat(wrapper(function, setup_func), repeat=repeat, number=number)
-    print("\t%5.1fus" % (min(results) * 1000000.0 / number))
-
-
-def main():
-    run_benchmark("cu2qu", "curve_to_quadratic")
-    run_benchmark("cu2qu", "curves_to_quadratic")
-
-
-if __name__ == "__main__":
-    random.seed(1)
-    main()
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cli.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cli.py
deleted file mode 100644
index ddc64502..00000000
--- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cli.py
+++ /dev/null
@@ -1,198 +0,0 @@
-import os
-import argparse
-import logging
-import shutil
-import multiprocessing as mp
-from contextlib import closing
-from functools import partial
-
-import fontTools
-from .ufo import font_to_quadratic, fonts_to_quadratic
-
-ufo_module = None
-try:
-    import ufoLib2 as ufo_module
-except ImportError:
-    try:
-        import defcon as ufo_module
-    except ImportError as e:
-        pass
-
-
-logger = logging.getLogger("fontTools.cu2qu")
-
-
-def _cpu_count():
-    try:
-        return mp.cpu_count()
-    except NotImplementedError:  # pragma: no cover
-        return 1
-
-
-def open_ufo(path):
-    if hasattr(ufo_module.Font, "open"):  # ufoLib2
-        return ufo_module.Font.open(path)
-    return ufo_module.Font(path)  # defcon
-
-
-def _font_to_quadratic(input_path, output_path=None, **kwargs):
-    ufo = open_ufo(input_path)
-    logger.info("Converting curves for %s", input_path)
-    if font_to_quadratic(ufo, **kwargs):
-        logger.info("Saving %s", output_path)
-        if output_path:
-            ufo.save(output_path)
-        else:
-            ufo.save()  # save in-place
-    elif output_path:
-        _copytree(input_path, output_path)
-
-
-def _samepath(path1, path2):
-    # TODO on python3+, there's os.path.samefile
-    path1 = os.path.normcase(os.path.abspath(os.path.realpath(path1)))
-    path2 = os.path.normcase(os.path.abspath(os.path.realpath(path2)))
-    return path1 == path2
-
-
-def _copytree(input_path, output_path):
-    if _samepath(input_path, output_path):
-        logger.debug("input and output paths are the same file; skipped copy")
-        return
-    if os.path.exists(output_path):
-        shutil.rmtree(output_path)
-    shutil.copytree(input_path, output_path)
-
-
-def _main(args=None):
-    """Convert a UFO font from cubic to quadratic curves"""
-    parser = argparse.ArgumentParser(prog="cu2qu")
-    parser.add_argument("--version", action="version", version=fontTools.__version__)
-    parser.add_argument(
-        "infiles",
-        nargs="+",
-        metavar="INPUT",
-        help="one or more input UFO source file(s).",
-    )
-    parser.add_argument("-v", "--verbose", action="count", default=0)
-    parser.add_argument(
-        "-e",
-        "--conversion-error",
-        type=float,
-        metavar="ERROR",
-        default=None,
-        help="maxiumum approximation error measured in EM (default: 0.001)",
-    )
-    parser.add_argument(
-        "-m",
-        "--mixed",
-        default=False,
-        action="store_true",
-        help="whether to used mixed quadratic and cubic curves",
-    )
-    parser.add_argument(
-        "--keep-direction",
-        dest="reverse_direction",
-        action="store_false",
-        help="do not reverse the contour direction",
-    )
-
-    mode_parser = parser.add_mutually_exclusive_group()
-    mode_parser.add_argument(
-        "-i",
-        "--interpolatable",
-        action="store_true",
-        help="whether curve conversion should keep interpolation compatibility",
-    )
-    mode_parser.add_argument(
-        "-j",
-        "--jobs",
-        type=int,
-        nargs="?",
-        default=1,
-        const=_cpu_count(),
-        metavar="N",
-        help="Convert using N multiple processes (default: %(default)s)",
-    )
-
-    output_parser = parser.add_mutually_exclusive_group()
-    output_parser.add_argument(
-        "-o",
-        "--output-file",
-        default=None,
-        metavar="OUTPUT",
-        help=(
-            "output filename for the converted UFO. By default fonts are "
-            "modified in place. This only works with a single input."
-        ),
-    )
-    output_parser.add_argument(
-        "-d",
-        "--output-dir",
-        default=None,
-        metavar="DIRECTORY",
-        help="output directory where to save converted UFOs",
-    )
-
-    options = parser.parse_args(args)
-
-    if ufo_module is None:
-        parser.error("Either ufoLib2 or defcon are required to run this script.")
-
-    if not options.verbose:
-        level = "WARNING"
-    elif options.verbose == 1:
-        level = "INFO"
-    else:
-        level = "DEBUG"
-    logging.basicConfig(level=level)
-
-    if len(options.infiles) > 1 and options.output_file:
-        parser.error("-o/--output-file can't be used with multile inputs")
-
-    if options.output_dir:
-        output_dir = options.output_dir
-        if not os.path.exists(output_dir):
-            os.mkdir(output_dir)
-        elif not os.path.isdir(output_dir):
-            parser.error("'%s' is not a directory" % output_dir)
-        output_paths = [
-            os.path.join(output_dir, os.path.basename(p)) for p in options.infiles
-        ]
-    elif options.output_file:
-        output_paths = [options.output_file]
-    else:
-        # save in-place
-        output_paths = [None] * len(options.infiles)
-
-    kwargs = dict(
-        dump_stats=options.verbose > 0,
-        max_err_em=options.conversion_error,
-        reverse_direction=options.reverse_direction,
-        all_quadratic=False if options.mixed else True,
-    )
-
-    if options.interpolatable:
-        logger.info("Converting curves compatibly")
-        ufos = [open_ufo(infile) for infile in options.infiles]
-        if fonts_to_quadratic(ufos, **kwargs):
-            for ufo, output_path in zip(ufos, output_paths):
-                logger.info("Saving %s", output_path)
-                if output_path:
-                    ufo.save(output_path)
-                else:
-                    ufo.save()
-        else:
-            for input_path, output_path in zip(options.infiles, output_paths):
-                if output_path:
-                    _copytree(input_path, output_path)
-    else:
-        jobs = min(len(options.infiles), options.jobs) if options.jobs > 1 else 1
-        if jobs > 1:
-            func = partial(_font_to_quadratic, **kwargs)
-            logger.info("Running %d parallel processes", jobs)
-            with closing(mp.Pool(jobs)) as pool:
-                pool.starmap(func, zip(options.infiles, output_paths))
-        else:
-            for input_path, output_path in zip(options.infiles, output_paths):
-                _font_to_quadratic(input_path, output_path, **kwargs)
diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.c b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.c
deleted file mode 100644
index 0b26ad1b..00000000
--- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.c
+++ /dev/null
@@ -1,15773 +0,0 @@
-/* Generated by Cython 3.1.4 */
-
-/* BEGIN: Cython Metadata
-{
-    "distutils": {
-        "define_macros": [
-            [
-                "CYTHON_TRACE_NOGIL",
-                "1"
-            ]
-        ],
-        "name": "fontTools.cu2qu.cu2qu",
-        "sources": [
-            "Lib/fontTools/cu2qu/cu2qu.py"
-        ]
-    },
-    "module_name": "fontTools.cu2qu.cu2qu"
-}
-END: Cython Metadata */
-
-#ifndef PY_SSIZE_T_CLEAN
-#define PY_SSIZE_T_CLEAN
-#endif /* PY_SSIZE_T_CLEAN */
-/* InitLimitedAPI */
-#if defined(Py_LIMITED_API) && !defined(CYTHON_LIMITED_API)
-    #define CYTHON_LIMITED_API 1
-#endif
-
-#include "Python.h"
-#ifndef Py_PYTHON_H
-    #error Python headers needed to compile C extensions, please install development version of Python.
-#elif PY_VERSION_HEX < 0x03080000
-    #error Cython requires Python 3.8+.
-#else -#define __PYX_ABI_VERSION "3_1_4" -#define CYTHON_HEX_VERSION 0x030104F0 -#define CYTHON_FUTURE_DIVISION 1 -/* CModulePreamble */ -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(_WIN32) && !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #define HAVE_LONG_LONG -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#define __PYX_LIMITED_VERSION_HEX PY_VERSION_HEX -#if defined(GRAALVM_PYTHON) - /* For very preliminary testing purposes. Most variables are set the same as PyPy. - The existence of this section does not imply that anything works or is even tested */ - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 1 - #define CYTHON_COMPILING_IN_CPYTHON_FREETHREADING 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - #define CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef 
CYTHON_ASSUME_SAFE_SIZE - #define CYTHON_ASSUME_SAFE_SIZE 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_SYS_MONITORING - #define CYTHON_USE_SYS_MONITORING 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_AM_SEND - #define CYTHON_USE_AM_SEND 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 1 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif - #undef CYTHON_USE_FREELISTS - #define CYTHON_USE_FREELISTS 0 -#elif defined(PYPY_VERSION) - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_CPYTHON_FREETHREADING 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - #define 
CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #ifndef CYTHON_ASSUME_SAFE_SIZE - #define CYTHON_ASSUME_SAFE_SIZE 1 - #endif - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #if PY_VERSION_HEX < 0x03090000 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #elif !defined(CYTHON_PEP489_MULTI_PHASE_INIT) - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #undef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #undef CYTHON_USE_SYS_MONITORING - #define CYTHON_USE_SYS_MONITORING 0 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PYPY_VERSION_NUM >= 0x07030C00) - #endif - #undef CYTHON_USE_AM_SEND - #define CYTHON_USE_AM_SEND 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC (PYPY_VERSION_NUM >= 0x07031100) - #endif - #undef CYTHON_USE_FREELISTS - #define CYTHON_USE_FREELISTS 0 -#elif defined(CYTHON_LIMITED_API) - #ifdef Py_LIMITED_API - #undef __PYX_LIMITED_VERSION_HEX - #define __PYX_LIMITED_VERSION_HEX Py_LIMITED_API - #endif - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_LIMITED_API 1 - #define CYTHON_COMPILING_IN_GRAAL 0 - #define CYTHON_COMPILING_IN_CPYTHON_FREETHREADING 0 - #undef CYTHON_CLINE_IN_TRACEBACK - #define CYTHON_CLINE_IN_TRACEBACK 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_TYPE_SPECS - #define 
CYTHON_USE_TYPE_SPECS 1 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #endif - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - #define CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS 0 - #endif - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_ASSUME_SAFE_SIZE - #define CYTHON_ASSUME_SAFE_SIZE 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #undef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL (__PYX_LIMITED_VERSION_HEX >= 0x030C0000) - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #undef CYTHON_USE_SYS_MONITORING - #define CYTHON_USE_SYS_MONITORING 0 - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #endif - #ifndef CYTHON_USE_AM_SEND - #define CYTHON_USE_AM_SEND (__PYX_LIMITED_VERSION_HEX >= 0x030A0000) - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif - #undef CYTHON_USE_FREELISTS - #define CYTHON_USE_FREELISTS 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 
- #define CYTHON_COMPILING_IN_LIMITED_API 0 - #define CYTHON_COMPILING_IN_GRAAL 0 - #ifdef Py_GIL_DISABLED - #define CYTHON_COMPILING_IN_CPYTHON_FREETHREADING 1 - #else - #define CYTHON_COMPILING_IN_CPYTHON_FREETHREADING 0 - #endif - #if PY_VERSION_HEX < 0x030A0000 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #elif !defined(CYTHON_USE_TYPE_SLOTS) - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #ifndef CYTHON_USE_TYPE_SPECS - #define CYTHON_USE_TYPE_SPECS 0 - #endif - #ifndef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #ifndef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLIST_INTERNALS) - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - #undef CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - #define CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS 1 - #elif !defined(CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS) - #define CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_ASSUME_SAFE_SIZE - #define CYTHON_ASSUME_SAFE_SIZE 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #ifndef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - #undef CYTHON_FAST_GIL - #define CYTHON_FAST_GIL 0 - #elif 
!defined(CYTHON_FAST_GIL) - #define CYTHON_FAST_GIL (PY_VERSION_HEX < 0x030C00A6) - #endif - #ifndef CYTHON_METH_FASTCALL - #define CYTHON_METH_FASTCALL 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 1 - #endif - #ifndef CYTHON_PEP487_INIT_SUBCLASS - #define CYTHON_PEP487_INIT_SUBCLASS 1 - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_MODULE_STATE - #define CYTHON_USE_MODULE_STATE 0 - #endif - #ifndef CYTHON_USE_SYS_MONITORING - #define CYTHON_USE_SYS_MONITORING (PY_VERSION_HEX >= 0x030d00B1) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #ifndef CYTHON_USE_AM_SEND - #define CYTHON_USE_AM_SEND 1 - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #elif !defined(CYTHON_USE_DICT_VERSIONS) - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX < 0x030C00A5 && !CYTHON_USE_MODULE_STATE) - #endif - #ifndef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 1 - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif - #ifndef CYTHON_USE_FREELISTS - #define CYTHON_USE_FREELISTS (!CYTHON_COMPILING_IN_CPYTHON_FREETHREADING) - #endif -#endif -#ifndef CYTHON_FAST_PYCCALL -#define CYTHON_FAST_PYCCALL CYTHON_FAST_PYCALL -#endif -#ifndef CYTHON_VECTORCALL -#if CYTHON_COMPILING_IN_LIMITED_API -#define CYTHON_VECTORCALL (__PYX_LIMITED_VERSION_HEX >= 0x030C0000) -#else -#define CYTHON_VECTORCALL (CYTHON_FAST_PYCCALL && PY_VERSION_HEX >= 0x030800B1) -#endif -#endif -#define CYTHON_BACKPORT_VECTORCALL (CYTHON_METH_FASTCALL && PY_VERSION_HEX < 0x030800B1) -#if CYTHON_USE_PYLONG_INTERNALS - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef CYTHON_LOCK_AND_GIL_DEADLOCK_AVOIDANCE_TIME - #define 
CYTHON_LOCK_AND_GIL_DEADLOCK_AVOIDANCE_TIME 100 -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(maybe_unused) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(maybe_unused) - #define CYTHON_UNUSED [[maybe_unused]] - #endif - #endif - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR - #define CYTHON_MAYBE_UNUSED_VAR(x) CYTHON_UNUSED_VAR(x) -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON && !CYTHON_COMPILING_IN_CPYTHON_FREETHREADING -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_USE_CPP_STD_MOVE - #if defined(__cplusplus) && (\ - __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1600)) - #define CYTHON_USE_CPP_STD_MOVE 1 - #else - #define 
CYTHON_USE_CPP_STD_MOVE 0 - #endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef _MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned short uint16_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int16 uint16_t; - typedef unsigned __int32 uint32_t; - #endif - #endif - #if _MSC_VER < 1300 - #ifdef _WIN64 - typedef unsigned long long __pyx_uintptr_t; - #else - typedef unsigned int __pyx_uintptr_t; - #endif - #else - #ifdef _WIN64 - typedef unsigned __int64 __pyx_uintptr_t; - #else - typedef unsigned __int32 __pyx_uintptr_t; - #endif - #endif -#else - #include - typedef uintptr_t __pyx_uintptr_t; -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) - /* for clang __has_cpp_attribute(fallthrough) is true even before C++17 - * but leads to warnings with -pedantic, since it is a C++17 feature */ - #if ((defined(_MSVC_LANG) && _MSVC_LANG >= 201703L) || __cplusplus >= 201703L) - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif -#ifndef Py_UNREACHABLE - #define Py_UNREACHABLE() assert(0); abort() -#endif -#ifdef __cplusplus - template - struct __PYX_IS_UNSIGNED_IMPL {static const bool value = T(0) < T(-1);}; - #define __PYX_IS_UNSIGNED(type) 
(__PYX_IS_UNSIGNED_IMPL::value) -#else - #define __PYX_IS_UNSIGNED(type) (((type)-1) > 0) -#endif -#if CYTHON_COMPILING_IN_PYPY == 1 - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x030A0000) -#else - #define __PYX_NEED_TP_PRINT_SLOT (PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000) -#endif -#define __PYX_REINTERPRET_FUNCION(func_pointer, other_pointer) ((func_pointer)(void(*)(void))(other_pointer)) - -/* CInitCode */ -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -/* PythonCompatibility */ -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#define __Pyx_BUILTIN_MODULE_NAME "builtins" -#define __Pyx_DefaultClassType PyType_Type -#if CYTHON_COMPILING_IN_LIMITED_API - #ifndef CO_OPTIMIZED - static int CO_OPTIMIZED; - #endif - #ifndef CO_NEWLOCALS - static int CO_NEWLOCALS; - #endif - #ifndef CO_VARARGS - static int CO_VARARGS; - #endif - #ifndef CO_VARKEYWORDS - static int CO_VARKEYWORDS; - #endif - #ifndef CO_ASYNC_GENERATOR - static int CO_ASYNC_GENERATOR; - #endif - #ifndef CO_GENERATOR - static int CO_GENERATOR; - #endif - #ifndef CO_COROUTINE - static int CO_COROUTINE; - #endif -#else - #ifndef CO_COROUTINE - #define CO_COROUTINE 0x80 - #endif - #ifndef CO_ASYNC_GENERATOR - #define CO_ASYNC_GENERATOR 0x200 - #endif -#endif -static int __Pyx_init_co_variables(void); -#if PY_VERSION_HEX >= 0x030900A4 || defined(Py_IS_TYPE) - #define __Pyx_IS_TYPE(ob, type) Py_IS_TYPE(ob, type) -#else - #define __Pyx_IS_TYPE(ob, type) (((const PyObject*)ob)->ob_type == (type)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_Is) - #define __Pyx_Py_Is(x, y) Py_Is(x, y) -#else - 
#define __Pyx_Py_Is(x, y) ((x) == (y)) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsNone) - #define __Pyx_Py_IsNone(ob) Py_IsNone(ob) -#else - #define __Pyx_Py_IsNone(ob) __Pyx_Py_Is((ob), Py_None) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsTrue) - #define __Pyx_Py_IsTrue(ob) Py_IsTrue(ob) -#else - #define __Pyx_Py_IsTrue(ob) __Pyx_Py_Is((ob), Py_True) -#endif -#if PY_VERSION_HEX >= 0x030A00B1 || defined(Py_IsFalse) - #define __Pyx_Py_IsFalse(ob) Py_IsFalse(ob) -#else - #define __Pyx_Py_IsFalse(ob) __Pyx_Py_Is((ob), Py_False) -#endif -#define __Pyx_NoneAsNull(obj) (__Pyx_Py_IsNone(obj) ? NULL : (obj)) -#if PY_VERSION_HEX >= 0x030900F0 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyObject_GC_IsFinalized(o) PyObject_GC_IsFinalized(o) -#else - #define __Pyx_PyObject_GC_IsFinalized(o) _PyGC_FINALIZED(o) -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef Py_TPFLAGS_SEQUENCE - #define Py_TPFLAGS_SEQUENCE 0 -#endif -#ifndef Py_TPFLAGS_MAPPING - #define Py_TPFLAGS_MAPPING 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#ifndef METH_FASTCALL - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #if PY_VERSION_HEX >= 0x030d00A4 - # define __Pyx_PyCFunctionFast PyCFunctionFast - # define __Pyx_PyCFunctionFastWithKeywords PyCFunctionFastWithKeywords - #else - # define __Pyx_PyCFunctionFast _PyCFunctionFast - # define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords - #endif -#endif -#if 
CYTHON_METH_FASTCALL - #define __Pyx_METH_FASTCALL METH_FASTCALL - #define __Pyx_PyCFunction_FastCall __Pyx_PyCFunctionFast - #define __Pyx_PyCFunction_FastCallWithKeywords __Pyx_PyCFunctionFastWithKeywords -#else - #define __Pyx_METH_FASTCALL METH_VARARGS - #define __Pyx_PyCFunction_FastCall PyCFunction - #define __Pyx_PyCFunction_FastCallWithKeywords PyCFunctionWithKeywords -#endif -#if CYTHON_VECTORCALL - #define __pyx_vectorcallfunc vectorcallfunc - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET PY_VECTORCALL_ARGUMENTS_OFFSET - #define __Pyx_PyVectorcall_NARGS(n) PyVectorcall_NARGS((size_t)(n)) -#elif CYTHON_BACKPORT_VECTORCALL - typedef PyObject *(*__pyx_vectorcallfunc)(PyObject *callable, PyObject *const *args, - size_t nargsf, PyObject *kwnames); - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET ((size_t)1 << (8 * sizeof(size_t) - 1)) - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(((size_t)(n)) & ~__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)) -#else - #define __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET 0 - #define __Pyx_PyVectorcall_NARGS(n) ((Py_ssize_t)(n)) -#endif -#if PY_VERSION_HEX >= 0x030900B1 -#define __Pyx_PyCFunction_CheckExact(func) PyCFunction_CheckExact(func) -#else -#define __Pyx_PyCFunction_CheckExact(func) PyCFunction_Check(func) -#endif -#define __Pyx_CyOrPyCFunction_Check(func) PyCFunction_Check(func) -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CyOrPyCFunction_GET_FUNCTION(func) (((PyCFunctionObject*)(func))->m_ml->ml_meth) -#elif !CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyOrPyCFunction_GET_FUNCTION(func) PyCFunction_GET_FUNCTION(func) -#endif -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_CyOrPyCFunction_GET_FLAGS(func) (((PyCFunctionObject*)(func))->m_ml->ml_flags) -static CYTHON_INLINE PyObject* __Pyx_CyOrPyCFunction_GET_SELF(PyObject *func) { - return (__Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_STATIC) ? 
NULL : ((PyCFunctionObject*)func)->m_self; -} -#endif -static CYTHON_INLINE int __Pyx__IsSameCFunction(PyObject *func, void (*cfunc)(void)) { -#if CYTHON_COMPILING_IN_LIMITED_API - return PyCFunction_Check(func) && PyCFunction_GetFunction(func) == (PyCFunction) cfunc; -#else - return PyCFunction_Check(func) && PyCFunction_GET_FUNCTION(func) == (PyCFunction) cfunc; -#endif -} -#define __Pyx_IsSameCFunction(func, cfunc) __Pyx__IsSameCFunction(func, cfunc) -#if __PYX_LIMITED_VERSION_HEX < 0x03090000 - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) ((void)m, PyType_FromSpecWithBases(s, b)) - typedef PyObject *(*__Pyx_PyCMethod)(PyObject *, PyTypeObject *, PyObject *const *, size_t, PyObject *); -#else - #define __Pyx_PyType_FromModuleAndSpec(m, s, b) PyType_FromModuleAndSpec(m, s, b) - #define __Pyx_PyCMethod PyCMethod -#endif -#ifndef METH_METHOD - #define METH_METHOD 0x200 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) -#elif CYTHON_COMPILING_IN_GRAAL - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) _PyFrame_SetLineNumber((frame), (lineno)) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyThreadState_Current PyThreadState_Get() -#elif !CYTHON_FAST_THREAD_STATE - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x030d00A1 - #define __Pyx_PyThreadState_Current PyThreadState_GetUnchecked() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#endif -#if CYTHON_USE_MODULE_STATE -static CYTHON_INLINE void 
*__Pyx__PyModule_GetState(PyObject *op) -{ - void *result; - result = PyModule_GetState(op); - if (!result) - Py_FatalError("Couldn't find the module state"); - return result; -} -#define __Pyx_PyModule_GetState(o) (__pyx_mstatetype *)__Pyx__PyModule_GetState(o) -#else -#define __Pyx_PyModule_GetState(op) ((void)op,__pyx_mstate_global) -#endif -#define __Pyx_PyObject_GetSlot(obj, name, func_ctype) __Pyx_PyType_GetSlot(Py_TYPE((PyObject *) obj), name, func_ctype) -#define __Pyx_PyObject_TryGetSlot(obj, name, func_ctype) __Pyx_PyType_TryGetSlot(Py_TYPE(obj), name, func_ctype) -#define __Pyx_PyObject_GetSubSlot(obj, sub, name, func_ctype) __Pyx_PyType_GetSubSlot(Py_TYPE(obj), sub, name, func_ctype) -#define __Pyx_PyObject_TryGetSubSlot(obj, sub, name, func_ctype) __Pyx_PyType_TryGetSubSlot(Py_TYPE(obj), sub, name, func_ctype) -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((type)->name) - #define __Pyx_PyType_TryGetSlot(type, name, func_ctype) __Pyx_PyType_GetSlot(type, name, func_ctype) - #define __Pyx_PyType_GetSubSlot(type, sub, name, func_ctype) (((type)->sub) ? ((type)->sub->name) : NULL) - #define __Pyx_PyType_TryGetSubSlot(type, sub, name, func_ctype) __Pyx_PyType_GetSubSlot(type, sub, name, func_ctype) -#else - #define __Pyx_PyType_GetSlot(type, name, func_ctype) ((func_ctype) PyType_GetSlot((type), Py_##name)) - #define __Pyx_PyType_TryGetSlot(type, name, func_ctype)\ - ((__PYX_LIMITED_VERSION_HEX >= 0x030A0000 ||\ - (PyType_GetFlags(type) & Py_TPFLAGS_HEAPTYPE) || __Pyx_get_runtime_version() >= 0x030A0000) ?\ - __Pyx_PyType_GetSlot(type, name, func_ctype) : NULL) - #define __Pyx_PyType_GetSubSlot(obj, sub, name, func_ctype) __Pyx_PyType_GetSlot(obj, name, func_ctype) - #define __Pyx_PyType_TryGetSubSlot(obj, sub, name, func_ctype) __Pyx_PyType_TryGetSlot(obj, name, func_ctype) -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? 
PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) -#define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStrWithError(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStr(PyObject *dict, PyObject *name) { - PyObject *res = __Pyx_PyDict_GetItemStrWithError(dict, name); - if (res == NULL) PyErr_Clear(); - return res; -} -#elif !CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07020000 -#define __Pyx_PyDict_GetItemStrWithError PyDict_GetItemWithError -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#else -static CYTHON_INLINE PyObject * __Pyx_PyDict_GetItemStrWithError(PyObject *dict, PyObject *name) { -#if CYTHON_COMPILING_IN_PYPY - return PyDict_GetItem(dict, name); -#else - PyDictEntry *ep; - PyDictObject *mp = (PyDictObject*) dict; - long hash = ((PyStringObject *) name)->ob_shash; - assert(hash != -1); - ep = (mp->ma_lookup)(mp, name, hash); - if (ep == NULL) { - return NULL; - } - return ep->me_value; -#endif -} -#define __Pyx_PyDict_GetItemStr PyDict_GetItem -#endif -#if CYTHON_USE_TYPE_SLOTS - #define __Pyx_PyType_GetFlags(tp) (((PyTypeObject *)tp)->tp_flags) - #define __Pyx_PyType_HasFeature(type, feature) ((__Pyx_PyType_GetFlags(type) & (feature)) != 0) -#else - #define __Pyx_PyType_GetFlags(tp) (PyType_GetFlags((PyTypeObject *)tp)) - #define __Pyx_PyType_HasFeature(type, feature) PyType_HasFeature(type, feature) -#endif -#define __Pyx_PyObject_GetIterNextFunc(iterator) __Pyx_PyObject_GetSlot(iterator, tp_iternext, iternextfunc) -#if CYTHON_USE_TYPE_SPECS && PY_VERSION_HEX >= 0x03080000 -#define __Pyx_PyHeapTypeObject_GC_Del(obj) {\ - PyTypeObject *type = Py_TYPE((PyObject*)obj);\ - assert(__Pyx_PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE));\ - 
PyObject_GC_Del(obj);\ - Py_DECREF(type);\ -} -#else -#define __Pyx_PyHeapTypeObject_GC_Del(obj) PyObject_GC_Del(obj) -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_ReadChar(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((void)u, 1114111U) - #define __Pyx_PyUnicode_KIND(u) ((void)u, (0)) - #define __Pyx_PyUnicode_DATA(u) ((void*)u) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)k, PyUnicode_ReadChar((PyObject*)(d), i)) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GetLength(u)) -#else - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_READY(op) (0) - #else - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #endif - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) ((int)PyUnicode_KIND(u)) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, (Py_UCS4) ch) - #if PY_VERSION_HEX >= 0x030C0000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #else - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? 
PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #endif -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #if !defined(PyUnicode_DecodeUnicodeEscape) - #define PyUnicode_DecodeUnicodeEscape(s, size, errors) PyUnicode_Decode(s, size, "unicode_escape", errors) - #endif - #if !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) - #endif - #if !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) - #endif - #if !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) - #endif -#endif -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if CYTHON_COMPILING_IN_CPYTHON - #define __Pyx_PySequence_ListKeepNew(obj)\ - (likely(PyList_CheckExact(obj) && Py_REFCNT(obj) == 1) ? 
__Pyx_NewRef(obj) : PySequence_List(obj)) -#else - #define __Pyx_PySequence_ListKeepNew(obj) PySequence_List(obj) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) __Pyx_IS_TYPE(obj, &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_AVOID_BORROWED_REFS || CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - #if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - #define __Pyx_PyList_GetItemRef(o, i) PyList_GetItemRef(o, i) - #elif CYTHON_COMPILING_IN_LIMITED_API || !CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PyList_GetItemRef(o, i) (likely((i) >= 0) ? PySequence_GetItem(o, i) : (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) - #else - #define __Pyx_PyList_GetItemRef(o, i) PySequence_ITEM(o, i) - #endif -#elif CYTHON_COMPILING_IN_LIMITED_API || !CYTHON_ASSUME_SAFE_MACROS - #if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - #define __Pyx_PyList_GetItemRef(o, i) PyList_GetItemRef(o, i) - #else - #define __Pyx_PyList_GetItemRef(o, i) __Pyx_XNewRef(PyList_GetItem(o, i)) - #endif -#else - #define __Pyx_PyList_GetItemRef(o, i) __Pyx_NewRef(PyList_GET_ITEM(o, i)) -#endif -#if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 -#define __Pyx_PyDict_GetItemRef(dict, key, result) PyDict_GetItemRef(dict, key, result) -#elif CYTHON_AVOID_BORROWED_REFS || CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS -static CYTHON_INLINE int __Pyx_PyDict_GetItemRef(PyObject *dict, PyObject *key, PyObject **result) { - *result = PyObject_GetItem(dict, key); - if (*result == NULL) { - if (PyErr_ExceptionMatches(PyExc_KeyError)) { - PyErr_Clear(); - return 0; - } - return -1; - } - return 1; -} -#else -static CYTHON_INLINE int __Pyx_PyDict_GetItemRef(PyObject *dict, PyObject *key, PyObject **result) { - *result = 
PyDict_GetItemWithError(dict, key); - if (*result == NULL) { - return PyErr_Occurred() ? -1 : 0; - } - Py_INCREF(*result); - return 1; -} -#endif -#if defined(CYTHON_DEBUG_VISIT_CONST) && CYTHON_DEBUG_VISIT_CONST - #define __Pyx_VISIT_CONST(obj) Py_VISIT(obj) -#else - #define __Pyx_VISIT_CONST(obj) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_ITEM(o, i) PySequence_ITEM(o, i) - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) (PyTuple_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyTuple_GET_ITEM(o, i) PyTuple_GET_ITEM(o, i) - #define __Pyx_PyList_SET_ITEM(o, i, v) (PyList_SET_ITEM(o, i, v), (0)) - #define __Pyx_PyList_GET_ITEM(o, i) PyList_GET_ITEM(o, i) -#else - #define __Pyx_PySequence_ITEM(o, i) PySequence_GetItem(o, i) - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) - #define __Pyx_PyTuple_SET_ITEM(o, i, v) PyTuple_SetItem(o, i, v) - #define __Pyx_PyTuple_GET_ITEM(o, i) PyTuple_GetItem(o, i) - #define __Pyx_PyList_SET_ITEM(o, i, v) PyList_SetItem(o, i, v) - #define __Pyx_PyList_GET_ITEM(o, i) PyList_GetItem(o, i) -#endif -#if CYTHON_ASSUME_SAFE_SIZE - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_GET_SIZE(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_GET_SIZE(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_GET_SIZE(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_GET_SIZE(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_GET_SIZE(o) - #define __Pyx_PyUnicode_GET_LENGTH(o) PyUnicode_GET_LENGTH(o) -#else - #define __Pyx_PyTuple_GET_SIZE(o) PyTuple_Size(o) - #define __Pyx_PyList_GET_SIZE(o) PyList_Size(o) - #define __Pyx_PySet_GET_SIZE(o) PySet_Size(o) - #define __Pyx_PyBytes_GET_SIZE(o) PyBytes_Size(o) - #define __Pyx_PyByteArray_GET_SIZE(o) PyByteArray_Size(o) - #define __Pyx_PyUnicode_GET_LENGTH(o) PyUnicode_GetLength(o) -#endif -#if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - #define __Pyx_PyImport_AddModuleRef(name) PyImport_AddModuleRef(name) -#else - static CYTHON_INLINE PyObject 
*__Pyx_PyImport_AddModuleRef(const char *name) { - PyObject *module = PyImport_AddModule(name); - Py_XINCREF(module); - return module; - } -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_InternFromString) - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) -#endif -#define __Pyx_PyLong_FromHash_t PyLong_FromSsize_t -#define __Pyx_PyLong_AsHash_t __Pyx_PyIndex_AsSsize_t -#if __PYX_LIMITED_VERSION_HEX >= 0x030A0000 - #define __Pyx_PySendResult PySendResult -#else - typedef enum { - PYGEN_RETURN = 0, - PYGEN_ERROR = -1, - PYGEN_NEXT = 1, - } __Pyx_PySendResult; -#endif -#if CYTHON_COMPILING_IN_LIMITED_API || PY_VERSION_HEX < 0x030A00A3 - typedef __Pyx_PySendResult (*__Pyx_pyiter_sendfunc)(PyObject *iter, PyObject *value, PyObject **result); -#else - #define __Pyx_pyiter_sendfunc sendfunc -#endif -#if !CYTHON_USE_AM_SEND -#define __PYX_HAS_PY_AM_SEND 0 -#elif __PYX_LIMITED_VERSION_HEX >= 0x030A0000 -#define __PYX_HAS_PY_AM_SEND 1 -#else -#define __PYX_HAS_PY_AM_SEND 2 // our own backported implementation -#endif -#if __PYX_HAS_PY_AM_SEND < 2 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods -#else - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - __Pyx_pyiter_sendfunc am_send; - } __Pyx_PyAsyncMethodsStruct; - #define __Pyx_SlotTpAsAsync(s) ((PyAsyncMethods*)(s)) -#endif -#if CYTHON_USE_AM_SEND && PY_VERSION_HEX < 0x030A00F0 - #define __Pyx_TPFLAGS_HAVE_AM_SEND (1UL << 21) -#else - #define __Pyx_TPFLAGS_HAVE_AM_SEND (0) -#endif -#if PY_VERSION_HEX >= 0x03090000 -#define __Pyx_PyInterpreterState_Get() PyInterpreterState_Get() -#else -#define __Pyx_PyInterpreterState_Get() PyThreadState_Get()->interp -#endif -#if CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030A0000 -#ifdef __cplusplus -extern "C" -#endif -PyAPI_FUNC(void *) PyMem_Calloc(size_t nelem, size_t elsize); -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static int __Pyx_init_co_variable(PyObject *inspect, const char* name, int *write_to) { - 
int value; - PyObject *py_value = PyObject_GetAttrString(inspect, name); - if (!py_value) return 0; - value = (int) PyLong_AsLong(py_value); - Py_DECREF(py_value); - *write_to = value; - return value != -1 || !PyErr_Occurred(); -} -static int __Pyx_init_co_variables(void) { - PyObject *inspect; - int result; - inspect = PyImport_ImportModule("inspect"); - result = -#if !defined(CO_OPTIMIZED) - __Pyx_init_co_variable(inspect, "CO_OPTIMIZED", &CO_OPTIMIZED) && -#endif -#if !defined(CO_NEWLOCALS) - __Pyx_init_co_variable(inspect, "CO_NEWLOCALS", &CO_NEWLOCALS) && -#endif -#if !defined(CO_VARARGS) - __Pyx_init_co_variable(inspect, "CO_VARARGS", &CO_VARARGS) && -#endif -#if !defined(CO_VARKEYWORDS) - __Pyx_init_co_variable(inspect, "CO_VARKEYWORDS", &CO_VARKEYWORDS) && -#endif -#if !defined(CO_ASYNC_GENERATOR) - __Pyx_init_co_variable(inspect, "CO_ASYNC_GENERATOR", &CO_ASYNC_GENERATOR) && -#endif -#if !defined(CO_GENERATOR) - __Pyx_init_co_variable(inspect, "CO_GENERATOR", &CO_GENERATOR) && -#endif -#if !defined(CO_COROUTINE) - __Pyx_init_co_variable(inspect, "CO_COROUTINE", &CO_COROUTINE) && -#endif - 1; - Py_DECREF(inspect); - return result ? 
0 : -1; -} -#else -static int __Pyx_init_co_variables(void) { - return 0; // It's a limited API-only feature -} -#endif - -/* MathInitCode */ -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #ifndef _USE_MATH_DEFINES - #define _USE_MATH_DEFINES - #endif -#endif -#include <math.h> -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#ifndef CYTHON_CLINE_IN_TRACEBACK_RUNTIME -#define CYTHON_CLINE_IN_TRACEBACK_RUNTIME 0 -#endif -#ifndef CYTHON_CLINE_IN_TRACEBACK -#define CYTHON_CLINE_IN_TRACEBACK CYTHON_CLINE_IN_TRACEBACK_RUNTIME -#endif -#if CYTHON_CLINE_IN_TRACEBACK -#define __PYX_MARK_ERR_POS(f_index, lineno) { __pyx_filename = __pyx_f[f_index]; (void) __pyx_filename; __pyx_lineno = lineno; (void) __pyx_lineno; __pyx_clineno = __LINE__; (void) __pyx_clineno; } -#else -#define __PYX_MARK_ERR_POS(f_index, lineno) { __pyx_filename = __pyx_f[f_index]; (void) __pyx_filename; __pyx_lineno = lineno; (void) __pyx_lineno; (void) __pyx_clineno; } -#endif -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifdef CYTHON_EXTERN_C - #undef __PYX_EXTERN_C - #define __PYX_EXTERN_C CYTHON_EXTERN_C -#elif defined(__PYX_EXTERN_C) - #ifdef _MSC_VER - #pragma message ("Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.") - #else - #warning Please do not define the '__PYX_EXTERN_C' macro externally. Use 'CYTHON_EXTERN_C' instead.
- #endif -#else - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__fontTools__cu2qu__cu2qu -#define __PYX_HAVE_API__fontTools__cu2qu__cu2qu -/* Early includes */ -#ifdef _OPENMP -#include <omp.h> -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include <cstdlib> - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ?
-value : value) -#endif -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s); -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -static CYTHON_INLINE PyObject* __Pyx_PyByteArray_FromString(const char*); -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) - #define __Pyx_PyByteArray_AsString(s) PyByteArray_AS_STRING(s) -#else - #define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AsString(s)) - #define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AsString(s)) - #define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AsString(s)) - #define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AsString(s)) - #define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AsString(s)) - #define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AsString(s)) - #define __Pyx_PyByteArray_AsString(s) PyByteArray_AsString(s) -#endif -#define __Pyx_PyObject_AsWritableString(s) ((char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define 
__Pyx_PyObject_AsWritableUString(s) ((unsigned char*)(__pyx_uintptr_t) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -#define __Pyx_PyUnicode_FromOrdinal(o) PyUnicode_FromOrdinal((int)o) -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -static CYTHON_INLINE PyObject *__Pyx_NewRef(PyObject *obj) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030a0000 || defined(Py_NewRef) - return Py_NewRef(obj); -#else - Py_INCREF(obj); - return obj; -#endif -} -static CYTHON_INLINE PyObject *__Pyx_XNewRef(PyObject *obj) { -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030a0000 || defined(Py_XNewRef) - return Py_XNewRef(obj); -#else - Py_XINCREF(obj); - return obj; -#endif -} -static CYTHON_INLINE PyObject *__Pyx_Owned_Py_None(int b); -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_Long(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyLong_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __Pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? 
PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#define __Pyx_PyFloat_AS_DOUBLE(x) PyFloat_AS_DOUBLE(x) -#else -#define __Pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#define __Pyx_PyFloat_AS_DOUBLE(x) PyFloat_AsDouble(x) -#endif -#define __Pyx_PyFloat_AsFloat(x) ((float) __Pyx_PyFloat_AsDouble(x)) -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_VERSION_HEX >= 0x030C00A7 - #ifndef _PyLong_SIGN_MASK - #define _PyLong_SIGN_MASK 3 - #endif - #ifndef _PyLong_NON_SIZE_BITS - #define _PyLong_NON_SIZE_BITS 3 - #endif - #define __Pyx_PyLong_Sign(x) (((PyLongObject*)x)->long_value.lv_tag & _PyLong_SIGN_MASK) - #define __Pyx_PyLong_IsNeg(x) ((__Pyx_PyLong_Sign(x) & 2) != 0) - #define __Pyx_PyLong_IsNonNeg(x) (!__Pyx_PyLong_IsNeg(x)) - #define __Pyx_PyLong_IsZero(x) (__Pyx_PyLong_Sign(x) & 1) - #define __Pyx_PyLong_IsPos(x) (__Pyx_PyLong_Sign(x) == 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) (__Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) ((Py_ssize_t) (((PyLongObject*)x)->long_value.lv_tag >> _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_SignedDigitCount(x)\ - ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * __Pyx_PyLong_DigitCount(x)) - #if defined(PyUnstable_Long_IsCompact) && defined(PyUnstable_Long_CompactValue) - #define __Pyx_PyLong_IsCompact(x) PyUnstable_Long_IsCompact((PyLongObject*) x) - #define __Pyx_PyLong_CompactValue(x) PyUnstable_Long_CompactValue((PyLongObject*) x) - #else - #define __Pyx_PyLong_IsCompact(x) (((PyLongObject*)x)->long_value.lv_tag < (2 << _PyLong_NON_SIZE_BITS)) - #define __Pyx_PyLong_CompactValue(x) ((1 - (Py_ssize_t) __Pyx_PyLong_Sign(x)) * (Py_ssize_t) __Pyx_PyLong_Digits(x)[0]) - #endif - typedef Py_ssize_t __Pyx_compact_pylong; - typedef size_t __Pyx_compact_upylong; - #else - #define __Pyx_PyLong_IsNeg(x) (Py_SIZE(x) < 0) - #define __Pyx_PyLong_IsNonNeg(x) (Py_SIZE(x) >= 0) - #define __Pyx_PyLong_IsZero(x) (Py_SIZE(x) == 0) - #define 
__Pyx_PyLong_IsPos(x) (Py_SIZE(x) > 0) - #define __Pyx_PyLong_CompactValueUnsigned(x) ((Py_SIZE(x) == 0) ? 0 : __Pyx_PyLong_Digits(x)[0]) - #define __Pyx_PyLong_DigitCount(x) __Pyx_sst_abs(Py_SIZE(x)) - #define __Pyx_PyLong_SignedDigitCount(x) Py_SIZE(x) - #define __Pyx_PyLong_IsCompact(x) (Py_SIZE(x) == 0 || Py_SIZE(x) == 1 || Py_SIZE(x) == -1) - #define __Pyx_PyLong_CompactValue(x)\ - ((Py_SIZE(x) == 0) ? (sdigit) 0 : ((Py_SIZE(x) < 0) ? -(sdigit)__Pyx_PyLong_Digits(x)[0] : (sdigit)__Pyx_PyLong_Digits(x)[0])) - typedef sdigit __Pyx_compact_pylong; - typedef digit __Pyx_compact_upylong; - #endif - #if PY_VERSION_HEX >= 0x030C00A5 - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->long_value.ob_digit) - #else - #define __Pyx_PyLong_Digits(x) (((PyLongObject*)x)->ob_digit) - #endif -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 - #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#elif __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeASCII(c_str, size, NULL) -#else - #define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -/* PretendToInitialize */ -#ifdef __cplusplus -#if __cplusplus > 201103L -#include <type_traits> -#endif -template <typename T> -static void __Pyx_pretend_to_initialize(T* ptr) { -#if __cplusplus > 201103L - if ((std::is_trivially_default_constructible<T>::value)) -#endif - *ptr = T(); - (void)ptr; -} -#else -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } -#endif - - -#if !CYTHON_USE_MODULE_STATE -static PyObject *__pyx_m = NULL; -#endif
-static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * const __pyx_cfilenm = __FILE__; -static const char *__pyx_filename; - -/* Header.proto */ -#if !defined(CYTHON_CCOMPLEX) - #if defined(__cplusplus) - #define CYTHON_CCOMPLEX 1 - #elif (defined(_Complex_I) && !defined(_MSC_VER)) || ((defined (__STDC_VERSION__) && __STDC_VERSION__ >= 201112L) && !defined(__STDC_NO_COMPLEX__) && !defined(_MSC_VER)) - #define CYTHON_CCOMPLEX 1 - #else - #define CYTHON_CCOMPLEX 0 - #endif -#endif -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - #include <complex> - #else - #include <complex.h> - #endif -#endif -#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__) - #undef _Complex_I - #define _Complex_I 1.0fj -#endif - -/* #### Code section: filename_table ### */ - -static const char* const __pyx_f[] = { - "Lib/fontTools/cu2qu/cu2qu.py", -}; -/* #### Code section: utility_code_proto_before_types ### */ -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS -#define __PYX_GET_CYTHON_COMPILING_IN_CPYTHON_FREETHREADING() CYTHON_COMPILING_IN_CPYTHON_FREETHREADING -#define __pyx_atomic_int_type int -#define __pyx_nonatomic_int_type int -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__)) - #include <stdatomic.h> -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ - (defined(_MSC_VER) && _MSC_VER >= 1700))) - #include <atomic> -#endif -#if CYTHON_ATOMICS && (defined(__STDC_VERSION__) &&\ - (__STDC_VERSION__ >= 201112L) &&\ - !defined(__STDC_NO_ATOMICS__) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type atomic_int - #define __pyx_atomic_ptr_type atomic_uintptr_t - #define __pyx_nonatomic_ptr_type uintptr_t - #define __pyx_atomic_incr_relaxed(value) atomic_fetch_add_explicit(value, 1, memory_order_relaxed) - #define __pyx_atomic_incr_acq_rel(value)
atomic_fetch_add_explicit(value, 1, memory_order_acq_rel) - #define __pyx_atomic_decr_acq_rel(value) atomic_fetch_sub_explicit(value, 1, memory_order_acq_rel) - #define __pyx_atomic_sub(value, arg) atomic_fetch_sub(value, arg) - #define __pyx_atomic_int_cmp_exchange(value, expected, desired) atomic_compare_exchange_strong(value, expected, desired) - #define __pyx_atomic_load(value) atomic_load(value) - #define __pyx_atomic_store(value, new_value) atomic_store(value, new_value) - #define __pyx_atomic_pointer_load_relaxed(value) atomic_load_explicit(value, memory_order_relaxed) - #define __pyx_atomic_pointer_load_acquire(value) atomic_load_explicit(value, memory_order_acquire) - #define __pyx_atomic_pointer_exchange(value, new_value) atomic_exchange(value, (__pyx_nonatomic_ptr_type)new_value) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C atomics" - #endif -#elif CYTHON_ATOMICS && (defined(__cplusplus) && (\ - (__cplusplus >= 201103L) ||\ -\ - (defined(_MSC_VER) && _MSC_VER >= 1700)) &&\ - ATOMIC_INT_LOCK_FREE == 2) - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type std::atomic_int - #define __pyx_atomic_ptr_type std::atomic_uintptr_t - #define __pyx_nonatomic_ptr_type uintptr_t - #define __pyx_atomic_incr_relaxed(value) std::atomic_fetch_add_explicit(value, 1, std::memory_order_relaxed) - #define __pyx_atomic_incr_acq_rel(value) std::atomic_fetch_add_explicit(value, 1, std::memory_order_acq_rel) - #define __pyx_atomic_decr_acq_rel(value) std::atomic_fetch_sub_explicit(value, 1, std::memory_order_acq_rel) - #define __pyx_atomic_sub(value, arg) std::atomic_fetch_sub(value, arg) - #define __pyx_atomic_int_cmp_exchange(value, expected, desired) std::atomic_compare_exchange_strong(value, expected, desired) - #define __pyx_atomic_load(value) std::atomic_load(value) - #define __pyx_atomic_store(value, new_value) std::atomic_store(value, 
new_value) - #define __pyx_atomic_pointer_load_relaxed(value) std::atomic_load_explicit(value, std::memory_order_relaxed) - #define __pyx_atomic_pointer_load_acquire(value) std::atomic_load_explicit(value, std::memory_order_acquire) - #define __pyx_atomic_pointer_exchange(value, new_value) std::atomic_exchange(value, (__pyx_nonatomic_ptr_type)new_value) - #if defined(__PYX_DEBUG_ATOMICS) && defined(_MSC_VER) - #pragma message ("Using standard C++ atomics") - #elif defined(__PYX_DEBUG_ATOMICS) - #warning "Using standard C++ atomics" - #endif -#elif CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\ - (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2)))) - #define __pyx_atomic_ptr_type void* - #define __pyx_atomic_incr_relaxed(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_incr_acq_rel(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_acq_rel(value) __sync_fetch_and_sub(value, 1) - #define __pyx_atomic_sub(value, arg) __sync_fetch_and_sub(value, arg) - static CYTHON_INLINE int __pyx_atomic_int_cmp_exchange(__pyx_atomic_int_type* value, __pyx_nonatomic_int_type* expected, __pyx_nonatomic_int_type desired) { - __pyx_nonatomic_int_type old = __sync_val_compare_and_swap(value, *expected, desired); - int result = old == *expected; - *expected = old; - return result; - } - #define __pyx_atomic_load(value) __sync_fetch_and_add(value, 0) - #define __pyx_atomic_store(value, new_value) __sync_lock_test_and_set(value, new_value) - #define __pyx_atomic_pointer_load_relaxed(value) __sync_fetch_and_add(value, 0) - #define __pyx_atomic_pointer_load_acquire(value) __sync_fetch_and_add(value, 0) - #define __pyx_atomic_pointer_exchange(value, new_value) __sync_lock_test_and_set(value, (__pyx_atomic_ptr_type)new_value) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type long - #define
__pyx_atomic_ptr_type void* - #undef __pyx_nonatomic_int_type - #define __pyx_nonatomic_int_type long - #pragma intrinsic (_InterlockedExchangeAdd, _InterlockedExchange, _InterlockedCompareExchange, _InterlockedCompareExchangePointer, _InterlockedExchangePointer) - #define __pyx_atomic_incr_relaxed(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_incr_acq_rel(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_decr_acq_rel(value) _InterlockedExchangeAdd(value, -1) - #define __pyx_atomic_sub(value, arg) _InterlockedExchangeAdd(value, -arg) - static CYTHON_INLINE int __pyx_atomic_int_cmp_exchange(__pyx_atomic_int_type* value, __pyx_nonatomic_int_type* expected, __pyx_nonatomic_int_type desired) { - __pyx_nonatomic_int_type old = _InterlockedCompareExchange(value, desired, *expected); - int result = old == *expected; - *expected = old; - return result; - } - #define __pyx_atomic_load(value) _InterlockedExchangeAdd(value, 0) - #define __pyx_atomic_store(value, new_value) _InterlockedExchange(value, new_value) - #define __pyx_atomic_pointer_load_relaxed(value) *(void * volatile *)value - #define __pyx_atomic_pointer_load_acquire(value) _InterlockedCompareExchangePointer(value, 0, 0) - #define __pyx_atomic_pointer_exchange(value, new_value) _InterlockedExchangePointer(value, (__pyx_atomic_ptr_type)new_value) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_relaxed(__pyx_get_slice_count_pointer(memview)) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_acq_rel(__pyx_get_slice_count_pointer(memview)) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define 
__pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* IncludeStructmemberH.proto */ -#include <structmember.h> - -/* CriticalSections.proto */ -#if !CYTHON_COMPILING_IN_CPYTHON_FREETHREADING -#define __Pyx_PyCriticalSection void* -#define __Pyx_PyCriticalSection2 void* -#define __Pyx_PyCriticalSection_Begin1(cs, arg) (void)cs -#define __Pyx_PyCriticalSection_Begin2(cs, arg1, arg2) (void)cs -#define __Pyx_PyCriticalSection_End1(cs) -#define __Pyx_PyCriticalSection_End2(cs) -#else -#define __Pyx_PyCriticalSection PyCriticalSection -#define __Pyx_PyCriticalSection2 PyCriticalSection2 -#define __Pyx_PyCriticalSection_Begin1 PyCriticalSection_Begin -#define __Pyx_PyCriticalSection_Begin2 PyCriticalSection2_Begin -#define __Pyx_PyCriticalSection_End1 PyCriticalSection_End -#define __Pyx_PyCriticalSection_End2 PyCriticalSection2_End -#endif -#if PY_VERSION_HEX < 0x030d0000 || CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_BEGIN_CRITICAL_SECTION(o) { -#define __Pyx_END_CRITICAL_SECTION() } -#else -#define __Pyx_BEGIN_CRITICAL_SECTION Py_BEGIN_CRITICAL_SECTION -#define __Pyx_END_CRITICAL_SECTION Py_END_CRITICAL_SECTION -#endif - -/* #### Code section: numeric_typedefs ### */ -/* #### Code section: complex_type_declarations ### */ -/* Declarations.proto */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #ifdef __cplusplus - typedef ::std::complex< double > __pyx_t_double_complex; - #else - typedef double _Complex __pyx_t_double_complex; - #endif -#else - typedef struct { double real, imag; } __pyx_t_double_complex; -#endif -static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double); - -/* #### Code section: type_declarations ### */ - -/*--- Type declarations ---*/ -struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen; - -/* "fontTools/cu2qu/cu2qu.py":150 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * p0=cython.complex, - *
p1=cython.complex, -*/ -struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen { - PyObject_HEAD - __pyx_t_double_complex __pyx_v_a; - __pyx_t_double_complex __pyx_v_a1; - __pyx_t_double_complex __pyx_v_b; - __pyx_t_double_complex __pyx_v_b1; - __pyx_t_double_complex __pyx_v_c; - __pyx_t_double_complex __pyx_v_c1; - __pyx_t_double_complex __pyx_v_d; - __pyx_t_double_complex __pyx_v_d1; - double __pyx_v_delta_2; - double __pyx_v_delta_3; - double __pyx_v_dt; - int __pyx_v_i; - int __pyx_v_n; - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - double __pyx_v_t1; - double __pyx_v_t1_2; - int __pyx_t_0; - int __pyx_t_1; - int __pyx_t_2; -}; - -/* #### Code section: utility_code_proto ### */ - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, Py_ssize_t); - void (*DECREF)(void*, PyObject*, Py_ssize_t); - void (*GOTREF)(void*, PyObject*, Py_ssize_t); - void (*GIVEREF)(void*, PyObject*, Py_ssize_t); - void* (*SetupContext)(const char*, Py_ssize_t, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), (__LINE__), (__FILE__));\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - 
__Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContextNogil() {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __Pyx_RefNannyFinishContext();\ - PyGILState_Release(__pyx_gilstate_save);\ - } - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), (__LINE__)) - #define __Pyx_XINCREF(r) do { if((r) == NULL); else {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) == NULL); else {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) == NULL); else {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) == NULL); else {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContextNogil() - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_Py_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; Py_XDECREF(tmp);\ - } while (0) -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) 
-#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#if PY_VERSION_HEX >= 0x030C00A6 -#define __Pyx_PyErr_Occurred() (__pyx_tstate->current_exception != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->current_exception ? (PyObject*) Py_TYPE(__pyx_tstate->current_exception) : (PyObject*) NULL) -#else -#define __Pyx_PyErr_Occurred() (__pyx_tstate->curexc_type != NULL) -#define __Pyx_PyErr_CurrentExceptionType() (__pyx_tstate->curexc_type) -#endif -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() (PyErr_Occurred() != NULL) -#define __Pyx_PyErr_CurrentExceptionType() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void 
__Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A6 -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* IncludeStdlibH.proto */ -#include - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#if !CYTHON_VECTORCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject *const *args, Py_ssize_t nargs, PyObject *kwargs); -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if !CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03080000 - #include 
"frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets() - #define __Pyx_PyFrame_GetLocalsplus(frame) ((frame)->f_localsplus) -#else - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif -#endif -#endif - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectFastCall.proto */ -#define __Pyx_PyObject_FastCall(func, args, nargs) __Pyx_PyObject_FastCallDict(func, args, (size_t)(nargs), NULL) -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject * const*args, size_t nargs, PyObject *kwargs); - -/* PyLongCompare.proto */ -static CYTHON_INLINE int __Pyx_PyLong_BoolEqObjC(PyObject *op1, PyObject *op2, long intval, long inplace); - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* IterFinish.proto */ -static CYTHON_INLINE int __Pyx_IterFinish(void); - -/* UnpackItemEndCheck.proto */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected); - -/* GetItemInt.proto 
*/ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck, has_gil)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ? (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck, has_gil)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck, has_gil)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if 
(likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) do {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_mstate_global->__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? 
__Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -#define __Pyx_GetModuleGlobalNameUncached(var, name) do {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} while(0) -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* TupleAndListFromArray.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n); -#endif -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject* __Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n); -#endif - -/* IncludeStringH.proto */ -#include - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* fastcall.proto */ -#if CYTHON_AVOID_BORROWED_REFS - #define __Pyx_ArgRef_VARARGS(args, i) __Pyx_PySequence_ITEM(args, i) -#elif CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_ArgRef_VARARGS(args, i) __Pyx_NewRef(__Pyx_PyTuple_GET_ITEM(args, i)) -#else - #define __Pyx_ArgRef_VARARGS(args, i) __Pyx_XNewRef(PyTuple_GetItem(args, i)) -#endif -#define __Pyx_NumKwargs_VARARGS(kwds) PyDict_Size(kwds) -#define __Pyx_KwValues_VARARGS(args, nargs) NULL -#define __Pyx_GetKwValue_VARARGS(kw, kwvalues, s) __Pyx_PyDict_GetItemStrWithError(kw, s) -#define __Pyx_KwargsAsDict_VARARGS(kw, kwvalues) 
PyDict_Copy(kw) -#if CYTHON_METH_FASTCALL - #define __Pyx_ArgRef_FASTCALL(args, i) __Pyx_NewRef(args[i]) - #define __Pyx_NumKwargs_FASTCALL(kwds) __Pyx_PyTuple_GET_SIZE(kwds) - #define __Pyx_KwValues_FASTCALL(args, nargs) ((args) + (nargs)) - static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s); - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030d0000 || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED static PyObject *__Pyx_KwargsAsDict_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues); - #else - #define __Pyx_KwargsAsDict_FASTCALL(kw, kwvalues) _PyStack_AsDict(kwvalues, kw) - #endif -#else - #define __Pyx_ArgRef_FASTCALL __Pyx_ArgRef_VARARGS - #define __Pyx_NumKwargs_FASTCALL __Pyx_NumKwargs_VARARGS - #define __Pyx_KwValues_FASTCALL __Pyx_KwValues_VARARGS - #define __Pyx_GetKwValue_FASTCALL __Pyx_GetKwValue_VARARGS - #define __Pyx_KwargsAsDict_FASTCALL __Pyx_KwargsAsDict_VARARGS -#endif -#define __Pyx_ArgsSlice_VARARGS(args, start, stop) PyTuple_GetSlice(args, start, stop) -#if CYTHON_METH_FASTCALL || (CYTHON_COMPILING_IN_CPYTHON && CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) __Pyx_PyTuple_FromArray(args + start, stop - start) -#else -#define __Pyx_ArgsSlice_FASTCALL(args, start, stop) PyTuple_GetSlice(args, start, stop) -#endif - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static CYTHON_INLINE int __Pyx_ParseKeywords( - PyObject *kwds, PyObject *const *kwvalues, PyObject ** const argnames[], - PyObject *kwds2, PyObject *values[], - Py_ssize_t num_pos_args, Py_ssize_t num_kwargs, - const char* function_name, - int ignore_unknown_kwargs -); - -/* CallCFunction.proto */ -#define __Pyx_CallCFunction(cfunc, self, args)\ - ((PyCFunction)(void(*)(void))(cfunc)->func)(self, args) -#define 
__Pyx_CallCFunctionWithKeywords(cfunc, self, args, kwargs)\ - ((PyCFunctionWithKeywords)(void(*)(void))(cfunc)->func)(self, args, kwargs) -#define __Pyx_CallCFunctionFast(cfunc, self, args, nargs)\ - ((__Pyx_PyCFunctionFast)(void(*)(void))(PyCFunction)(cfunc)->func)(self, args, nargs) -#define __Pyx_CallCFunctionFastWithKeywords(cfunc, self, args, nargs, kwnames)\ - ((__Pyx_PyCFunctionFastWithKeywords)(void(*)(void))(PyCFunction)(cfunc)->func)(self, args, nargs, kwnames) - -/* UnpackUnboundCMethod.proto */ -typedef struct { - PyObject *type; - PyObject **method_name; -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING && CYTHON_ATOMICS - __pyx_atomic_int_type initialized; -#endif - PyCFunction func; - PyObject *method; - int flag; -} __Pyx_CachedCFunction; -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING -static CYTHON_INLINE int __Pyx_CachedCFunction_GetAndSetInitializing(__Pyx_CachedCFunction *cfunc) { -#if !CYTHON_ATOMICS - return 1; -#else - __pyx_nonatomic_int_type expected = 0; - if (__pyx_atomic_int_cmp_exchange(&cfunc->initialized, &expected, 1)) { - return 0; - } - return expected; -#endif -} -static CYTHON_INLINE void __Pyx_CachedCFunction_SetFinishedInitializing(__Pyx_CachedCFunction *cfunc) { -#if CYTHON_ATOMICS - __pyx_atomic_store(&cfunc->initialized, 2); -#endif -} -#else -#define __Pyx_CachedCFunction_GetAndSetInitializing(cfunc) 2 -#define __Pyx_CachedCFunction_SetFinishedInitializing(cfunc) -#endif - -/* CallUnboundCMethod2.proto */ -CYTHON_UNUSED -static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2); -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2); -#else -#define __Pyx_CallUnboundCMethod2(cfunc, self, arg1, arg2) __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2) -#endif - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* 
func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* pep479.proto */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* IterNextPlain.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next_Plain(PyObject *iterator); -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x030A0000 -static PyObject *__Pyx_GetBuiltinNext_LimitedAPI(void); -#endif - -/* IterNext.proto */ -#define __Pyx_PyIter_Next(obj) __Pyx_PyIter_Next2(obj, NULL) -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject *, PyObject *); - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = 
(PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030d0000 - L->ob_item[len] = x; - #else - PyList_SET_ITEM(list, len, x); - #endif - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030d0000 - L->ob_item[len] = x; - #else - PyList_SET_ITEM(list, len, x); - #endif - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyLongBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static CYTHON_INLINE PyObject* __Pyx_PyLong_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyLong_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? 
PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* AssertionsEnabled.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API || (CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030C0000) - static int __pyx_assertions_enabled_flag; - #define __pyx_assertions_enabled() (__pyx_assertions_enabled_flag) - static int __Pyx_init_assertions_enabled(void) { - PyObject *builtins, *debug, *debug_str; - int flag; - builtins = PyEval_GetBuiltins(); - if (!builtins) goto bad; - debug_str = PyUnicode_FromStringAndSize("__debug__", 9); - if (!debug_str) goto bad; - debug = PyObject_GetItem(builtins, debug_str); - Py_DECREF(debug_str); - if (!debug) goto bad; - flag = PyObject_IsTrue(debug); - Py_DECREF(debug); - if (flag == -1) goto bad; - __pyx_assertions_enabled_flag = flag; - return 0; - bad: - __pyx_assertions_enabled_flag = 1; - return -1; - } -#else - #define __Pyx_init_assertions_enabled() (0) - #define __pyx_assertions_enabled() (!Py_OptimizeFlag) -#endif - -/* SetItemInt.proto */ -#define __Pyx_SetItemInt(o, i, v, type, is_signed, to_py_func, is_list, wraparound, boundscheck, has_gil)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_SetItemInt_Fast(o, (Py_ssize_t)i, v, is_list, wraparound, boundscheck) :\ - (is_list ? 
(PyErr_SetString(PyExc_IndexError, "list assignment index out of range"), -1) :\ - __Pyx_SetItemInt_Generic(o, to_py_func(i), v))) -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v); -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, - int is_list, int wraparound, int boundscheck); - -/* ModInt[long].proto */ -static CYTHON_INLINE long __Pyx_mod_long(long, long, int b_is_constant); - -/* LimitedApiGetTypeDict.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_GetTypeDict(PyTypeObject *tp); -#endif - -/* SetItemOnTypeDict.proto */ -static int __Pyx__SetItemOnTypeDict(PyTypeObject *tp, PyObject *k, PyObject *v); -#define __Pyx_SetItemOnTypeDict(tp, k, v) __Pyx__SetItemOnTypeDict((PyTypeObject*)tp, k, v) - -/* FixUpExtensionType.proto */ -static CYTHON_INLINE int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type); - -/* PyObjectCallNoArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func); - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* PyObjectGetMethod.proto */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method); - -/* PyObjectCallMethod0.proto */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name); - -/* ValidateBasesTuple.proto */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases); -#endif - -/* PyType_Ready.proto */ -CYTHON_UNUSED static int __Pyx_PyType_Ready(PyTypeObject *t); - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* ImportDottedModule.proto */ -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple); -static PyObject 
*__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple); - -/* ListPack.proto */ -static PyObject *__Pyx_PyList_Pack(Py_ssize_t n, ...); - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* pybytes_as_double.proto */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj); -static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length); -static CYTHON_INLINE double __Pyx_PyBytes_AsDouble(PyObject *obj) { - char* as_c_string; - Py_ssize_t size; -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE - as_c_string = PyBytes_AS_STRING(obj); - size = PyBytes_GET_SIZE(obj); -#else - if (PyBytes_AsStringAndSize(obj, &as_c_string, &size) < 0) { - return (double)-1; - } -#endif - return __Pyx__PyBytes_AsDouble(obj, as_c_string, size); -} -static CYTHON_INLINE double __Pyx_PyByteArray_AsDouble(PyObject *obj) { - char* as_c_string; - Py_ssize_t size; -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE - as_c_string = PyByteArray_AS_STRING(obj); - size = PyByteArray_GET_SIZE(obj); -#else - as_c_string = PyByteArray_AsString(obj); - if (as_c_string == NULL) { - return (double)-1; - } - size = PyByteArray_Size(obj); -#endif - return __Pyx__PyBytes_AsDouble(obj, as_c_string, size); -} - -/* pyunicode_as_double.proto */ -#if !CYTHON_COMPILING_IN_PYPY && CYTHON_ASSUME_SAFE_MACROS -static const char* __Pyx__PyUnicode_AsDouble_Copy(const void* data, const int kind, char* buffer, Py_ssize_t start, Py_ssize_t end) { - int last_was_punctuation; - Py_ssize_t i; - last_was_punctuation = 1; - for (i=start; i <= end; i++) { - Py_UCS4 chr = PyUnicode_READ(kind, data, i); - int is_punctuation = (chr == '_') | (chr == '.'); - *buffer = (char)chr; - buffer += (chr != '_'); - if (unlikely(chr > 127)) goto parse_failure; - if (unlikely(last_was_punctuation & is_punctuation)) goto parse_failure; - last_was_punctuation = is_punctuation; - } - if 
(unlikely(last_was_punctuation)) goto parse_failure; - *buffer = '\0'; - return buffer; -parse_failure: - return NULL; -} -static double __Pyx__PyUnicode_AsDouble_inf_nan(const void* data, int kind, Py_ssize_t start, Py_ssize_t length) { - int matches = 1; - Py_UCS4 chr; - Py_UCS4 sign = PyUnicode_READ(kind, data, start); - int is_signed = (sign == '-') | (sign == '+'); - start += is_signed; - length -= is_signed; - switch (PyUnicode_READ(kind, data, start)) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'a') | (chr == 'A'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'n') | (chr == 'N'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+1); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+2); - matches &= (chr == 'f') | (chr == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - chr = PyUnicode_READ(kind, data, start+3); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+4); - matches &= (chr == 'n') | (chr == 'N'); - chr = PyUnicode_READ(kind, data, start+5); - matches &= (chr == 'i') | (chr == 'I'); - chr = PyUnicode_READ(kind, data, start+6); - matches &= (chr == 't') | (chr == 'T'); - chr = PyUnicode_READ(kind, data, start+7); - matches &= (chr == 'y') | (chr == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? 
-Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static double __Pyx_PyUnicode_AsDouble_WithSpaces(PyObject *obj) { - double value; - const char *last; - char *end; - Py_ssize_t start, length = PyUnicode_GET_LENGTH(obj); - const int kind = PyUnicode_KIND(obj); - const void* data = PyUnicode_DATA(obj); - start = 0; - while (Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, start))) - start++; - while (start < length - 1 && Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, length - 1))) - length--; - length -= start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyUnicode_AsDouble_inf_nan(data, kind, start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - if (length < 40) { - char number[40]; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if (unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((length + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyUnicode_AsDouble_Copy(data, kind, number, start, start + length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if (likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} -#endif -static CYTHON_INLINE double __Pyx_PyUnicode_AsDouble(PyObject *obj) { -#if !CYTHON_COMPILING_IN_PYPY && CYTHON_ASSUME_SAFE_MACROS - if (unlikely(__Pyx_PyUnicode_READY(obj) == -1)) - return (double)-1; - if (likely(PyUnicode_IS_ASCII(obj))) { - const char *s; - Py_ssize_t length; - s = PyUnicode_AsUTF8AndSize(obj, &length); - return __Pyx__PyBytes_AsDouble(obj, s, length); - } 
- return __Pyx_PyUnicode_AsDouble_WithSpaces(obj); -#else - return __Pyx_SlowPyString_AsDouble(obj); -#endif -} - -/* FetchSharedCythonModule.proto */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void); - -/* dict_setdefault.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyDict_SetDefault(PyObject *d, PyObject *key, PyObject *default_value, int is_safe_type); - -/* FetchCommonType.proto */ -static PyTypeObject* __Pyx_FetchCommonTypeFromSpec(PyTypeObject *metaclass, PyObject *module, PyType_Spec *spec, PyObject *bases); - -/* CommonTypesMetaclass.proto */ -static int __pyx_CommonTypesMetaclass_init(PyObject *module); -#define __Pyx_CommonTypesMetaclass_USED - -/* CallTypeTraverse.proto */ -#if !CYTHON_USE_TYPE_SPECS || (!CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x03090000) -#define __Pyx_call_type_traverse(o, always_call, visit, arg) 0 -#else -static int __Pyx_call_type_traverse(PyObject *o, int always_call, visitproc visit, void *arg); -#endif - -/* PyMethodNew.proto */ -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ); - -/* PyVectorcallFastCallDict.proto */ -#if CYTHON_METH_FASTCALL && (CYTHON_VECTORCALL || CYTHON_BACKPORT_VECTORCALL) -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw); -#endif - -/* CythonFunctionShared.proto */ -#define __Pyx_CyFunction_USED -#define __Pyx_CYFUNCTION_STATICMETHOD 0x01 -#define __Pyx_CYFUNCTION_CLASSMETHOD 0x02 -#define __Pyx_CYFUNCTION_CCLASS 0x04 -#define __Pyx_CYFUNCTION_COROUTINE 0x08 -#define __Pyx_CyFunction_GetClosure(f)\ - (((__pyx_CyFunctionObject *) (f))->func_closure) -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - #define __Pyx_CyFunction_GetClassObj(f)\ - (((__pyx_CyFunctionObject *) (f))->func_classobj) -#else - #define __Pyx_CyFunction_GetClassObj(f)\ - ((PyObject*) ((PyCMethodObject *) (f))->mm_class) -#endif -#define 
__Pyx_CyFunction_SetClassObj(f, classobj)\ - __Pyx__CyFunction_SetClassObj((__pyx_CyFunctionObject *) (f), (classobj)) -#define __Pyx_CyFunction_Defaults(type, f)\ - ((type *)(((__pyx_CyFunctionObject *) (f))->defaults)) -#define __Pyx_CyFunction_SetDefaultsGetter(f, g)\ - ((__pyx_CyFunctionObject *) (f))->defaults_getter = (g) -typedef struct { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject_HEAD - PyObject *func; -#elif PY_VERSION_HEX < 0x030900B1 - PyCFunctionObject func; -#else - PyCMethodObject func; -#endif -#if CYTHON_BACKPORT_VECTORCALL ||\ - (CYTHON_COMPILING_IN_LIMITED_API && CYTHON_METH_FASTCALL) - __pyx_vectorcallfunc func_vectorcall; -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_weakreflist; -#endif - PyObject *func_dict; - PyObject *func_name; - PyObject *func_qualname; - PyObject *func_doc; - PyObject *func_globals; - PyObject *func_code; - PyObject *func_closure; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - PyObject *func_classobj; -#endif - PyObject *defaults; - int flags; - PyObject *defaults_tuple; - PyObject *defaults_kwdict; - PyObject *(*defaults_getter)(PyObject *); - PyObject *func_annotations; - PyObject *func_is_coroutine; -} __pyx_CyFunctionObject; -#undef __Pyx_CyOrPyCFunction_Check -#define __Pyx_CyFunction_Check(obj) __Pyx_TypeCheck(obj, __pyx_mstate_global->__pyx_CyFunctionType) -#define __Pyx_CyOrPyCFunction_Check(obj) __Pyx_TypeCheck2(obj, __pyx_mstate_global->__pyx_CyFunctionType, &PyCFunction_Type) -#define __Pyx_CyFunction_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_mstate_global->__pyx_CyFunctionType) -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void (*cfunc)(void)); -#undef __Pyx_IsSameCFunction -#define __Pyx_IsSameCFunction(func, cfunc) __Pyx__IsSameCyOrCFunction(func, cfunc) -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject* op, PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* 
code); -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj); -static CYTHON_INLINE PyObject *__Pyx_CyFunction_InitDefaults(PyObject *func, - PyTypeObject *defaults_type); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *m, - PyObject *tuple); -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *m, - PyObject *dict); -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *m, - PyObject *dict); -static int __pyx_CyFunction_init(PyObject *module); -#if CYTHON_METH_FASTCALL -static PyObject * __Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames); -#if CYTHON_BACKPORT_VECTORCALL || CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyFunction_func_vectorcall(f) (((__pyx_CyFunctionObject*)f)->func_vectorcall) -#else -#define __Pyx_CyFunction_func_vectorcall(f) (((PyCFunctionObject*)f)->vectorcall) -#endif -#endif - -/* CythonFunction.proto */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, - int flags, PyObject* qualname, - PyObject *closure, - PyObject *module, PyObject *globals, - PyObject* code); - -/* CLineInTraceback.proto */ -#if CYTHON_CLINE_IN_TRACEBACK && CYTHON_CLINE_IN_TRACEBACK_RUNTIME -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#else -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? 
c_line : 0) -#endif - -/* CodeObjectCache.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject __Pyx_CachedCodeObjectType; -#else -typedef PyCodeObject __Pyx_CachedCodeObjectType; -#endif -typedef struct { - __Pyx_CachedCodeObjectType* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - __pyx_atomic_int_type accessor_count; - #endif -}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static __Pyx_CachedCodeObjectType *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, __Pyx_CachedCodeObjectType* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -/* RealImag.proto */ -#if CYTHON_CCOMPLEX - #ifdef __cplusplus - #define __Pyx_CREAL(z) ((z).real()) - #define __Pyx_CIMAG(z) ((z).imag()) - #else - #define __Pyx_CREAL(z) (__real__(z)) - #define __Pyx_CIMAG(z) (__imag__(z)) - #endif -#else - #define __Pyx_CREAL(z) ((z).real) - #define __Pyx_CIMAG(z) ((z).imag) -#endif -#if defined(__cplusplus) && CYTHON_CCOMPLEX\ - && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103) - #define __Pyx_SET_CREAL(z,x) ((z).real(x)) - #define __Pyx_SET_CIMAG(z,y) ((z).imag(y)) -#else - #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x) - #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y) -#endif - -/* Arithmetic.proto */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #define __Pyx_c_eq_double(a, b) ((a)==(b)) - #define __Pyx_c_sum_double(a, b) ((a)+(b)) - #define __Pyx_c_diff_double(a, b) ((a)-(b)) - #define __Pyx_c_prod_double(a, b) ((a)*(b)) - #define __Pyx_c_quot_double(a, b) ((a)/(b)) - #define __Pyx_c_neg_double(a) (-(a)) - #ifdef 
__cplusplus - #define __Pyx_c_is_zero_double(z) ((z)==(double)0) - #define __Pyx_c_conj_double(z) (::std::conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (::std::abs(z)) - #define __Pyx_c_pow_double(a, b) (::std::pow(a, b)) - #endif - #else - #define __Pyx_c_is_zero_double(z) ((z)==0) - #define __Pyx_c_conj_double(z) (conj(z)) - #if 1 - #define __Pyx_c_abs_double(z) (cabs(z)) - #define __Pyx_c_pow_double(a, b) (cpow(a, b)) - #endif - #endif -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex); - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex); - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex); - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex); - #endif -#endif - -/* FromPy.proto */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject*); - -/* GCCDiagnostics.proto */ -#if !defined(__INTEL_COMPILER) && defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* ToPy.proto */ -#define __pyx_PyComplex_FromComplex(z)\ - PyComplex_FromDoubles((double)__Pyx_CREAL(z),\ - (double)__Pyx_CIMAG(z)) - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyLong_As_int(PyObject *); - -/* 
PyObjectVectorCallKwBuilder.proto */ -CYTHON_UNUSED static int __Pyx_VectorcallBuilder_AddArg_Check(PyObject *key, PyObject *value, PyObject *builder, PyObject **args, int n); -#if CYTHON_VECTORCALL -#if PY_VERSION_HEX >= 0x03090000 -#define __Pyx_Object_Vectorcall_CallFromBuilder PyObject_Vectorcall -#else -#define __Pyx_Object_Vectorcall_CallFromBuilder _PyObject_Vectorcall -#endif -#define __Pyx_MakeVectorcallBuilderKwds(n) PyTuple_New(n) -static int __Pyx_VectorcallBuilder_AddArg(PyObject *key, PyObject *value, PyObject *builder, PyObject **args, int n); -static int __Pyx_VectorcallBuilder_AddArgStr(const char *key, PyObject *value, PyObject *builder, PyObject **args, int n); -#else -#define __Pyx_Object_Vectorcall_CallFromBuilder __Pyx_PyObject_FastCallDict -#define __Pyx_MakeVectorcallBuilderKwds(n) __Pyx_PyDict_NewPresized(n) -#define __Pyx_VectorcallBuilder_AddArg(key, value, builder, args, n) PyDict_SetItem(builder, key, value) -#define __Pyx_VectorcallBuilder_AddArgStr(key, value, builder, args, n) PyDict_SetItemString(builder, key, value) -#endif - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyLong_From_long(long value); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyLong_From_int(int value); - -/* FormatTypeName.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API -typedef PyObject *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%U" -#define __Pyx_DECREF_TypeName(obj) Py_XDECREF(obj) -#if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 -#define __Pyx_PyType_GetFullyQualifiedName PyType_GetFullyQualifiedName -#else -static __Pyx_TypeName __Pyx_PyType_GetFullyQualifiedName(PyTypeObject* tp); -#endif -#else // !LIMITED_API -typedef const char *__Pyx_TypeName; -#define __Pyx_FMT_TYPENAME "%.200s" -#define __Pyx_PyType_GetFullyQualifiedName(tp) ((tp)->tp_name) -#define __Pyx_DECREF_TypeName(obj) -#endif - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyLong_As_long(PyObject *); - -/* FastTypeChecks.proto */ -#if 
CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) __Pyx_IsAnySubtype2(Py_TYPE(obj), (PyTypeObject *)type1, (PyTypeObject *)type2) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_TypeCheck2(obj, type1, type2) (PyObject_TypeCheck(obj, (PyTypeObject *)type1) || PyObject_TypeCheck(obj, (PyTypeObject *)type2)) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2) { - return PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2); -} -#endif -#define __Pyx_PyErr_ExceptionMatches2(err1, err2) __Pyx_PyErr_GivenExceptionMatches2(__Pyx_PyErr_CurrentExceptionType(), err1, err2) -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) -#ifdef PyExceptionInstance_Check - #define __Pyx_PyBaseException_Check(obj) PyExceptionInstance_Check(obj) -#else - #define __Pyx_PyBaseException_Check(obj) __Pyx_TypeCheck(obj, PyExc_BaseException) -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* 
PyObjectCall2Args.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethod1.proto */ -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg); - -/* ReturnWithStopIteration.proto */ -static CYTHON_INLINE void __Pyx_ReturnWithStopIteration(PyObject* value, int async, int iternext); - -/* CoroutineBase.proto */ -struct __pyx_CoroutineObject; -typedef PyObject *(*__pyx_coroutine_body_t)(struct __pyx_CoroutineObject *, PyThreadState *, PyObject *); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_ExcInfoStruct _PyErr_StackItem -#else -typedef struct { - PyObject *exc_type; - PyObject *exc_value; - PyObject *exc_traceback; -} __Pyx_ExcInfoStruct; -#endif -typedef struct __pyx_CoroutineObject { - PyObject_HEAD - __pyx_coroutine_body_t body; - PyObject *closure; - __Pyx_ExcInfoStruct gi_exc_state; - PyObject *gi_weakreflist; - PyObject *classobj; - PyObject *yieldfrom; - __Pyx_pyiter_sendfunc yieldfrom_am_send; - PyObject *gi_name; - PyObject *gi_qualname; - PyObject *gi_modulename; - PyObject *gi_code; - PyObject *gi_frame; -#if CYTHON_USE_SYS_MONITORING && (CYTHON_PROFILE || CYTHON_TRACE) - PyMonitoringState __pyx_pymonitoring_state[__Pyx_MonitoringEventTypes_CyGen_count]; - uint64_t __pyx_pymonitoring_version; -#endif - int resume_label; - char is_running; -} __pyx_CoroutineObject; -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject *type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name); -static CYTHON_INLINE void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *self); -static int __Pyx_Coroutine_clear(PyObject *self); -static 
__Pyx_PySendResult __Pyx_Coroutine_AmSend(PyObject *self, PyObject *value, PyObject **retval); -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value); -static __Pyx_PySendResult __Pyx_Coroutine_Close(PyObject *self, PyObject **retval); -static PyObject *__Pyx_Coroutine_Throw(PyObject *gen, PyObject *args); -#if CYTHON_USE_EXC_INFO_STACK -#define __Pyx_Coroutine_SwapException(self) -#define __Pyx_Coroutine_ResetAndClearException(self) __Pyx_Coroutine_ExceptionClear(&(self)->gi_exc_state) -#else -#define __Pyx_Coroutine_SwapException(self) {\ - __Pyx_ExceptionSwap(&(self)->gi_exc_state.exc_type, &(self)->gi_exc_state.exc_value, &(self)->gi_exc_state.exc_traceback);\ - __Pyx_Coroutine_ResetFrameBackpointer(&(self)->gi_exc_state);\ - } -#define __Pyx_Coroutine_ResetAndClearException(self) {\ - __Pyx_ExceptionReset((self)->gi_exc_state.exc_type, (self)->gi_exc_state.exc_value, (self)->gi_exc_state.exc_traceback);\ - (self)->gi_exc_state.exc_type = (self)->gi_exc_state.exc_value = (self)->gi_exc_state.exc_traceback = NULL;\ - } -#endif -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__pyx_tstate, pvalue) -#else -#define __Pyx_PyGen_FetchStopIterationValue(pvalue)\ - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, pvalue) -#endif -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *tstate, PyObject **pvalue); -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state); -static char __Pyx_Coroutine_test_and_set_is_running(__pyx_CoroutineObject *gen); -static void __Pyx_Coroutine_unset_is_running(__pyx_CoroutineObject *gen); -static char __Pyx_Coroutine_get_is_running(__pyx_CoroutineObject *gen); -static PyObject *__Pyx_Coroutine_get_is_running_getter(PyObject *gen, void *closure); -#if __PYX_HAS_PY_AM_SEND == 2 -static void __Pyx_SetBackportTypeAmSend(PyTypeObject *type, __Pyx_PyAsyncMethodsStruct 
*static_amsend_methods, __Pyx_pyiter_sendfunc am_send); -#endif -static PyObject *__Pyx_Coroutine_fail_reduce_ex(PyObject *self, PyObject *arg); - -/* Generator.proto */ -#define __Pyx_Generator_USED -#define __Pyx_Generator_CheckExact(obj) __Pyx_IS_TYPE(obj, __pyx_mstate_global->__pyx_GeneratorType) -#define __Pyx_Generator_New(body, code, closure, name, qualname, module_name)\ - __Pyx__Coroutine_New(__pyx_mstate_global->__pyx_GeneratorType, body, code, closure, name, qualname, module_name) -static PyObject *__Pyx_Generator_Next(PyObject *self); -static int __pyx_Generator_init(PyObject *module); -static CYTHON_INLINE PyObject *__Pyx_Generator_GetInlinedResult(PyObject *self); - -/* GetRuntimeVersion.proto */ -static unsigned long __Pyx_get_runtime_version(void); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(unsigned long ct_version, unsigned long rt_version, int allow_newer); - -/* MultiPhaseInitModuleState.proto */ -#if CYTHON_PEP489_MULTI_PHASE_INIT && CYTHON_USE_MODULE_STATE -static PyObject *__Pyx_State_FindModule(void*); -static int __Pyx_State_AddModule(PyObject* module, void*); -static int __Pyx_State_RemoveModule(void*); -#elif CYTHON_USE_MODULE_STATE -#define __Pyx_State_FindModule PyState_FindModule -#define __Pyx_State_AddModule PyState_AddModule -#define __Pyx_State_RemoveModule PyState_RemoveModule -#endif - -/* #### Code section: module_declarations ### */ -/* CythonABIVersion.proto */ -#if CYTHON_COMPILING_IN_LIMITED_API - #if CYTHON_METH_FASTCALL - #define __PYX_FASTCALL_ABI_SUFFIX "_fastcall" - #else - #define __PYX_FASTCALL_ABI_SUFFIX - #endif - #define __PYX_LIMITED_ABI_SUFFIX "limited" __PYX_FASTCALL_ABI_SUFFIX __PYX_AM_SEND_ABI_SUFFIX -#else - #define __PYX_LIMITED_ABI_SUFFIX -#endif -#if __PYX_HAS_PY_AM_SEND == 1 - #define __PYX_AM_SEND_ABI_SUFFIX -#elif __PYX_HAS_PY_AM_SEND == 2 - #define __PYX_AM_SEND_ABI_SUFFIX "amsendbackport" -#else - #define __PYX_AM_SEND_ABI_SUFFIX "noamsend" -#endif -#ifndef 
__PYX_MONITORING_ABI_SUFFIX - #define __PYX_MONITORING_ABI_SUFFIX -#endif -#if CYTHON_USE_TP_FINALIZE - #define __PYX_TP_FINALIZE_ABI_SUFFIX -#else - #define __PYX_TP_FINALIZE_ABI_SUFFIX "nofinalize" -#endif -#if CYTHON_USE_FREELISTS || !defined(__Pyx_AsyncGen_USED) - #define __PYX_FREELISTS_ABI_SUFFIX -#else - #define __PYX_FREELISTS_ABI_SUFFIX "nofreelists" -#endif -#define CYTHON_ABI __PYX_ABI_VERSION __PYX_LIMITED_ABI_SUFFIX __PYX_MONITORING_ABI_SUFFIX __PYX_TP_FINALIZE_ABI_SUFFIX __PYX_FREELISTS_ABI_SUFFIX __PYX_AM_SEND_ABI_SUFFIX -#define __PYX_ABI_MODULE_NAME "_cython_" CYTHON_ABI -#define __PYX_TYPE_MODULE_PREFIX __PYX_ABI_MODULE_NAME "." - - -/* Module declarations from "cython" */ - -/* Module declarations from "fontTools.cu2qu.cu2qu" */ -static CYTHON_INLINE double __pyx_f_9fontTools_5cu2qu_5cu2qu_dot(__pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu__complex_div_by_real(__pyx_t_double_complex, double); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_points(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_parameters(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_n_iter(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, PyObject *); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_three(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE 
__pyx_t_double_complex __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_control(double, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static CYTHON_INLINE __pyx_t_double_complex __pyx_f_9fontTools_5cu2qu_5cu2qu_calc_intersect(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex); /*proto*/ -static int __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, __pyx_t_double_complex, double); /*proto*/ -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_quadratic(PyObject *, double); /*proto*/ -static PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_spline(PyObject *, int, double, int); /*proto*/ -/* #### Code section: typeinfo ### */ -/* #### Code section: before_global_var ### */ -#define __Pyx_MODULE_NAME "fontTools.cu2qu.cu2qu" -extern int __pyx_module_is_main_fontTools__cu2qu__cu2qu; -int __pyx_module_is_main_fontTools__cu2qu__cu2qu = 0; - -/* Implementation of "fontTools.cu2qu.cu2qu" */ -/* #### Code section: global_var ### */ -static PyObject *__pyx_builtin_AttributeError; -static PyObject *__pyx_builtin_ImportError; -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ZeroDivisionError; -static PyObject *__pyx_builtin_AssertionError; -/* #### Code section: string_decls ### */ -static const char __pyx_k_[] = "."; -static const char __pyx_k_a[] = "a"; -static const char __pyx_k_b[] = "b"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_d[] = "d"; -static const char __pyx_k_i[] = "i"; -static const char __pyx_k_l[] = "l"; -static const char __pyx_k_n[] = "n"; -static const char __pyx_k_p[] = "p"; -static const char __pyx_k_s[] = "s"; -static const char __pyx_k__2[] = "?"; -static const char __pyx_k__3[] = "\200\001"; -static const char __pyx_k_a1[] = "a1"; -static const char __pyx_k_b1[] = "b1"; -static const char 
__pyx_k_c1[] = "c1"; -static const char __pyx_k_d1[] = "d1"; -static const char __pyx_k_dt[] = "dt"; -static const char __pyx_k_gc[] = "gc"; -static const char __pyx_k_p0[] = "p0"; -static const char __pyx_k_p1[] = "p1"; -static const char __pyx_k_p2[] = "p2"; -static const char __pyx_k_p3[] = "p3"; -static const char __pyx_k_t1[] = "t1"; -static const char __pyx_k_NAN[] = "NAN"; -static const char __pyx_k_NaN[] = "NaN"; -static const char __pyx_k_all[] = "__all__"; -static const char __pyx_k_pop[] = "pop"; -static const char __pyx_k_func[] = "__func__"; -static const char __pyx_k_imag[] = "imag"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_math[] = "math"; -static const char __pyx_k_name[] = "__name__"; -static const char __pyx_k_next[] = "next"; -static const char __pyx_k_real[] = "real"; -static const char __pyx_k_send[] = "send"; -static const char __pyx_k_spec[] = "__spec__"; -static const char __pyx_k_t1_2[] = "t1_2"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_Error[] = "Error"; -static const char __pyx_k_MAX_N[] = "MAX_N"; -static const char __pyx_k_close[] = "close"; -static const char __pyx_k_curve[] = "curve"; -static const char __pyx_k_isnan[] = "isnan"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_throw[] = "throw"; -static const char __pyx_k_value[] = "value"; -static const char __pyx_k_curves[] = "curves"; -static const char __pyx_k_enable[] = "enable"; -static const char __pyx_k_errors[] = "errors"; -static const char __pyx_k_last_i[] = "last_i"; -static const char __pyx_k_module[] = "__module__"; -static const char __pyx_k_spline[] = "spline"; -static const char __pyx_k_delta_2[] = "delta_2"; -static const char __pyx_k_delta_3[] = "delta_3"; -static const char __pyx_k_disable[] = "disable"; -static const char __pyx_k_max_err[] = "max_err"; -static const char __pyx_k_splines[] = "splines"; -static const char __pyx_k_COMPILED[] = "COMPILED"; -static const char 
__pyx_k_qualname[] = "__qualname__"; -static const char __pyx_k_set_name[] = "__set_name__"; -static const char __pyx_k_isenabled[] = "isenabled"; -static const char __pyx_k_Cu2QuError[] = "Cu2QuError"; -static const char __pyx_k_max_errors[] = "max_errors"; -static const char __pyx_k_ImportError[] = "ImportError"; -static const char __pyx_k_initializing[] = "_initializing"; -static const char __pyx_k_is_coroutine[] = "_is_coroutine"; -static const char __pyx_k_all_quadratic[] = "all_quadratic"; -static const char __pyx_k_AssertionError[] = "AssertionError"; -static const char __pyx_k_AttributeError[] = "AttributeError"; -static const char __pyx_k_ZeroDivisionError[] = "ZeroDivisionError"; -static const char __pyx_k_asyncio_coroutines[] = "asyncio.coroutines"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_curve_to_quadratic[] = "curve_to_quadratic"; -static const char __pyx_k_ApproxNotFoundError[] = "ApproxNotFoundError"; -static const char __pyx_k_curves_to_quadratic[] = "curves_to_quadratic"; -static const char __pyx_k_fontTools_cu2qu_cu2qu[] = "fontTools.cu2qu.cu2qu"; -static const char __pyx_k_split_cubic_into_n_gen[] = "_split_cubic_into_n_gen"; -static const char __pyx_k_Lib_fontTools_cu2qu_cu2qu_py[] = "Lib/fontTools/cu2qu/cu2qu.py"; -static const char __pyx_k_curves_to_quadratic_line_503[] = "curves_to_quadratic (line 503)"; -static const char __pyx_k_AWBc_U_U_3fBa_AWCy_7_2QgQgT_a_Q[] = "\200\001\360\006\000()\360*\000\005\r\210A\210W\220B\220c\230\024\230U\240!\340\004\010\210\005\210U\220!\2203\220f\230B\230a\330\010\021\320\021$\240A\240W\250C\250y\270\001\330\010\013\2107\220'\230\021\340\014\023\2202\220Q\220g\230Q\230g\240T\250\025\250a\340\004\n\320\n\035\230Q\230a"; -static const char __pyx_k_J_Qawb_4uG4y_3a_3c_1A_avRq_T_AV[] = 
"\200\001\340,-\360J\001\000\005\016\210Q\210a\210w\220b\230\003\2304\230u\240G\2504\250y\270\001\330\004\013\2103\210a\210|\2303\230c\240\021\240!\340\004\010\210\003\2101\210A\330\004\016\210a\210v\220R\220q\330\004\r\210T\220\021\330\004\010\210\001\330\004\005\330\010\021\320\021$\240A\240V\2501\250D\260\003\260:\270Q\270d\300!\330\010\013\2107\220#\220Q\330\014\017\210r\220\023\220A\330\020\021\330\014\021\220\021\330\014\025\220Q\330\014\r\330\010\017\210q\220\005\220Q\330\010\r\210R\210r\220\023\220B\220a\330\010\013\2102\210S\220\001\340\014\023\2201\220B\220a\220w\230a\230w\240d\250%\250x\260t\270:\300Q\340\004\n\320\n\035\230Q\230a"; -static const char __pyx_k_Return_quadratic_Bezier_splines[] = "Return quadratic Bezier splines approximating the input cubic Beziers.\n\n Args:\n curves: A sequence of *n* curves, each curve being a sequence of four\n 2D tuples.\n max_errors: A sequence of *n* floats representing the maximum permissible\n deviation from each of the cubic Bezier curves.\n all_quadratic (bool): If True (default) returned values are a\n quadratic spline. If False, they are either a single quadratic\n curve or a single cubic curve.\n\n Example::\n\n >>> curves_to_quadratic( [\n ... [ (50,50), (100,100), (150,100), (200,50) ],\n ... [ (75,50), (120,100), (150,75), (200,60) ]\n ... ], [1,1] )\n [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]]\n\n The returned splines have \"implied oncurve points\" suitable for use in\n TrueType ``glif`` outlines - i.e. 
in the first spline returned above,\n the first quadratic segment runs from (50,50) to\n ( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...).\n\n Returns:\n If all_quadratic is True, a list of splines, each spline being a list\n of 2D tuples.\n\n If all_quadratic is False, a list of curves, each curve being a quadratic\n (length 3), or cubic (length 4).\n\n Raises:\n fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation\n can be found for all curves with the given parameters.\n "; -/* #### Code section: decls ### */ -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu__split_cubic_into_n_gen(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3, int __pyx_v_n); /* proto */ -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu_3curve_to_quadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, double __pyx_v_max_err, int __pyx_v_all_quadratic); /* proto */ -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu_5curves_to_quadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curves, PyObject *__pyx_v_max_errors, int __pyx_v_all_quadratic); /* proto */ -static PyObject *__pyx_tp_new_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -/* #### Code section: late_includes ### */ -/* #### Code section: module_state ### */ -/* SmallCodeConfig */ -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -typedef struct { - PyObject *__pyx_d; - PyObject *__pyx_b; - PyObject *__pyx_cython_runtime; - PyObject *__pyx_empty_tuple; - PyObject *__pyx_empty_bytes; - PyObject *__pyx_empty_unicode; - #ifdef __Pyx_CyFunction_USED - PyTypeObject 
*__pyx_CyFunctionType;
- #endif
- #ifdef __Pyx_FusedFunction_USED
- PyTypeObject *__pyx_FusedFunctionType;
- #endif
- #ifdef __Pyx_Generator_USED
- PyTypeObject *__pyx_GeneratorType;
- #endif
- #ifdef __Pyx_IterableCoroutine_USED
- PyTypeObject *__pyx_IterableCoroutineType;
- #endif
- #ifdef __Pyx_Coroutine_USED
- PyTypeObject *__pyx_CoroutineAwaitType;
- #endif
- #ifdef __Pyx_Coroutine_USED
- PyTypeObject *__pyx_CoroutineType;
- #endif
- PyObject *__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen;
- PyTypeObject *__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen;
- __Pyx_CachedCFunction __pyx_umethod_PyDict_Type_pop;
- PyObject *__pyx_codeobj_tab[3];
- PyObject *__pyx_string_tab[79];
- PyObject *__pyx_int_1;
- PyObject *__pyx_int_2;
- PyObject *__pyx_int_3;
- PyObject *__pyx_int_4;
- PyObject *__pyx_int_6;
- PyObject *__pyx_int_100;
-/* #### Code section: module_state_contents ### */
-/* IterNextPlain.module_state_decls */
-#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x030A0000
-PyObject *__Pyx_GetBuiltinNext_LimitedAPI_cache;
-#endif
-
-
-#if CYTHON_USE_FREELISTS
-struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *__pyx_freelist_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen[8];
-int __pyx_freecount_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen;
-#endif
-/* CommonTypesMetaclass.module_state_decls */
-PyTypeObject *__pyx_CommonTypesMetaclassType;
-
-/* CachedMethodType.module_state_decls */
-#if CYTHON_COMPILING_IN_LIMITED_API
-PyObject *__Pyx_CachedMethodType;
-#endif
-
-/* CodeObjectCache.module_state_decls */
-struct __Pyx_CodeObjectCache __pyx_code_cache;
-
-/* #### Code section: module_state_end ### */
-} __pyx_mstatetype;
-
-#if CYTHON_USE_MODULE_STATE
-#ifdef __cplusplus
-namespace {
-extern struct PyModuleDef __pyx_moduledef;
-} /* anonymous namespace */
-#else
-static struct PyModuleDef __pyx_moduledef;
-#endif
-
-#define __pyx_mstate_global (__Pyx_PyModule_GetState(__Pyx_State_FindModule(&__pyx_moduledef)))
-
-#define __pyx_m (__Pyx_State_FindModule(&__pyx_moduledef))
-#else
-static __pyx_mstatetype __pyx_mstate_global_static =
-#ifdef __cplusplus
- {};
-#else
- {0};
-#endif
-static __pyx_mstatetype * const __pyx_mstate_global = &__pyx_mstate_global_static;
-#endif
-/* #### Code section: constant_name_defines ### */
-#define __pyx_kp_u_ __pyx_string_tab[0]
-#define __pyx_n_u_ApproxNotFoundError __pyx_string_tab[1]
-#define __pyx_n_u_AssertionError __pyx_string_tab[2]
-#define __pyx_n_u_AttributeError __pyx_string_tab[3]
-#define __pyx_n_u_COMPILED __pyx_string_tab[4]
-#define __pyx_n_u_Cu2QuError __pyx_string_tab[5]
-#define __pyx_n_u_Error __pyx_string_tab[6]
-#define __pyx_n_u_ImportError __pyx_string_tab[7]
-#define __pyx_kp_u_Lib_fontTools_cu2qu_cu2qu_py __pyx_string_tab[8]
-#define __pyx_n_u_MAX_N __pyx_string_tab[9]
-#define __pyx_n_u_NAN __pyx_string_tab[10]
-#define __pyx_n_u_NaN __pyx_string_tab[11]
-#define __pyx_kp_u_Return_quadratic_Bezier_splines __pyx_string_tab[12]
-#define __pyx_n_u_ZeroDivisionError __pyx_string_tab[13]
-#define __pyx_kp_u__2 __pyx_string_tab[14]
-#define __pyx_n_u_a __pyx_string_tab[15]
-#define __pyx_n_u_a1 __pyx_string_tab[16]
-#define __pyx_n_u_all __pyx_string_tab[17]
-#define __pyx_n_u_all_quadratic __pyx_string_tab[18]
-#define __pyx_n_u_asyncio_coroutines __pyx_string_tab[19]
-#define __pyx_n_u_b __pyx_string_tab[20]
-#define __pyx_n_u_b1 __pyx_string_tab[21]
-#define __pyx_n_u_c __pyx_string_tab[22]
-#define __pyx_n_u_c1 __pyx_string_tab[23]
-#define __pyx_n_u_cline_in_traceback __pyx_string_tab[24]
-#define __pyx_n_u_close __pyx_string_tab[25]
-#define __pyx_n_u_curve __pyx_string_tab[26]
-#define __pyx_n_u_curve_to_quadratic __pyx_string_tab[27]
-#define __pyx_n_u_curves __pyx_string_tab[28]
-#define __pyx_n_u_curves_to_quadratic __pyx_string_tab[29]
-#define __pyx_kp_u_curves_to_quadratic_line_503 __pyx_string_tab[30]
-#define __pyx_n_u_d __pyx_string_tab[31]
-#define __pyx_n_u_d1 __pyx_string_tab[32]
-#define __pyx_n_u_delta_2 __pyx_string_tab[33]
-#define __pyx_n_u_delta_3 __pyx_string_tab[34]
-#define __pyx_kp_u_disable __pyx_string_tab[35]
-#define __pyx_n_u_dt __pyx_string_tab[36]
-#define __pyx_kp_u_enable __pyx_string_tab[37]
-#define __pyx_n_u_errors __pyx_string_tab[38]
-#define __pyx_n_u_fontTools_cu2qu_cu2qu __pyx_string_tab[39]
-#define __pyx_n_u_func __pyx_string_tab[40]
-#define __pyx_kp_u_gc __pyx_string_tab[41]
-#define __pyx_n_u_i __pyx_string_tab[42]
-#define __pyx_n_u_imag __pyx_string_tab[43]
-#define __pyx_n_u_initializing __pyx_string_tab[44]
-#define __pyx_n_u_is_coroutine __pyx_string_tab[45]
-#define __pyx_kp_u_isenabled __pyx_string_tab[46]
-#define __pyx_n_u_isnan __pyx_string_tab[47]
-#define __pyx_n_u_l __pyx_string_tab[48]
-#define __pyx_n_u_last_i __pyx_string_tab[49]
-#define __pyx_n_u_main __pyx_string_tab[50]
-#define __pyx_n_u_math __pyx_string_tab[51]
-#define __pyx_n_u_max_err __pyx_string_tab[52]
-#define __pyx_n_u_max_errors __pyx_string_tab[53]
-#define __pyx_n_u_module __pyx_string_tab[54]
-#define __pyx_n_u_n __pyx_string_tab[55]
-#define __pyx_n_u_name __pyx_string_tab[56]
-#define __pyx_n_u_next __pyx_string_tab[57]
-#define __pyx_n_u_p __pyx_string_tab[58]
-#define __pyx_n_u_p0 __pyx_string_tab[59]
-#define __pyx_n_u_p1 __pyx_string_tab[60]
-#define __pyx_n_u_p2 __pyx_string_tab[61]
-#define __pyx_n_u_p3 __pyx_string_tab[62]
-#define __pyx_n_u_pop __pyx_string_tab[63]
-#define __pyx_n_u_qualname __pyx_string_tab[64]
-#define __pyx_n_u_range __pyx_string_tab[65]
-#define __pyx_n_u_real __pyx_string_tab[66]
-#define __pyx_n_u_s __pyx_string_tab[67]
-#define __pyx_n_u_send __pyx_string_tab[68]
-#define __pyx_n_u_set_name __pyx_string_tab[69]
-#define __pyx_n_u_spec __pyx_string_tab[70]
-#define __pyx_n_u_spline __pyx_string_tab[71]
-#define __pyx_n_u_splines __pyx_string_tab[72]
-#define __pyx_n_u_split_cubic_into_n_gen __pyx_string_tab[73]
-#define __pyx_n_u_t1 __pyx_string_tab[74]
-#define __pyx_n_u_t1_2 __pyx_string_tab[75]
-#define __pyx_n_u_test __pyx_string_tab[76]
-#define __pyx_n_u_throw __pyx_string_tab[77]
-#define __pyx_n_u_value __pyx_string_tab[78]
-/* #### Code section: module_state_clear ### */
-#if CYTHON_USE_MODULE_STATE
-static CYTHON_SMALL_CODE int __pyx_m_clear(PyObject *m) {
- __pyx_mstatetype *clear_module_state = __Pyx_PyModule_GetState(m);
- if (!clear_module_state) return 0;
- Py_CLEAR(clear_module_state->__pyx_d);
- Py_CLEAR(clear_module_state->__pyx_b);
- Py_CLEAR(clear_module_state->__pyx_cython_runtime);
- Py_CLEAR(clear_module_state->__pyx_empty_tuple);
- Py_CLEAR(clear_module_state->__pyx_empty_bytes);
- Py_CLEAR(clear_module_state->__pyx_empty_unicode);
- #ifdef __Pyx_CyFunction_USED
- Py_CLEAR(clear_module_state->__pyx_CyFunctionType);
- #endif
- #ifdef __Pyx_FusedFunction_USED
- Py_CLEAR(clear_module_state->__pyx_FusedFunctionType);
- #endif
- #if CYTHON_PEP489_MULTI_PHASE_INIT
- __Pyx_State_RemoveModule(NULL);
- #endif
- Py_CLEAR(clear_module_state->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen);
- Py_CLEAR(clear_module_state->__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen);
- for (int i=0; i<3; ++i) { Py_CLEAR(clear_module_state->__pyx_codeobj_tab[i]); }
- for (int i=0; i<79; ++i) { Py_CLEAR(clear_module_state->__pyx_string_tab[i]); }
- Py_CLEAR(clear_module_state->__pyx_int_1);
- Py_CLEAR(clear_module_state->__pyx_int_2);
- Py_CLEAR(clear_module_state->__pyx_int_3);
- Py_CLEAR(clear_module_state->__pyx_int_4);
- Py_CLEAR(clear_module_state->__pyx_int_6);
- Py_CLEAR(clear_module_state->__pyx_int_100);
- return 0;
-}
-#endif
-/* #### Code section: module_state_traverse ### */
-#if CYTHON_USE_MODULE_STATE
-static CYTHON_SMALL_CODE int __pyx_m_traverse(PyObject *m, visitproc visit, void *arg) {
- __pyx_mstatetype *traverse_module_state = __Pyx_PyModule_GetState(m);
- if (!traverse_module_state) return 0;
- Py_VISIT(traverse_module_state->__pyx_d);
- Py_VISIT(traverse_module_state->__pyx_b);
- Py_VISIT(traverse_module_state->__pyx_cython_runtime);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_empty_tuple);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_empty_bytes);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_empty_unicode);
- #ifdef __Pyx_CyFunction_USED
- Py_VISIT(traverse_module_state->__pyx_CyFunctionType);
- #endif
- #ifdef __Pyx_FusedFunction_USED
- Py_VISIT(traverse_module_state->__pyx_FusedFunctionType);
- #endif
- Py_VISIT(traverse_module_state->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen);
- Py_VISIT(traverse_module_state->__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen);
- for (int i=0; i<3; ++i) { __Pyx_VISIT_CONST(traverse_module_state->__pyx_codeobj_tab[i]); }
- for (int i=0; i<79; ++i) { __Pyx_VISIT_CONST(traverse_module_state->__pyx_string_tab[i]); }
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_1);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_2);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_3);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_4);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_6);
- __Pyx_VISIT_CONST(traverse_module_state->__pyx_int_100);
- return 0;
-}
-#endif
-/* #### Code section: module_code ### */
-
-/* "fontTools/cu2qu/cu2qu.py":37
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.returns(cython.double)
-*/
-
-static CYTHON_INLINE double __pyx_f_9fontTools_5cu2qu_5cu2qu_dot(__pyx_t_double_complex __pyx_v_v1, __pyx_t_double_complex __pyx_v_v2) {
- double __pyx_v_result;
- double __pyx_r;
- double __pyx_t_1;
- int __pyx_t_2;
-
- /* "fontTools/cu2qu/cu2qu.py":51
- * double: Dot product.
- * """
- * result = (v1 * v2.conjugate()).real # <<<<<<<<<<<<<<
- * # When vectors are perpendicular (i.e. dot product is 0), the above expression may
- * # yield slightly different results when running in pure Python vs C/Cython,
-*/
- __pyx_t_1 = __Pyx_CREAL(__Pyx_c_prod_double(__pyx_v_v1, __Pyx_c_conj_double(__pyx_v_v2)));
- __pyx_v_result = __pyx_t_1;
-
- /* "fontTools/cu2qu/cu2qu.py":58
- * # implementation. Because we are using the result in a denominator and catching
- * # ZeroDivisionError (see `calc_intersect`), it's best to normalize the result here.
- * if abs(result) < 1e-15: # <<<<<<<<<<<<<<
- * result = 0.0
- * return result
-*/
- __pyx_t_1 = fabs(__pyx_v_result);
- __pyx_t_2 = (__pyx_t_1 < 1e-15);
- if (__pyx_t_2) {
-
- /* "fontTools/cu2qu/cu2qu.py":59
- * # ZeroDivisionError (see `calc_intersect`), it's best to normalize the result here.
- * if abs(result) < 1e-15:
- * result = 0.0 # <<<<<<<<<<<<<<
- * return result
- *
-*/
- __pyx_v_result = 0.0;
-
- /* "fontTools/cu2qu/cu2qu.py":58
- * # implementation. Because we are using the result in a denominator and catching
- * # ZeroDivisionError (see `calc_intersect`), it's best to normalize the result here.
- * if abs(result) < 1e-15: # <<<<<<<<<<<<<<
- * result = 0.0
- * return result
-*/
- }
-
- /* "fontTools/cu2qu/cu2qu.py":60
- * if abs(result) < 1e-15:
- * result = 0.0
- * return result # <<<<<<<<<<<<<<
- *
- *
-*/
- __pyx_r = __pyx_v_result;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":37
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.returns(cython.double)
-*/
-
- /* function exit code */
- __pyx_L0:;
- return __pyx_r;
-}
-
-/* "fontTools/cu2qu/cu2qu.py":63
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.locals(z=cython.complex, den=cython.double)
- * @cython.locals(zr=cython.double, zi=cython.double)
-*/
-
-static PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu__complex_div_by_real(__pyx_t_double_complex __pyx_v_z, double __pyx_v_den) {
- double __pyx_v_zr;
- double __pyx_v_zi;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- double __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- size_t __pyx_t_7;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("_complex_div_by_real", 0);
-
- /* "fontTools/cu2qu/cu2qu.py":75
- * https://github.com/fonttools/fonttools/issues/3928
- * """
- * zr = z.real # <<<<<<<<<<<<<<
- * zi = z.imag
- * return complex(zr / den, zi / den)
-*/
- __pyx_t_1 = __Pyx_CREAL(__pyx_v_z);
- __pyx_v_zr = __pyx_t_1;
-
- /* "fontTools/cu2qu/cu2qu.py":76
- * """
- * zr = z.real
- * zi = z.imag # <<<<<<<<<<<<<<
- * return complex(zr / den, zi / den)
- *
-*/
- __pyx_t_1 = __Pyx_CIMAG(__pyx_v_z);
- __pyx_v_zi = __pyx_t_1;
-
- /* "fontTools/cu2qu/cu2qu.py":77
- * zr = z.real
- * zi = z.imag
- * return complex(zr / den, zi / den) # <<<<<<<<<<<<<<
- *
- *
-*/
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = NULL;
- __Pyx_INCREF((PyObject *)(&PyComplex_Type));
- __pyx_t_4 = ((PyObject *)(&PyComplex_Type));
- if (unlikely(__pyx_v_den == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "float division");
- __PYX_ERR(0, 77, __pyx_L1_error)
- }
- __pyx_t_5 = PyFloat_FromDouble((__pyx_v_zr / __pyx_v_den)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- if (unlikely(__pyx_v_den == 0)) {
- PyErr_SetString(PyExc_ZeroDivisionError, "float division");
- __PYX_ERR(0, 77, __pyx_L1_error)
- }
- __pyx_t_6 = PyFloat_FromDouble((__pyx_v_zi / __pyx_v_den)); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __pyx_t_7 = 1;
- {
- PyObject *__pyx_callargs[3] = {__pyx_t_3, __pyx_t_5, __pyx_t_6};
- __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+__pyx_t_7, (3-__pyx_t_7) | (__pyx_t_7*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET));
- __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 77, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- }
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":63
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.locals(z=cython.complex, den=cython.double)
- * @cython.locals(zr=cython.double, zi=cython.double)
-*/
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("fontTools.cu2qu.cu2qu._complex_div_by_real", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "fontTools/cu2qu/cu2qu.py":80
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)
-*/
-
-static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_points(__pyx_t_double_complex __pyx_v_a, __pyx_t_double_complex __pyx_v_b, __pyx_t_double_complex __pyx_v_c, __pyx_t_double_complex __pyx_v_d) {
- __pyx_t_double_complex __pyx_v__1;
- __pyx_t_double_complex __pyx_v__2;
- __pyx_t_double_complex __pyx_v__3;
- __pyx_t_double_complex __pyx_v__4;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- __pyx_t_double_complex __pyx_t_4;
- PyObject *__pyx_t_5 = NULL;
- PyObject *__pyx_t_6 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("calc_cubic_points", 0);
-
- /* "fontTools/cu2qu/cu2qu.py":87
- * )
- * def calc_cubic_points(a, b, c, d):
- * _1 = d # <<<<<<<<<<<<<<
- * _2 = _complex_div_by_real(c, 3.0) + d
- * _3 = _complex_div_by_real(b + c, 3.0) + _2
-*/
- __pyx_v__1 = __pyx_v_d;
-
- /* "fontTools/cu2qu/cu2qu.py":88
- * def calc_cubic_points(a, b, c, d):
- * _1 = d
- * _2 = _complex_div_by_real(c, 3.0) + d # <<<<<<<<<<<<<<
- * _3 = _complex_div_by_real(b + c, 3.0) + _2
- * _4 = a + d + c + b
-*/
- __pyx_t_1 = __pyx_f_9fontTools_5cu2qu_5cu2qu__complex_div_by_real(__pyx_v_c, 3.0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 88, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_d); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 88, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyNumber_Add(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 88, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_3); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 88, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_v__2 = __pyx_t_4;
-
- /* "fontTools/cu2qu/cu2qu.py":89
- * _1 = d
- * _2 = _complex_div_by_real(c, 3.0) + d
- * _3 = _complex_div_by_real(b + c, 3.0) + _2 # <<<<<<<<<<<<<<
- * _4 = a + d + c + b
- * return _1, _2, _3, _4
-*/
- __pyx_t_3 = __pyx_f_9fontTools_5cu2qu_5cu2qu__complex_div_by_real(__Pyx_c_sum_double(__pyx_v_b, __pyx_v_c), 3.0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 89, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v__2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 89, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_1 = PyNumber_Add(__pyx_t_3, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 89, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 89, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
- __pyx_v__3 = __pyx_t_4;
-
- /* "fontTools/cu2qu/cu2qu.py":90
- * _2 = _complex_div_by_real(c, 3.0) + d
- * _3 = _complex_div_by_real(b + c, 3.0) + _2
- * _4 = a + d + c + b # <<<<<<<<<<<<<<
- * return _1, _2, _3, _4
- *
-*/
- __pyx_v__4 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_a, __pyx_v_d), __pyx_v_c), __pyx_v_b);
-
- /* "fontTools/cu2qu/cu2qu.py":91
- * _3 = _complex_div_by_real(b + c, 3.0) + _2
- * _4 = a + d + c + b
- * return _1, _2, _3, _4 # <<<<<<<<<<<<<<
- *
- *
-*/
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v__1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 91, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v__2); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 91, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v__3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 91, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v__4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 91, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 91, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_6);
- __Pyx_GIVEREF(__pyx_t_1);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 91, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_2);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_2) != (0)) __PYX_ERR(0, 91, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_3);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_3) != (0)) __PYX_ERR(0, 91, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_5);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_5) != (0)) __PYX_ERR(0, 91, __pyx_L1_error);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
- __pyx_t_5 = 0;
- __pyx_r = __pyx_t_6;
- __pyx_t_6 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":80
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)
-*/
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_XDECREF(__pyx_t_6);
- __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.calc_cubic_points", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "fontTools/cu2qu/cu2qu.py":94
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.locals(
-*/
-
-static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_parameters(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3) {
- __pyx_t_double_complex __pyx_v_a;
- __pyx_t_double_complex __pyx_v_b;
- __pyx_t_double_complex __pyx_v_c;
- __pyx_t_double_complex __pyx_v_d;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- PyObject *__pyx_t_1 = NULL;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("calc_cubic_parameters", 0);
-
- /* "fontTools/cu2qu/cu2qu.py":101
- * @cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex)
- * def calc_cubic_parameters(p0, p1, p2, p3):
- * c = (p1 - p0) * 3.0 # <<<<<<<<<<<<<<
- * b = (p2 - p1) * 3.0 - c
- * d = p0
-*/
- __pyx_v_c = __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_p1, __pyx_v_p0), __pyx_t_double_complex_from_parts(3.0, 0));
-
- /* "fontTools/cu2qu/cu2qu.py":102
- * def calc_cubic_parameters(p0, p1, p2, p3):
- * c = (p1 - p0) * 3.0
- * b = (p2 - p1) * 3.0 - c # <<<<<<<<<<<<<<
- * d = p0
- * a = p3 - d - c - b
-*/
- __pyx_v_b = __Pyx_c_diff_double(__Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p1), __pyx_t_double_complex_from_parts(3.0, 0)), __pyx_v_c);
-
- /* "fontTools/cu2qu/cu2qu.py":103
- * c = (p1 - p0) * 3.0
- * b = (p2 - p1) * 3.0 - c
- * d = p0 # <<<<<<<<<<<<<<
- * a = p3 - d - c - b
- * return a, b, c, d
-*/
- __pyx_v_d = __pyx_v_p0;
-
- /* "fontTools/cu2qu/cu2qu.py":104
- * b = (p2 - p1) * 3.0 - c
- * d = p0
- * a = p3 - d - c - b # <<<<<<<<<<<<<<
- * return a, b, c, d
- *
-*/
- __pyx_v_a = __Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__pyx_v_p3, __pyx_v_d), __pyx_v_c), __pyx_v_b);
-
- /* "fontTools/cu2qu/cu2qu.py":105
- * d = p0
- * a = p3 - d - c - b
- * return a, b, c, d # <<<<<<<<<<<<<<
- *
- *
-*/
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_a); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 105, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_1);
- __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_b); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 105, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_v_c); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 105, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_v_d); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 105, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 105, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_GIVEREF(__pyx_t_1);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 105, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_2);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2) != (0)) __PYX_ERR(0, 105, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_3);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3) != (0)) __PYX_ERR(0, 105, __pyx_L1_error);
- __Pyx_GIVEREF(__pyx_t_4);
- if (__Pyx_PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4) != (0)) __PYX_ERR(0, 105, __pyx_L1_error);
- __pyx_t_1 = 0;
- __pyx_t_2 = 0;
- __pyx_t_3 = 0;
- __pyx_t_4 = 0;
- __pyx_r = __pyx_t_5;
- __pyx_t_5 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":94
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.locals(
-*/
-
- /* function exit code */
- __pyx_L1_error:;
- __Pyx_XDECREF(__pyx_t_1);
- __Pyx_XDECREF(__pyx_t_2);
- __Pyx_XDECREF(__pyx_t_3);
- __Pyx_XDECREF(__pyx_t_4);
- __Pyx_XDECREF(__pyx_t_5);
- __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.calc_cubic_parameters", __pyx_clineno, __pyx_lineno, __pyx_filename);
- __pyx_r = 0;
- __pyx_L0:;
- __Pyx_XGIVEREF(__pyx_r);
- __Pyx_RefNannyFinishContext();
- return __pyx_r;
-}
-
-/* "fontTools/cu2qu/cu2qu.py":108
- *
- *
- * @cython.cfunc # <<<<<<<<<<<<<<
- * @cython.inline
- * @cython.locals(
-*/
-
-static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_n_iter(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3, PyObject *__pyx_v_n) {
- PyObject *__pyx_v_a = NULL;
- PyObject *__pyx_v_b = NULL;
- PyObject *__pyx_r = NULL;
- __Pyx_RefNannyDeclarations
- int __pyx_t_1;
- PyObject *__pyx_t_2 = NULL;
- PyObject *__pyx_t_3 = NULL;
- PyObject *__pyx_t_4 = NULL;
- PyObject *__pyx_t_5 = NULL;
- PyObject *(*__pyx_t_6)(PyObject *);
- __pyx_t_double_complex __pyx_t_7;
- __pyx_t_double_complex __pyx_t_8;
- __pyx_t_double_complex __pyx_t_9;
- __pyx_t_double_complex __pyx_t_10;
- PyObject *__pyx_t_11 = NULL;
- PyObject *__pyx_t_12 = NULL;
- PyObject *__pyx_t_13 = NULL;
- size_t __pyx_t_14;
- int __pyx_lineno = 0;
- const char *__pyx_filename = NULL;
- int __pyx_clineno = 0;
- __Pyx_RefNannySetupContext("split_cubic_into_n_iter", 0);
-
- /* "fontTools/cu2qu/cu2qu.py":130
- * """
- * # Hand-coded special-cases
- * if n == 2: # <<<<<<<<<<<<<<
- * return iter(split_cubic_into_two(p0, p1, p2, p3))
- * if n == 3:
-*/
- __pyx_t_1 = (__Pyx_PyLong_BoolEqObjC(__pyx_v_n, __pyx_mstate_global->__pyx_int_2, 2, 0)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 130, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "fontTools/cu2qu/cu2qu.py":131
- * # Hand-coded special-cases
- * if n == 2:
- * return iter(split_cubic_into_two(p0, p1, p2, p3)) # <<<<<<<<<<<<<<
- * if n == 3:
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
-*/
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 131, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_3 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 131, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_r = __pyx_t_3;
- __pyx_t_3 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":130
- * """
- * # Hand-coded special-cases
- * if n == 2: # <<<<<<<<<<<<<<
- * return iter(split_cubic_into_two(p0, p1, p2, p3))
- * if n == 3:
-*/
- }
-
- /* "fontTools/cu2qu/cu2qu.py":132
- * if n == 2:
- * return iter(split_cubic_into_two(p0, p1, p2, p3))
- * if n == 3: # <<<<<<<<<<<<<<
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
- * if n == 4:
-*/
- __pyx_t_1 = (__Pyx_PyLong_BoolEqObjC(__pyx_v_n, __pyx_mstate_global->__pyx_int_3, 3, 0)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 132, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "fontTools/cu2qu/cu2qu.py":133
- * return iter(split_cubic_into_two(p0, p1, p2, p3))
- * if n == 3:
- * return iter(split_cubic_into_three(p0, p1, p2, p3)) # <<<<<<<<<<<<<<
- * if n == 4:
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
-*/
- __Pyx_XDECREF(__pyx_r);
- __pyx_t_3 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_three(__pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 133, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 133, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_2;
- __pyx_t_2 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":132
- * if n == 2:
- * return iter(split_cubic_into_two(p0, p1, p2, p3))
- * if n == 3: # <<<<<<<<<<<<<<
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
- * if n == 4:
-*/
- }
-
- /* "fontTools/cu2qu/cu2qu.py":134
- * if n == 3:
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
- * if n == 4: # <<<<<<<<<<<<<<
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter(
-*/
- __pyx_t_1 = (__Pyx_PyLong_BoolEqObjC(__pyx_v_n, __pyx_mstate_global->__pyx_int_4, 4, 0)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 134, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "fontTools/cu2qu/cu2qu.py":135
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
- * if n == 4:
- * a, b = split_cubic_into_two(p0, p1, p2, p3) # <<<<<<<<<<<<<<
- * return iter(
- * split_cubic_into_two(a[0], a[1], a[2], a[3])
-*/
- __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- if ((likely(PyTuple_CheckExact(__pyx_t_2))) || (PyList_CheckExact(__pyx_t_2))) {
- PyObject* sequence = __pyx_t_2;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 135, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_4);
- } else {
- __pyx_t_3 = __Pyx_PyList_GetItemRef(sequence, 0);
- if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_XGOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PyList_GetItemRef(sequence, 1);
- if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_XGOTREF(__pyx_t_4);
- }
- #else
- __pyx_t_3 = __Pyx_PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_4 = __Pyx_PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- #endif
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- } else {
- Py_ssize_t index = -1;
- __pyx_t_5 = PyObject_GetIter(__pyx_t_2); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 135, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_6 = (CYTHON_COMPILING_IN_LIMITED_API) ? PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_5);
- index = 0; __pyx_t_3 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_3)) goto __pyx_L6_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- index = 1; __pyx_t_4 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_4)) goto __pyx_L6_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_4);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_6(__pyx_t_5), 2) < 0) __PYX_ERR(0, 135, __pyx_L1_error)
- __pyx_t_6 = NULL;
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L7_unpacking_done;
- __pyx_L6_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_t_6 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 135, __pyx_L1_error)
- __pyx_L7_unpacking_done:;
- }
- __pyx_v_a = __pyx_t_3;
- __pyx_t_3 = 0;
- __pyx_v_b = __pyx_t_4;
- __pyx_t_4 = 0;
-
- /* "fontTools/cu2qu/cu2qu.py":136
- * if n == 4:
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter( # <<<<<<<<<<<<<<
- * split_cubic_into_two(a[0], a[1], a[2], a[3])
- * + split_cubic_into_two(b[0], b[1], b[2], b[3])
-*/
- __Pyx_XDECREF(__pyx_r);
-
- /* "fontTools/cu2qu/cu2qu.py":137
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter(
- * split_cubic_into_two(a[0], a[1], a[2], a[3]) # <<<<<<<<<<<<<<
- * + split_cubic_into_two(b[0], b[1], b[2], b[3])
- * )
-*/
- __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_a, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_a, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_a, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_a, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_t_7, __pyx_t_8, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 137, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
-
- /* "fontTools/cu2qu/cu2qu.py":138
- * return iter(
- * split_cubic_into_two(a[0], a[1], a[2], a[3])
- * + split_cubic_into_two(b[0], b[1], b[2], b[3]) # <<<<<<<<<<<<<<
- * )
- * if n == 6:
-*/
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_b, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_b, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_b, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_t_10, __pyx_t_9, __pyx_t_8, __pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_3 = PyNumber_Add(__pyx_t_2, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 138, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
-
- /* "fontTools/cu2qu/cu2qu.py":136
- * if n == 4:
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter( # <<<<<<<<<<<<<<
- * split_cubic_into_two(a[0], a[1], a[2], a[3])
- * + split_cubic_into_two(b[0], b[1], b[2], b[3])
-*/
- __pyx_t_4 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 136, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
- __pyx_r = __pyx_t_4;
- __pyx_t_4 = 0;
- goto __pyx_L0;
-
- /* "fontTools/cu2qu/cu2qu.py":134
- * if n == 3:
- * return iter(split_cubic_into_three(p0, p1, p2, p3))
- * if n == 4: # <<<<<<<<<<<<<<
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter(
-*/
- }
-
- /* "fontTools/cu2qu/cu2qu.py":140
- * + split_cubic_into_two(b[0], b[1], b[2], b[3])
- * )
- * if n == 6: # <<<<<<<<<<<<<<
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter(
-*/
- __pyx_t_1 = (__Pyx_PyLong_BoolEqObjC(__pyx_v_n, __pyx_mstate_global->__pyx_int_6, 6, 0)); if (unlikely((__pyx_t_1 < 0))) __PYX_ERR(0, 140, __pyx_L1_error)
- if (__pyx_t_1) {
-
- /* "fontTools/cu2qu/cu2qu.py":141
- * )
- * if n == 6:
- * a, b = split_cubic_into_two(p0, p1, p2, p3) # <<<<<<<<<<<<<<
- * return iter(
- * split_cubic_into_three(a[0], a[1], a[2], a[3])
-*/
- __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- if ((likely(PyTuple_CheckExact(__pyx_t_4))) || (PyList_CheckExact(__pyx_t_4))) {
- PyObject* sequence = __pyx_t_4;
- Py_ssize_t size = __Pyx_PySequence_SIZE(sequence);
- if (unlikely(size != 2)) {
- if (size > 2) __Pyx_RaiseTooManyValuesError(2);
- else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
- __PYX_ERR(0, 141, __pyx_L1_error)
- }
- #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
- if (likely(PyTuple_CheckExact(sequence))) {
- __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
- __Pyx_INCREF(__pyx_t_3);
- __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1);
- __Pyx_INCREF(__pyx_t_2);
- } else {
- __pyx_t_3 = __Pyx_PyList_GetItemRef(sequence, 0);
- if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_XGOTREF(__pyx_t_3);
- __pyx_t_2 = __Pyx_PyList_GetItemRef(sequence, 1);
- if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_XGOTREF(__pyx_t_2);
- }
- #else
- __pyx_t_3 = __Pyx_PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_3);
- __pyx_t_2 = __Pyx_PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_2);
- #endif
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- } else {
- Py_ssize_t index = -1;
- __pyx_t_5 = PyObject_GetIter(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 141, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_5);
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_6 = (CYTHON_COMPILING_IN_LIMITED_API) ? PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_5);
- index = 0; __pyx_t_3 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_3)) goto __pyx_L9_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_3);
- index = 1; __pyx_t_2 = __pyx_t_6(__pyx_t_5); if (unlikely(!__pyx_t_2)) goto __pyx_L9_unpacking_failed;
- __Pyx_GOTREF(__pyx_t_2);
- if (__Pyx_IternextUnpackEndCheck(__pyx_t_6(__pyx_t_5), 2) < 0) __PYX_ERR(0, 141, __pyx_L1_error)
- __pyx_t_6 = NULL;
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- goto __pyx_L10_unpacking_done;
- __pyx_L9_unpacking_failed:;
- __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
- __pyx_t_6 = NULL;
- if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index);
- __PYX_ERR(0, 141, __pyx_L1_error)
- __pyx_L10_unpacking_done:;
- }
- __pyx_v_a = __pyx_t_3;
- __pyx_t_3 = 0;
- __pyx_v_b = __pyx_t_2;
- __pyx_t_2 = 0;
-
- /* "fontTools/cu2qu/cu2qu.py":142
- * if n == 6:
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter( # <<<<<<<<<<<<<<
- * split_cubic_into_three(a[0], a[1], a[2], a[3])
- * + split_cubic_into_three(b[0], b[1], b[2], b[3])
-*/
- __Pyx_XDECREF(__pyx_r);
-
- /* "fontTools/cu2qu/cu2qu.py":143
- * a, b = split_cubic_into_two(p0, p1, p2, p3)
- * return iter(
- * split_cubic_into_three(a[0], a[1], a[2], a[3]) # <<<<<<<<<<<<<<
- * + split_cubic_into_three(b[0], b[1], b[2], b[3])
- * )
-*/
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_a, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 143, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 143, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
- __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_a, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 143, __pyx_L1_error)
- __Pyx_GOTREF(__pyx_t_4);
- __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 143, __pyx_L1_error)
- __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_a, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_GetItemInt(__pyx_v_a, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_three(__pyx_t_7, __pyx_t_8, __pyx_t_9, __pyx_t_10); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 143, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - - /* "fontTools/cu2qu/cu2qu.py":144 - * return iter( - * split_cubic_into_three(a[0], a[1], a[2], a[3]) - * + split_cubic_into_three(b[0], b[1], b[2], b[3]) # <<<<<<<<<<<<<< - * ) - * -*/ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_b, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_three(__pyx_t_10, __pyx_t_9, __pyx_t_8, __pyx_t_7); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Add(__pyx_t_4, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 144, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "fontTools/cu2qu/cu2qu.py":142 - * if n == 6: - * a, b = split_cubic_into_two(p0, p1, p2, p3) - * return iter( # <<<<<<<<<<<<<< - * split_cubic_into_three(a[0], a[1], a[2], a[3]) - * + split_cubic_into_three(b[0], b[1], b[2], b[3]) -*/ - __pyx_t_2 = PyObject_GetIter(__pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 142, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":140 - * + split_cubic_into_two(b[0], b[1], b[2], b[3]) - * ) - * if n == 6: # <<<<<<<<<<<<<< - * a, b = split_cubic_into_two(p0, p1, p2, p3) - * return iter( -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":147 - * ) - * - * return _split_cubic_into_n_gen(p0, p1, p2, p3, n) # <<<<<<<<<<<<<< - * - * -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = NULL; - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_mstate_global->__pyx_n_u_split_cubic_into_n_gen); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 147, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_11 = __pyx_PyComplex_FromComplex(__pyx_v_p1); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = __pyx_PyComplex_FromComplex(__pyx_v_p2); if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_12); - __pyx_t_13 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_13)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_13); - __pyx_t_14 = 1; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_3 = PyMethod_GET_SELF(__pyx_t_4); - assert(__pyx_t_3); - PyObject* __pyx__function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx__function); - __Pyx_DECREF_SET(__pyx_t_4, __pyx__function); - __pyx_t_14 = 0; - } - #endif - { - PyObject *__pyx_callargs[6] = {__pyx_t_3, __pyx_t_5, __pyx_t_11, __pyx_t_12, __pyx_t_13, __pyx_v_n}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_4, __pyx_callargs+__pyx_t_14, (6-__pyx_t_14) | (__pyx_t_14*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)); - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __Pyx_DECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 147, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - } - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":108 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_XDECREF(__pyx_t_12); - 
__Pyx_XDECREF(__pyx_t_13); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.split_cubic_into_n_iter", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_a); - __Pyx_XDECREF(__pyx_v_b); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static PyObject *__pyx_gb_9fontTools_5cu2qu_5cu2qu_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value); /* proto */ - -/* "fontTools/cu2qu/cu2qu.py":150 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * p0=cython.complex, - * p1=cython.complex, -*/ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_1_split_cubic_into_n_gen(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_5cu2qu_5cu2qu__split_cubic_into_n_gen, "_split_cubic_into_n_gen(double complex p0, double complex p1, double complex p2, double complex p3, int n)"); -static PyMethodDef __pyx_mdef_9fontTools_5cu2qu_5cu2qu_1_split_cubic_into_n_gen = {"_split_cubic_into_n_gen", (PyCFunction)(void(*)(void))(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_5cu2qu_5cu2qu_1_split_cubic_into_n_gen, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_5cu2qu_5cu2qu__split_cubic_into_n_gen}; -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_1_split_cubic_into_n_gen(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - __pyx_t_double_complex __pyx_v_p0; - __pyx_t_double_complex __pyx_v_p1; - __pyx_t_double_complex __pyx_v_p2; - __pyx_t_double_complex __pyx_v_p3; - int __pyx_v_n; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; 
- PyObject* values[5] = {0,0,0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_split_cubic_into_n_gen (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_SIZE - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject ** const __pyx_pyargnames[] = {&__pyx_mstate_global->__pyx_n_u_p0,&__pyx_mstate_global->__pyx_n_u_p1,&__pyx_mstate_global->__pyx_n_u_p2,&__pyx_mstate_global->__pyx_n_u_p3,&__pyx_mstate_global->__pyx_n_u_n,0}; - const Py_ssize_t __pyx_kwds_len = (__pyx_kwds) ? __Pyx_NumKwargs_FASTCALL(__pyx_kwds) : 0; - if (unlikely(__pyx_kwds_len) < 0) __PYX_ERR(0, 150, __pyx_L3_error) - if (__pyx_kwds_len > 0) { - switch (__pyx_nargs) { - case 5: - values[4] = __Pyx_ArgRef_FASTCALL(__pyx_args, 4); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[4])) __PYX_ERR(0, 150, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 4: - values[3] = __Pyx_ArgRef_FASTCALL(__pyx_args, 3); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[3])) __PYX_ERR(0, 150, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 3: - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 150, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 2: - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 150, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 1: - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 150, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (__Pyx_ParseKeywords(__pyx_kwds, 
__pyx_kwvalues, __pyx_pyargnames, 0, values, kwd_pos_args, __pyx_kwds_len, "_split_cubic_into_n_gen", 0) < 0) __PYX_ERR(0, 150, __pyx_L3_error) - for (Py_ssize_t i = __pyx_nargs; i < 5; i++) { - if (unlikely(!values[i])) { __Pyx_RaiseArgtupleInvalid("_split_cubic_into_n_gen", 1, 5, 5, i); __PYX_ERR(0, 150, __pyx_L3_error) } - } - } else if (unlikely(__pyx_nargs != 5)) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 150, __pyx_L3_error) - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 150, __pyx_L3_error) - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 150, __pyx_L3_error) - values[3] = __Pyx_ArgRef_FASTCALL(__pyx_args, 3); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[3])) __PYX_ERR(0, 150, __pyx_L3_error) - values[4] = __Pyx_ArgRef_FASTCALL(__pyx_args, 4); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[4])) __PYX_ERR(0, 150, __pyx_L3_error) - } - __pyx_v_p0 = __Pyx_PyComplex_As___pyx_t_double_complex(values[0]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 164, __pyx_L3_error) - __pyx_v_p1 = __Pyx_PyComplex_As___pyx_t_double_complex(values[1]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 164, __pyx_L3_error) - __pyx_v_p2 = __Pyx_PyComplex_As___pyx_t_double_complex(values[2]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 164, __pyx_L3_error) - __pyx_v_p3 = __Pyx_PyComplex_As___pyx_t_double_complex(values[3]); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 164, __pyx_L3_error) - __pyx_v_n = __Pyx_PyLong_As_int(values[4]); if (unlikely((__pyx_v_n == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 164, __pyx_L3_error) - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("_split_cubic_into_n_gen", 1, 5, 5, __pyx_nargs); __PYX_ERR(0, 150, __pyx_L3_error) - 
__pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu._split_cubic_into_n_gen", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5cu2qu_5cu2qu__split_cubic_into_n_gen(__pyx_self, __pyx_v_p0, __pyx_v_p1, __pyx_v_p2, __pyx_v_p3, __pyx_v_n); - - /* function exit code */ - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu__split_cubic_into_n_gen(CYTHON_UNUSED PyObject *__pyx_self, __pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3, int __pyx_v_n) { - struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *__pyx_cur_scope; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_split_cubic_into_n_gen", 0); - __pyx_cur_scope = (struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *)__pyx_tp_new_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen(__pyx_mstate_global->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen, __pyx_mstate_global->__pyx_empty_tuple, NULL); - if (unlikely(!__pyx_cur_scope)) { - __pyx_cur_scope = ((struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *)Py_None); - __Pyx_INCREF(Py_None); - __PYX_ERR(0, 150, __pyx_L1_error) - } else { - __Pyx_GOTREF((PyObject *)__pyx_cur_scope); - } - 
__pyx_cur_scope->__pyx_v_p0 = __pyx_v_p0; - __pyx_cur_scope->__pyx_v_p1 = __pyx_v_p1; - __pyx_cur_scope->__pyx_v_p2 = __pyx_v_p2; - __pyx_cur_scope->__pyx_v_p3 = __pyx_v_p3; - __pyx_cur_scope->__pyx_v_n = __pyx_v_n; - { - __pyx_CoroutineObject *gen = __Pyx_Generator_New((__pyx_coroutine_body_t) __pyx_gb_9fontTools_5cu2qu_5cu2qu_2generator, ((PyObject *)__pyx_mstate_global->__pyx_codeobj_tab[0]), (PyObject *) __pyx_cur_scope, __pyx_mstate_global->__pyx_n_u_split_cubic_into_n_gen, __pyx_mstate_global->__pyx_n_u_split_cubic_into_n_gen, __pyx_mstate_global->__pyx_n_u_fontTools_cu2qu_cu2qu); if (unlikely(!gen)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_DECREF(__pyx_cur_scope); - __Pyx_RefNannyFinishContext(); - return (PyObject *) gen; - } - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu._split_cubic_into_n_gen", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_DECREF((PyObject *)__pyx_cur_scope); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_gb_9fontTools_5cu2qu_5cu2qu_2generator(__pyx_CoroutineObject *__pyx_generator, CYTHON_UNUSED PyThreadState *__pyx_tstate, PyObject *__pyx_sent_value) /* generator body */ -{ - struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *__pyx_cur_scope = ((struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *)__pyx_generator->closure); - PyObject *__pyx_r = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *(*__pyx_t_7)(PyObject *); - __pyx_t_double_complex __pyx_t_8; - __pyx_t_double_complex __pyx_t_9; - __pyx_t_double_complex __pyx_t_10; - __pyx_t_double_complex __pyx_t_11; - int __pyx_t_12; - int __pyx_t_13; - int __pyx_t_14; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int 
__pyx_clineno = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("_split_cubic_into_n_gen", 0); - switch (__pyx_generator->resume_label) { - case 0: goto __pyx_L3_first_run; - case 1: goto __pyx_L8_resume_from_yield; - default: /* CPython raises the right error here */ - __Pyx_RefNannyFinishContext(); - return NULL; - } - __pyx_L3_first_run:; - if (unlikely(__pyx_sent_value != Py_None)) { - if (unlikely(__pyx_sent_value)) PyErr_SetString(PyExc_TypeError, "can't send non-None value to a just-started generator"); - __PYX_ERR(0, 150, __pyx_L1_error) - } - - /* "fontTools/cu2qu/cu2qu.py":165 - * ) - * def _split_cubic_into_n_gen(p0, p1, p2, p3, n): - * a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) # <<<<<<<<<<<<<< - * dt = 1 / n - * delta_2 = dt * dt -*/ - __pyx_t_1 = __pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_parameters(__pyx_cur_scope->__pyx_v_p0, __pyx_cur_scope->__pyx_v_p1, __pyx_cur_scope->__pyx_v_p2, __pyx_cur_scope->__pyx_v_p3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if ((likely(PyTuple_CheckExact(__pyx_t_1))) || (PyList_CheckExact(__pyx_t_1))) { - PyObject* sequence = __pyx_t_1; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 165, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 0); - __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 2); - __Pyx_INCREF(__pyx_t_4); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 3); - __Pyx_INCREF(__pyx_t_5); - } else { - __pyx_t_2 = __Pyx_PyList_GetItemRef(sequence, 0); - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyList_GetItemRef(sequence, 
1); - if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyList_GetItemRef(sequence, 2); - if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyList_GetItemRef(sequence, 3); - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_5); - } - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_4,&__pyx_t_5}; - for (i=0; i < 4; i++) { - PyObject* item = __Pyx_PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_2,&__pyx_t_3,&__pyx_t_4,&__pyx_t_5}; - __pyx_t_6 = PyObject_GetIter(__pyx_t_1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_7 = (CYTHON_COMPILING_IN_LIMITED_API) ? 
PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_7(__pyx_t_6); if (unlikely(!item)) goto __pyx_L4_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_7(__pyx_t_6), 4) < 0) __PYX_ERR(0, 165, __pyx_L1_error) - __pyx_t_7 = NULL; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - goto __pyx_L5_unpacking_done; - __pyx_L4_unpacking_failed:; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __pyx_t_7 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 165, __pyx_L1_error) - __pyx_L5_unpacking_done:; - } - __pyx_t_8 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_3); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_10 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_11 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 165, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_cur_scope->__pyx_v_a = __pyx_t_8; - __pyx_cur_scope->__pyx_v_b = __pyx_t_9; - __pyx_cur_scope->__pyx_v_c = __pyx_t_10; - __pyx_cur_scope->__pyx_v_d = __pyx_t_11; - - /* "fontTools/cu2qu/cu2qu.py":166 - * def _split_cubic_into_n_gen(p0, p1, p2, p3, n): - * a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) - * dt = 1 / n # <<<<<<<<<<<<<< - * delta_2 = dt * dt - * delta_3 = dt * delta_2 -*/ - if (unlikely(__pyx_cur_scope->__pyx_v_n == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 166, __pyx_L1_error) - } - __pyx_cur_scope->__pyx_v_dt = (1.0 / 
((double)__pyx_cur_scope->__pyx_v_n)); - - /* "fontTools/cu2qu/cu2qu.py":167 - * a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) - * dt = 1 / n - * delta_2 = dt * dt # <<<<<<<<<<<<<< - * delta_3 = dt * delta_2 - * for i in range(n): -*/ - __pyx_cur_scope->__pyx_v_delta_2 = (__pyx_cur_scope->__pyx_v_dt * __pyx_cur_scope->__pyx_v_dt); - - /* "fontTools/cu2qu/cu2qu.py":168 - * dt = 1 / n - * delta_2 = dt * dt - * delta_3 = dt * delta_2 # <<<<<<<<<<<<<< - * for i in range(n): - * t1 = i * dt -*/ - __pyx_cur_scope->__pyx_v_delta_3 = (__pyx_cur_scope->__pyx_v_dt * __pyx_cur_scope->__pyx_v_delta_2); - - /* "fontTools/cu2qu/cu2qu.py":169 - * delta_2 = dt * dt - * delta_3 = dt * delta_2 - * for i in range(n): # <<<<<<<<<<<<<< - * t1 = i * dt - * t1_2 = t1 * t1 -*/ - __pyx_t_12 = __pyx_cur_scope->__pyx_v_n; - __pyx_t_13 = __pyx_t_12; - for (__pyx_t_14 = 0; __pyx_t_14 < __pyx_t_13; __pyx_t_14+=1) { - __pyx_cur_scope->__pyx_v_i = __pyx_t_14; - - /* "fontTools/cu2qu/cu2qu.py":170 - * delta_3 = dt * delta_2 - * for i in range(n): - * t1 = i * dt # <<<<<<<<<<<<<< - * t1_2 = t1 * t1 - * # calc new a, b, c and d -*/ - __pyx_cur_scope->__pyx_v_t1 = (__pyx_cur_scope->__pyx_v_i * __pyx_cur_scope->__pyx_v_dt); - - /* "fontTools/cu2qu/cu2qu.py":171 - * for i in range(n): - * t1 = i * dt - * t1_2 = t1 * t1 # <<<<<<<<<<<<<< - * # calc new a, b, c and d - * a1 = a * delta_3 -*/ - __pyx_cur_scope->__pyx_v_t1_2 = (__pyx_cur_scope->__pyx_v_t1 * __pyx_cur_scope->__pyx_v_t1); - - /* "fontTools/cu2qu/cu2qu.py":173 - * t1_2 = t1 * t1 - * # calc new a, b, c and d - * a1 = a * delta_3 # <<<<<<<<<<<<<< - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt -*/ - __pyx_cur_scope->__pyx_v_a1 = __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_a, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_delta_3, 0)); - - /* "fontTools/cu2qu/cu2qu.py":174 - * # calc new a, b, c and d - * a1 = a * delta_3 - * b1 = (3 * a * t1 + b) * delta_2 # <<<<<<<<<<<<<< - * c1 = (2 * b * t1 
+ c + 3 * a * t1_2) * dt - * d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d -*/ - __pyx_cur_scope->__pyx_v_b1 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_cur_scope->__pyx_v_a), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0)), __pyx_cur_scope->__pyx_v_b), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_delta_2, 0)); - - /* "fontTools/cu2qu/cu2qu.py":175 - * a1 = a * delta_3 - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt # <<<<<<<<<<<<<< - * d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d - * yield calc_cubic_points(a1, b1, c1, d1) -*/ - __pyx_cur_scope->__pyx_v_c1 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(2, 0), __pyx_cur_scope->__pyx_v_b), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0)), __pyx_cur_scope->__pyx_v_c), __Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_cur_scope->__pyx_v_a), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_2, 0))), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_dt, 0)); - - /* "fontTools/cu2qu/cu2qu.py":176 - * b1 = (3 * a * t1 + b) * delta_2 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt - * d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d # <<<<<<<<<<<<<< - * yield calc_cubic_points(a1, b1, c1, d1) - * -*/ - __pyx_cur_scope->__pyx_v_d1 = __Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_a, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0)), __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_2, 0)), __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_b, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1_2, 0))), __Pyx_c_prod_double(__pyx_cur_scope->__pyx_v_c, __pyx_t_double_complex_from_parts(__pyx_cur_scope->__pyx_v_t1, 0))), 
__pyx_cur_scope->__pyx_v_d); - - /* "fontTools/cu2qu/cu2qu.py":177 - * c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt - * d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d - * yield calc_cubic_points(a1, b1, c1, d1) # <<<<<<<<<<<<<< - * - * -*/ - __pyx_t_1 = __pyx_f_9fontTools_5cu2qu_5cu2qu_calc_cubic_points(__pyx_cur_scope->__pyx_v_a1, __pyx_cur_scope->__pyx_v_b1, __pyx_cur_scope->__pyx_v_c1, __pyx_cur_scope->__pyx_v_d1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - __pyx_cur_scope->__pyx_t_0 = __pyx_t_12; - __pyx_cur_scope->__pyx_t_1 = __pyx_t_13; - __pyx_cur_scope->__pyx_t_2 = __pyx_t_14; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - /* return from generator, yielding value */ - __pyx_generator->resume_label = 1; - return __pyx_r; - __pyx_L8_resume_from_yield:; - __pyx_t_12 = __pyx_cur_scope->__pyx_t_0; - __pyx_t_13 = __pyx_cur_scope->__pyx_t_1; - __pyx_t_14 = __pyx_cur_scope->__pyx_t_2; - if (unlikely(!__pyx_sent_value)) __PYX_ERR(0, 177, __pyx_L1_error) - } - CYTHON_MAYBE_UNUSED_VAR(__pyx_cur_scope); - - /* "fontTools/cu2qu/cu2qu.py":150 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * p0=cython.complex, - * p1=cython.complex, -*/ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - if (__Pyx_PyErr_Occurred()) { - __Pyx_Generator_Replace_StopIteration(0); - __Pyx_AddTraceback("_split_cubic_into_n_gen", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - #if !CYTHON_USE_EXC_INFO_STACK - __Pyx_Coroutine_ResetAndClearException(__pyx_generator); - #endif - __pyx_generator->resume_label = -1; - __Pyx_Coroutine_clear((PyObject*)__pyx_generator); - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":180 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( -*/ - -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_two(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3) { - __pyx_t_double_complex __pyx_v_mid; - __pyx_t_double_complex __pyx_v_deriv3; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - __pyx_t_double_complex __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("split_cubic_into_two", 0); - - /* "fontTools/cu2qu/cu2qu.py":201 - * values). - * """ - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 # <<<<<<<<<<<<<< - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( -*/ - __pyx_v_mid = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_p0, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __Pyx_c_sum_double(__pyx_v_p1, __pyx_v_p2))), __pyx_v_p3), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/cu2qu/cu2qu.py":202 - * """ - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 # <<<<<<<<<<<<<< - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), -*/ - __pyx_v_deriv3 = __Pyx_c_prod_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_p3, __pyx_v_p2), __pyx_v_p1), __pyx_v_p0), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/cu2qu/cu2qu.py":203 - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( # <<<<<<<<<<<<<< - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), -*/ - 
__Pyx_XDECREF(__pyx_r); - - /* "fontTools/cu2qu/cu2qu.py":204 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), # <<<<<<<<<<<<<< - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - * ) -*/ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p0, __pyx_v_p1), __pyx_t_double_complex_from_parts(0.5, 0)); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = __Pyx_c_diff_double(__pyx_v_mid, __pyx_v_deriv3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_mid); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyTuple_New(4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 1, __pyx_t_3) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 2, __pyx_t_4) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_6, 3, __pyx_t_5) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":205 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), # <<<<<<<<<<<<<< - * ) - * -*/ - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_v_mid); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - 
__pyx_t_2 = __Pyx_c_sum_double(__pyx_v_mid, __pyx_v_deriv3); - __pyx_t_4 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_2 = __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p2, __pyx_v_p3), __pyx_t_double_complex_from_parts(0.5, 0)); - __pyx_t_3 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_7 = PyTuple_New(4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 205, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_5) != (0)) __PYX_ERR(0, 205, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_4); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_4) != (0)) __PYX_ERR(0, 205, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_3); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_3) != (0)) __PYX_ERR(0, 205, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 3, __pyx_t_1) != (0)) __PYX_ERR(0, 205, __pyx_L1_error); - __pyx_t_5 = 0; - __pyx_t_4 = 0; - __pyx_t_3 = 0; - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":204 - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return ( - * (p0, (p0 + p1) * 0.5, mid - deriv3, mid), # <<<<<<<<<<<<<< - * (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - * ) -*/ - __pyx_t_1 = PyTuple_New(2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_6) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_t_7) != (0)) __PYX_ERR(0, 204, __pyx_L1_error); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":180 
- * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.split_cubic_into_two", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":209 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( -*/ - -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_three(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3) { - __pyx_t_double_complex __pyx_v_mid1; - __pyx_t_double_complex __pyx_v_deriv1; - __pyx_t_double_complex __pyx_v_mid2; - __pyx_t_double_complex __pyx_v_deriv2; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - __pyx_t_double_complex __pyx_t_2; - __pyx_t_double_complex __pyx_t_3; - __pyx_t_double_complex __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("split_cubic_into_three", 0); - - /* "fontTools/cu2qu/cu2qu.py":238 - * values). 
- * """ - * mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) # <<<<<<<<<<<<<< - * deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - * mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) -*/ - __pyx_v_mid1 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(8, 0), __pyx_v_p0), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(12, 0), __pyx_v_p1)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(6, 0), __pyx_v_p2)), __pyx_v_p3), __pyx_t_double_complex_from_parts((1.0 / 27.0), 0)); - - /* "fontTools/cu2qu/cu2qu.py":239 - * """ - * mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) - * deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) # <<<<<<<<<<<<<< - * mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) -*/ - __pyx_v_deriv1 = __Pyx_c_prod_double(__Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_p3, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_v_p2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(4, 0), __pyx_v_p0)), __pyx_t_double_complex_from_parts((1.0 / 27.0), 0)); - - /* "fontTools/cu2qu/cu2qu.py":240 - * mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) - * deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - * mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) # <<<<<<<<<<<<<< - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - * return ( -*/ - __pyx_v_mid2 = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_p0, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(6, 0), __pyx_v_p1)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(12, 0), __pyx_v_p2)), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(8, 0), __pyx_v_p3)), __pyx_t_double_complex_from_parts((1.0 / 27.0), 0)); - - /* "fontTools/cu2qu/cu2qu.py":241 - * deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - * mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) # <<<<<<<<<<<<<< 
- * return ( - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), -*/ - __pyx_v_deriv2 = __Pyx_c_prod_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(4, 0), __pyx_v_p3), __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __pyx_v_p1)), __pyx_v_p0), __pyx_t_double_complex_from_parts((1.0 / 27.0), 0)); - - /* "fontTools/cu2qu/cu2qu.py":242 - * mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - * return ( # <<<<<<<<<<<<<< - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - * (mid1, mid1 + deriv1, mid2 - deriv2, mid2), -*/ - __Pyx_XDECREF(__pyx_r); - - /* "fontTools/cu2qu/cu2qu.py":243 - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - * return ( - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), # <<<<<<<<<<<<<< - * (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - * (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), -*/ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_p0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_c_sum_double(__Pyx_c_prod_double(__pyx_t_double_complex_from_parts(2, 0), __pyx_v_p0), __pyx_v_p1); - __pyx_t_3 = __pyx_t_double_complex_from_parts(3.0, 0); - if (unlikely(__Pyx_c_is_zero_double(__pyx_t_3))) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 243, __pyx_L1_error) - } - __pyx_t_4 = __Pyx_c_quot_double(__pyx_t_2, __pyx_t_3); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = __Pyx_c_diff_double(__pyx_v_mid1, __pyx_v_deriv1); - __pyx_t_6 = __pyx_PyComplex_FromComplex(__pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_v_mid1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_8 = PyTuple_New(4); if 
(unlikely(!__pyx_t_8)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_5) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_6) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_8, 3, __pyx_t_7) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_7 = 0; - - /* "fontTools/cu2qu/cu2qu.py":244 - * return ( - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - * (mid1, mid1 + deriv1, mid2 - deriv2, mid2), # <<<<<<<<<<<<<< - * (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), - * ) -*/ - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_v_mid1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_4 = __Pyx_c_sum_double(__pyx_v_mid1, __pyx_v_deriv1); - __pyx_t_6 = __pyx_PyComplex_FromComplex(__pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_4 = __Pyx_c_diff_double(__pyx_v_mid2, __pyx_v_deriv2); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_mid2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = PyTuple_New(4); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 244, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7) != (0)) __PYX_ERR(0, 244, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 1, __pyx_t_6) != (0)) __PYX_ERR(0, 244, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if 
(__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 2, __pyx_t_5) != (0)) __PYX_ERR(0, 244, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_9, 3, __pyx_t_1) != (0)) __PYX_ERR(0, 244, __pyx_L1_error); - __pyx_t_7 = 0; - __pyx_t_6 = 0; - __pyx_t_5 = 0; - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":245 - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - * (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - * (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), # <<<<<<<<<<<<<< - * ) - * -*/ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_mid2); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_c_sum_double(__pyx_v_mid2, __pyx_v_deriv2); - __pyx_t_5 = __pyx_PyComplex_FromComplex(__pyx_t_4); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_4 = __Pyx_c_sum_double(__pyx_v_p2, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(2, 0), __pyx_v_p3)); - __pyx_t_3 = __pyx_t_double_complex_from_parts(3.0, 0); - if (unlikely(__Pyx_c_is_zero_double(__pyx_t_3))) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 245, __pyx_L1_error) - } - __pyx_t_2 = __Pyx_c_quot_double(__pyx_t_4, __pyx_t_3); - __pyx_t_6 = __pyx_PyComplex_FromComplex(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __pyx_PyComplex_FromComplex(__pyx_v_p3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_10 = PyTuple_New(4); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 245, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 245, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_5) != (0)) __PYX_ERR(0, 245, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 2, __pyx_t_6) != (0)) 
__PYX_ERR(0, 245, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_7); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_10, 3, __pyx_t_7) != (0)) __PYX_ERR(0, 245, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_7 = 0; - - /* "fontTools/cu2qu/cu2qu.py":243 - * deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - * return ( - * (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), # <<<<<<<<<<<<<< - * (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - * (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), -*/ - __pyx_t_7 = PyTuple_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 243, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_8); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_8) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_9); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_9) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_10); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_10) != (0)) __PYX_ERR(0, 243, __pyx_L1_error); - __pyx_t_8 = 0; - __pyx_t_9 = 0; - __pyx_t_10 = 0; - __pyx_r = __pyx_t_7; - __pyx_t_7 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":209 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals( -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.split_cubic_into_three", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":249 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.complex) -*/ - -static CYTHON_INLINE __pyx_t_double_complex __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_control(double __pyx_v_t, __pyx_t_double_complex __pyx_v_p0, 
__pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3) { - __pyx_t_double_complex __pyx_v__p1; - __pyx_t_double_complex __pyx_v__p2; - __pyx_t_double_complex __pyx_r; - - /* "fontTools/cu2qu/cu2qu.py":273 - * complex: Location of candidate control point on quadratic curve. - * """ - * _p1 = p0 + (p1 - p0) * 1.5 # <<<<<<<<<<<<<< - * _p2 = p3 + (p2 - p3) * 1.5 - * return _p1 + (_p2 - _p1) * t -*/ - __pyx_v__p1 = __Pyx_c_sum_double(__pyx_v_p0, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_p1, __pyx_v_p0), __pyx_t_double_complex_from_parts(1.5, 0))); - - /* "fontTools/cu2qu/cu2qu.py":274 - * """ - * _p1 = p0 + (p1 - p0) * 1.5 - * _p2 = p3 + (p2 - p3) * 1.5 # <<<<<<<<<<<<<< - * return _p1 + (_p2 - _p1) * t - * -*/ - __pyx_v__p2 = __Pyx_c_sum_double(__pyx_v_p3, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_p2, __pyx_v_p3), __pyx_t_double_complex_from_parts(1.5, 0))); - - /* "fontTools/cu2qu/cu2qu.py":275 - * _p1 = p0 + (p1 - p0) * 1.5 - * _p2 = p3 + (p2 - p3) * 1.5 - * return _p1 + (_p2 - _p1) * t # <<<<<<<<<<<<<< - * - * -*/ - __pyx_r = __Pyx_c_sum_double(__pyx_v__p1, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v__p2, __pyx_v__p1), __pyx_t_double_complex_from_parts(__pyx_v_t, 0))); - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":249 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.complex) -*/ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":278 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.complex) -*/ - -static CYTHON_INLINE __pyx_t_double_complex __pyx_f_9fontTools_5cu2qu_5cu2qu_calc_intersect(__pyx_t_double_complex __pyx_v_a, __pyx_t_double_complex __pyx_v_b, __pyx_t_double_complex __pyx_v_c, __pyx_t_double_complex __pyx_v_d) { - __pyx_t_double_complex __pyx_v_ab; - __pyx_t_double_complex __pyx_v_cd; - __pyx_t_double_complex __pyx_v_p; - double __pyx_v_h; - __pyx_t_double_complex 
__pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - double __pyx_t_4; - double __pyx_t_5; - int __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - int __pyx_t_10; - int __pyx_t_11; - PyObject *__pyx_t_12 = NULL; - PyObject *__pyx_t_13 = NULL; - PyObject *__pyx_t_14 = NULL; - PyObject *__pyx_t_15 = NULL; - PyObject *__pyx_t_16 = NULL; - size_t __pyx_t_17; - __pyx_t_double_complex __pyx_t_18; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("calc_intersect", 0); - - /* "fontTools/cu2qu/cu2qu.py":296 - * if no intersection was found. - * """ - * ab = b - a # <<<<<<<<<<<<<< - * cd = d - c - * p = ab * 1j -*/ - __pyx_v_ab = __Pyx_c_diff_double(__pyx_v_b, __pyx_v_a); - - /* "fontTools/cu2qu/cu2qu.py":297 - * """ - * ab = b - a - * cd = d - c # <<<<<<<<<<<<<< - * p = ab * 1j - * try: -*/ - __pyx_v_cd = __Pyx_c_diff_double(__pyx_v_d, __pyx_v_c); - - /* "fontTools/cu2qu/cu2qu.py":298 - * ab = b - a - * cd = d - c - * p = ab * 1j # <<<<<<<<<<<<<< - * try: - * h = dot(p, a - c) / dot(p, cd) -*/ - __pyx_v_p = __Pyx_c_prod_double(__pyx_v_ab, __pyx_t_double_complex_from_parts(0, 1.0)); - - /* "fontTools/cu2qu/cu2qu.py":299 - * cd = d - c - * p = ab * 1j - * try: # <<<<<<<<<<<<<< - * h = dot(p, a - c) / dot(p, cd) - * except ZeroDivisionError: -*/ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_1, &__pyx_t_2, &__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_1); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - /*try:*/ { - - /* "fontTools/cu2qu/cu2qu.py":300 - * p = ab * 1j - * try: - * h = dot(p, a - c) / dot(p, cd) # <<<<<<<<<<<<<< - * except ZeroDivisionError: - * # if 3 or 4 points are equal, we do have an intersection despite the zero-div: -*/ - __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_dot(__pyx_v_p, __Pyx_c_diff_double(__pyx_v_a, 
__pyx_v_c)); if (unlikely(__pyx_t_4 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 300, __pyx_L3_error) - __pyx_t_5 = __pyx_f_9fontTools_5cu2qu_5cu2qu_dot(__pyx_v_p, __pyx_v_cd); if (unlikely(__pyx_t_5 == ((double)-1) && PyErr_Occurred())) __PYX_ERR(0, 300, __pyx_L3_error) - if (unlikely(__pyx_t_5 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 300, __pyx_L3_error) - } - __pyx_v_h = (__pyx_t_4 / __pyx_t_5); - - /* "fontTools/cu2qu/cu2qu.py":299 - * cd = d - c - * p = ab * 1j - * try: # <<<<<<<<<<<<<< - * h = dot(p, a - c) / dot(p, cd) - * except ZeroDivisionError: -*/ - } - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - goto __pyx_L8_try_end; - __pyx_L3_error:; - - /* "fontTools/cu2qu/cu2qu.py":301 - * try: - * h = dot(p, a - c) / dot(p, cd) - * except ZeroDivisionError: # <<<<<<<<<<<<<< - * # if 3 or 4 points are equal, we do have an intersection despite the zero-div: - * # return one of the off-curves so that the algorithm can attempt a one-curve -*/ - __pyx_t_6 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_ZeroDivisionError); - if (__pyx_t_6) { - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.calc_intersect", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0) __PYX_ERR(0, 301, __pyx_L5_except_error) - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - - /* "fontTools/cu2qu/cu2qu.py":306 - * # solution if it's within tolerance: - * # https://github.com/linebender/kurbo/pull/484 - * if b == c and (a == b or c == d): # <<<<<<<<<<<<<< - * return b - * return complex(NAN, NAN) -*/ - __pyx_t_11 = (__Pyx_c_eq_double(__pyx_v_b, __pyx_v_c)); - if (__pyx_t_11) { - } else { - __pyx_t_10 = __pyx_t_11; - goto __pyx_L12_bool_binop_done; - } - __pyx_t_11 = (__Pyx_c_eq_double(__pyx_v_a, __pyx_v_b)); - if (!__pyx_t_11) { - } else { - __pyx_t_10 = __pyx_t_11; - goto 
__pyx_L12_bool_binop_done; - } - __pyx_t_11 = (__Pyx_c_eq_double(__pyx_v_c, __pyx_v_d)); - __pyx_t_10 = __pyx_t_11; - __pyx_L12_bool_binop_done:; - if (__pyx_t_10) { - - /* "fontTools/cu2qu/cu2qu.py":307 - * # https://github.com/linebender/kurbo/pull/484 - * if b == c and (a == b or c == d): - * return b # <<<<<<<<<<<<<< - * return complex(NAN, NAN) - * return c + cd * h -*/ - __pyx_r = __pyx_v_b; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L6_except_return; - - /* "fontTools/cu2qu/cu2qu.py":306 - * # solution if it's within tolerance: - * # https://github.com/linebender/kurbo/pull/484 - * if b == c and (a == b or c == d): # <<<<<<<<<<<<<< - * return b - * return complex(NAN, NAN) -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":308 - * if b == c and (a == b or c == d): - * return b - * return complex(NAN, NAN) # <<<<<<<<<<<<<< - * return c + cd * h - * -*/ - __pyx_t_13 = NULL; - __Pyx_INCREF((PyObject *)(&PyComplex_Type)); - __pyx_t_14 = ((PyObject *)(&PyComplex_Type)); - __Pyx_GetModuleGlobalName(__pyx_t_15, __pyx_mstate_global->__pyx_n_u_NAN); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 308, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_GetModuleGlobalName(__pyx_t_16, __pyx_mstate_global->__pyx_n_u_NAN); if (unlikely(!__pyx_t_16)) __PYX_ERR(0, 308, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_16); - __pyx_t_17 = 1; - { - PyObject *__pyx_callargs[3] = {__pyx_t_13, __pyx_t_15, __pyx_t_16}; - __pyx_t_12 = __Pyx_PyObject_FastCall(__pyx_t_14, __pyx_callargs+__pyx_t_17, (3-__pyx_t_17) | (__pyx_t_17*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)); - __Pyx_XDECREF(__pyx_t_13); __pyx_t_13 = 0; - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - __Pyx_DECREF(__pyx_t_16); __pyx_t_16 = 0; - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - if (unlikely(!__pyx_t_12)) __PYX_ERR(0, 308, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_12); - } - __pyx_t_18 = 
__Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_12); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 308, __pyx_L5_except_error) - __Pyx_DECREF(__pyx_t_12); __pyx_t_12 = 0; - __pyx_r = __pyx_t_18; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L6_except_return; - } - goto __pyx_L5_except_error; - - /* "fontTools/cu2qu/cu2qu.py":299 - * cd = d - c - * p = ab * 1j - * try: # <<<<<<<<<<<<<< - * h = dot(p, a - c) / dot(p, cd) - * except ZeroDivisionError: -*/ - __pyx_L5_except_error:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_1); - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_ExceptionReset(__pyx_t_1, __pyx_t_2, __pyx_t_3); - goto __pyx_L0; - __pyx_L8_try_end:; - } - - /* "fontTools/cu2qu/cu2qu.py":309 - * return b - * return complex(NAN, NAN) - * return c + cd * h # <<<<<<<<<<<<<< - * - * -*/ - __pyx_r = __Pyx_c_sum_double(__pyx_v_c, __Pyx_c_prod_double(__pyx_v_cd, __pyx_t_double_complex_from_parts(__pyx_v_h, 0))); - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":278 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.returns(cython.complex) -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_12); - __Pyx_XDECREF(__pyx_t_13); - __Pyx_XDECREF(__pyx_t_14); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_XDECREF(__pyx_t_16); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.calc_intersect", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = __pyx_t_double_complex_from_parts(0, 0); - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":312 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.returns(cython.int) - * 
@cython.locals( -*/ - -static int __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_t_double_complex __pyx_v_p0, __pyx_t_double_complex __pyx_v_p1, __pyx_t_double_complex __pyx_v_p2, __pyx_t_double_complex __pyx_v_p3, double __pyx_v_tolerance) { - __pyx_t_double_complex __pyx_v_mid; - __pyx_t_double_complex __pyx_v_deriv3; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "fontTools/cu2qu/cu2qu.py":341 - * """ - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: # <<<<<<<<<<<<<< - * return True - * -*/ - __pyx_t_2 = (__Pyx_c_abs_double(__pyx_v_p2) <= __pyx_v_tolerance); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__Pyx_c_abs_double(__pyx_v_p1) <= __pyx_v_tolerance); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":342 - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: - * return True # <<<<<<<<<<<<<< - * - * # Split. -*/ - __pyx_r = 1; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":341 - * """ - * # First check p2 then p1, as p2 has higher error early on. - * if abs(p2) <= tolerance and abs(p1) <= tolerance: # <<<<<<<<<<<<<< - * return True - * -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":345 - * - * # Split. - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 # <<<<<<<<<<<<<< - * if abs(mid) > tolerance: - * return False -*/ - __pyx_v_mid = __Pyx_c_prod_double(__Pyx_c_sum_double(__Pyx_c_sum_double(__pyx_v_p0, __Pyx_c_prod_double(__pyx_t_double_complex_from_parts(3, 0), __Pyx_c_sum_double(__pyx_v_p1, __pyx_v_p2))), __pyx_v_p3), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/cu2qu/cu2qu.py":346 - * # Split. 
- * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: # <<<<<<<<<<<<<< - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 -*/ - __pyx_t_1 = (__Pyx_c_abs_double(__pyx_v_mid) > __pyx_v_tolerance); - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":347 - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: - * return False # <<<<<<<<<<<<<< - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return cubic_farthest_fit_inside( -*/ - __pyx_r = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":346 - * # Split. - * mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - * if abs(mid) > tolerance: # <<<<<<<<<<<<<< - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":348 - * if abs(mid) > tolerance: - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 # <<<<<<<<<<<<<< - * return cubic_farthest_fit_inside( - * p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance -*/ - __pyx_v_deriv3 = __Pyx_c_prod_double(__Pyx_c_diff_double(__Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_p3, __pyx_v_p2), __pyx_v_p1), __pyx_v_p0), __pyx_t_double_complex_from_parts(0.125, 0)); - - /* "fontTools/cu2qu/cu2qu.py":349 - * return False - * deriv3 = (p3 + p2 - p1 - p0) * 0.125 - * return cubic_farthest_fit_inside( # <<<<<<<<<<<<<< - * p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - * ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) -*/ - __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_v_p0, __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p0, __pyx_v_p1), __pyx_t_double_complex_from_parts(0.5, 0)), __Pyx_c_diff_double(__pyx_v_mid, __pyx_v_deriv3), __pyx_v_mid, __pyx_v_tolerance); if (unlikely(__pyx_t_4 == ((int)-1) && PyErr_Occurred())) __PYX_ERR(0, 349, __pyx_L1_error) - if (__pyx_t_4) { - } else { - __pyx_t_3 = __pyx_t_4; - goto __pyx_L7_bool_binop_done; - } - - /* "fontTools/cu2qu/cu2qu.py":351 - * return cubic_farthest_fit_inside( - * p0, (p0 + p1) * 0.5, 
mid - deriv3, mid, tolerance - * ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) # <<<<<<<<<<<<<< - * - * -*/ - __pyx_t_4 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_v_mid, __Pyx_c_sum_double(__pyx_v_mid, __pyx_v_deriv3), __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_p2, __pyx_v_p3), __pyx_t_double_complex_from_parts(0.5, 0)), __pyx_v_p3, __pyx_v_tolerance); if (unlikely(__pyx_t_4 == ((int)-1) && PyErr_Occurred())) __PYX_ERR(0, 351, __pyx_L1_error) - __pyx_t_3 = __pyx_t_4; - __pyx_L7_bool_binop_done:; - __pyx_r = __pyx_t_3; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":312 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.returns(cython.int) - * @cython.locals( -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.cubic_farthest_fit_inside", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":354 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals(tolerance=cython.double) -*/ - -static CYTHON_INLINE PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_quadratic(PyObject *__pyx_v_cubic, double __pyx_v_tolerance) { - __pyx_t_double_complex __pyx_v_q1; - __pyx_t_double_complex __pyx_v_c0; - __pyx_t_double_complex __pyx_v_c1; - __pyx_t_double_complex __pyx_v_c2; - __pyx_t_double_complex __pyx_v_c3; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - __pyx_t_double_complex __pyx_t_2; - __pyx_t_double_complex __pyx_t_3; - __pyx_t_double_complex __pyx_t_4; - __pyx_t_double_complex __pyx_t_5; - __pyx_t_double_complex __pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("cubic_approx_quadratic", 0); - - /* 
"fontTools/cu2qu/cu2qu.py":378 - * """ - * - * q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) # <<<<<<<<<<<<<< - * if math.isnan(q1.imag): - * return None -*/ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_6 = __pyx_f_9fontTools_5cu2qu_5cu2qu_calc_intersect(__pyx_t_2, __pyx_t_3, __pyx_t_4, __pyx_t_5); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 378, __pyx_L1_error) - __pyx_v_q1 = __pyx_t_6; - - /* "fontTools/cu2qu/cu2qu.py":379 - * - * q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - * if math.isnan(q1.imag): # <<<<<<<<<<<<<< - * return None - * c0 = cubic[0] -*/ - 
__pyx_t_7 = NULL; - __Pyx_GetModuleGlobalName(__pyx_t_8, __pyx_mstate_global->__pyx_n_u_math); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_mstate_global->__pyx_n_u_isnan); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = PyFloat_FromDouble(__Pyx_CIMAG(__pyx_v_q1)); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_10 = 1; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_9))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_9); - assert(__pyx_t_7); - PyObject* __pyx__function = PyMethod_GET_FUNCTION(__pyx_t_9); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(__pyx__function); - __Pyx_DECREF_SET(__pyx_t_9, __pyx__function); - __pyx_t_10 = 0; - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_7, __pyx_t_8}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_9, __pyx_callargs+__pyx_t_10, (2-__pyx_t_10) | (__pyx_t_10*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)); - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - } - __pyx_t_11 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_11 < 0))) __PYX_ERR(0, 379, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_11) { - - /* "fontTools/cu2qu/cu2qu.py":380 - * q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - * if math.isnan(q1.imag): - * return None # <<<<<<<<<<<<<< - * c0 = cubic[0] - * c3 = cubic[3] -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":379 - * - * q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - * if math.isnan(q1.imag): # <<<<<<<<<<<<<< - * return None - * c0 = cubic[0] -*/ - } - - /* 
"fontTools/cu2qu/cu2qu.py":381 - * if math.isnan(q1.imag): - * return None - * c0 = cubic[0] # <<<<<<<<<<<<<< - * c3 = cubic[3] - * c1 = c0 + (q1 - c0) * (2 / 3) -*/ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 381, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_c0 = __pyx_t_6; - - /* "fontTools/cu2qu/cu2qu.py":382 - * return None - * c0 = cubic[0] - * c3 = cubic[3] # <<<<<<<<<<<<<< - * c1 = c0 + (q1 - c0) * (2 / 3) - * c2 = c3 + (q1 - c3) * (2 / 3) -*/ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 382, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v_c3 = __pyx_t_6; - - /* "fontTools/cu2qu/cu2qu.py":383 - * c0 = cubic[0] - * c3 = cubic[3] - * c1 = c0 + (q1 - c0) * (2 / 3) # <<<<<<<<<<<<<< - * c2 = c3 + (q1 - c3) * (2 / 3) - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): -*/ - __pyx_v_c1 = __Pyx_c_sum_double(__pyx_v_c0, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_q1, __pyx_v_c0), __pyx_t_double_complex_from_parts((2.0 / 3.0), 0))); - - /* "fontTools/cu2qu/cu2qu.py":384 - * c3 = cubic[3] - * c1 = c0 + (q1 - c0) * (2 / 3) - * c2 = c3 + (q1 - c3) * (2 / 3) # <<<<<<<<<<<<<< - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - * return None -*/ - __pyx_v_c2 = __Pyx_c_sum_double(__pyx_v_c3, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_q1, __pyx_v_c3), __pyx_t_double_complex_from_parts((2.0 / 3.0), 0))); - - /* "fontTools/cu2qu/cu2qu.py":385 - * c1 = c0 + (q1 - c0) * 
(2 / 3) - * c2 = c3 + (q1 - c3) * (2 / 3) - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): # <<<<<<<<<<<<<< - * return None - * return c0, q1, c3 -*/ - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_c1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_cubic, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_8 = PyNumber_Subtract(__pyx_t_1, __pyx_t_9); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __pyx_PyComplex_FromComplex(__pyx_v_c2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_GetItemInt(__pyx_v_cubic, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = PyNumber_Subtract(__pyx_t_8, __pyx_t_9); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_1); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 385, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_12 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_t_double_complex_from_parts(0, 0), __pyx_t_6, __pyx_t_5, __pyx_t_double_complex_from_parts(0, 0), __pyx_v_tolerance); if (unlikely(__pyx_t_12 == ((int)-1) && PyErr_Occurred())) __PYX_ERR(0, 385, __pyx_L1_error) - __pyx_t_11 = (!(__pyx_t_12 != 0)); - if (__pyx_t_11) { - - /* 
"fontTools/cu2qu/cu2qu.py":386 - * c2 = c3 + (q1 - c3) * (2 / 3) - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - * return None # <<<<<<<<<<<<<< - * return c0, q1, c3 - * -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":385 - * c1 = c0 + (q1 - c0) * (2 / 3) - * c2 = c3 + (q1 - c3) * (2 / 3) - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): # <<<<<<<<<<<<<< - * return None - * return c0, q1, c3 -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":387 - * if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - * return None - * return c0, q1, c3 # <<<<<<<<<<<<<< - * - * -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_PyComplex_FromComplex(__pyx_v_c0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 387, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_9 = __pyx_PyComplex_FromComplex(__pyx_v_q1); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 387, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_8 = __pyx_PyComplex_FromComplex(__pyx_v_c3); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 387, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = PyTuple_New(3); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 387, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GIVEREF(__pyx_t_1); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1) != (0)) __PYX_ERR(0, 387, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_9); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 1, __pyx_t_9) != (0)) __PYX_ERR(0, 387, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_8); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_7, 2, __pyx_t_8) != (0)) __PYX_ERR(0, 387, __pyx_L1_error); - __pyx_t_1 = 0; - __pyx_t_9 = 0; - __pyx_t_8 = 0; - __pyx_r = __pyx_t_7; - __pyx_t_7 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":354 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.inline - * @cython.locals(tolerance=cython.double) -*/ - - /* function exit code */ - __pyx_L1_error:; - 
__Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.cubic_approx_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":390 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int, tolerance=cython.double) - * @cython.locals(i=cython.int) -*/ - -static PyObject *__pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_spline(PyObject *__pyx_v_cubic, int __pyx_v_n, double __pyx_v_tolerance, int __pyx_v_all_quadratic) { - __pyx_t_double_complex __pyx_v_q0; - __pyx_t_double_complex __pyx_v_q1; - __pyx_t_double_complex __pyx_v_next_q1; - __pyx_t_double_complex __pyx_v_q2; - __pyx_t_double_complex __pyx_v_d1; - CYTHON_UNUSED __pyx_t_double_complex __pyx_v_c0; - __pyx_t_double_complex __pyx_v_c1; - __pyx_t_double_complex __pyx_v_c2; - __pyx_t_double_complex __pyx_v_c3; - int __pyx_v_i; - PyObject *__pyx_v_cubics = NULL; - PyObject *__pyx_v_next_cubic = NULL; - PyObject *__pyx_v_spline = NULL; - __pyx_t_double_complex __pyx_v_d0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_t_3; - __pyx_t_double_complex __pyx_t_4; - __pyx_t_double_complex __pyx_t_5; - __pyx_t_double_complex __pyx_t_6; - __pyx_t_double_complex __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - __pyx_t_double_complex __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - long __pyx_t_11; - long __pyx_t_12; - int __pyx_t_13; - PyObject *__pyx_t_14 = NULL; - PyObject *__pyx_t_15 = NULL; - PyObject *(*__pyx_t_16)(PyObject *); - long __pyx_t_17; - int __pyx_t_18; - int __pyx_t_19; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("cubic_approx_spline", 0); - - /* "fontTools/cu2qu/cu2qu.py":419 - * """ - * - * if n == 1: # <<<<<<<<<<<<<< - * 
return cubic_approx_quadratic(cubic, tolerance) - * if n == 2 and all_quadratic == False: -*/ - __pyx_t_1 = (__pyx_v_n == 1); - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":420 - * - * if n == 1: - * return cubic_approx_quadratic(cubic, tolerance) # <<<<<<<<<<<<<< - * if n == 2 and all_quadratic == False: - * return cubic -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_quadratic(__pyx_v_cubic, __pyx_v_tolerance); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":419 - * """ - * - * if n == 1: # <<<<<<<<<<<<<< - * return cubic_approx_quadratic(cubic, tolerance) - * if n == 2 and all_quadratic == False: -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":421 - * if n == 1: - * return cubic_approx_quadratic(cubic, tolerance) - * if n == 2 and all_quadratic == False: # <<<<<<<<<<<<<< - * return cubic - * -*/ - __pyx_t_3 = (__pyx_v_n == 2); - if (__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_all_quadratic == 0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":422 - * return cubic_approx_quadratic(cubic, tolerance) - * if n == 2 and all_quadratic == False: - * return cubic # <<<<<<<<<<<<<< - * - * cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n) -*/ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_cubic); - __pyx_r = __pyx_v_cubic; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":421 - * if n == 1: - * return cubic_approx_quadratic(cubic, tolerance) - * if n == 2 and all_quadratic == False: # <<<<<<<<<<<<<< - * return cubic - * -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":424 - * return cubic - * - * cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n) # <<<<<<<<<<<<<< - * - * # calculate the spline of quadratics and check errors at the 
same time. -*/ - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_cubic, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_cubic, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyLong_From_int(__pyx_v_n); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_8 = __pyx_f_9fontTools_5cu2qu_5cu2qu_split_cubic_into_n_iter(__pyx_t_4, __pyx_t_5, __pyx_t_6, __pyx_t_7, __pyx_t_2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 424, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_v_cubics = __pyx_t_8; - __pyx_t_8 = 0; - - /* "fontTools/cu2qu/cu2qu.py":427 - * - * # calculate the spline of quadratics and check errors at the same 
time. - * next_cubic = next(cubics) # <<<<<<<<<<<<<< - * next_q1 = cubic_approx_control( - * 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] -*/ - __pyx_t_8 = __Pyx_PyIter_Next(__pyx_v_cubics); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_v_next_cubic = __pyx_t_8; - __pyx_t_8 = 0; - - /* "fontTools/cu2qu/cu2qu.py":429 - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( - * 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] # <<<<<<<<<<<<<< - * ) - * q2 = cubic[0] -*/ - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_next_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_7 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_next_cubic, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_next_cubic, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_next_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 429, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); 
__pyx_t_8 = 0; - - /* "fontTools/cu2qu/cu2qu.py":428 - * # calculate the spline of quadratics and check errors at the same time. - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( # <<<<<<<<<<<<<< - * 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - * ) -*/ - __pyx_t_9 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_control(0.0, __pyx_t_7, __pyx_t_6, __pyx_t_5, __pyx_t_4); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 428, __pyx_L1_error) - __pyx_v_next_q1 = __pyx_t_9; - - /* "fontTools/cu2qu/cu2qu.py":431 - * 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - * ) - * q2 = cubic[0] # <<<<<<<<<<<<<< - * d1 = 0j - * spline = [cubic[0], next_q1] -*/ - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 431, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 431, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_v_q2 = __pyx_t_9; - - /* "fontTools/cu2qu/cu2qu.py":432 - * ) - * q2 = cubic[0] - * d1 = 0j # <<<<<<<<<<<<<< - * spline = [cubic[0], next_q1] - * for i in range(1, n + 1): -*/ - __pyx_v_d1 = __pyx_t_double_complex_from_parts(0, 0.0); - - /* "fontTools/cu2qu/cu2qu.py":433 - * q2 = cubic[0] - * d1 = 0j - * spline = [cubic[0], next_q1] # <<<<<<<<<<<<<< - * for i in range(1, n + 1): - * # Current cubic to convert -*/ - __pyx_t_8 = __Pyx_GetItemInt(__pyx_v_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 433, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - __pyx_t_2 = __pyx_PyComplex_FromComplex(__pyx_v_next_q1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 433, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_10 = PyList_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 433, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_8); - if (__Pyx_PyList_SET_ITEM(__pyx_t_10, 0, 
__pyx_t_8) != (0)) __PYX_ERR(0, 433, __pyx_L1_error); - __Pyx_GIVEREF(__pyx_t_2); - if (__Pyx_PyList_SET_ITEM(__pyx_t_10, 1, __pyx_t_2) != (0)) __PYX_ERR(0, 433, __pyx_L1_error); - __pyx_t_8 = 0; - __pyx_t_2 = 0; - __pyx_v_spline = ((PyObject*)__pyx_t_10); - __pyx_t_10 = 0; - - /* "fontTools/cu2qu/cu2qu.py":434 - * d1 = 0j - * spline = [cubic[0], next_q1] - * for i in range(1, n + 1): # <<<<<<<<<<<<<< - * # Current cubic to convert - * c0, c1, c2, c3 = next_cubic -*/ - __pyx_t_11 = (__pyx_v_n + 1); - __pyx_t_12 = __pyx_t_11; - for (__pyx_t_13 = 1; __pyx_t_13 < __pyx_t_12; __pyx_t_13+=1) { - __pyx_v_i = __pyx_t_13; - - /* "fontTools/cu2qu/cu2qu.py":436 - * for i in range(1, n + 1): - * # Current cubic to convert - * c0, c1, c2, c3 = next_cubic # <<<<<<<<<<<<<< - * - * # Current quadratic approximation of current cubic -*/ - if ((likely(PyTuple_CheckExact(__pyx_v_next_cubic))) || (PyList_CheckExact(__pyx_v_next_cubic))) { - PyObject* sequence = __pyx_v_next_cubic; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 4)) { - if (size > 4) __Pyx_RaiseTooManyValuesError(4); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(0, 436, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - if (likely(PyTuple_CheckExact(sequence))) { - __pyx_t_10 = PyTuple_GET_ITEM(sequence, 0); - __Pyx_INCREF(__pyx_t_10); - __pyx_t_2 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_2); - __pyx_t_8 = PyTuple_GET_ITEM(sequence, 2); - __Pyx_INCREF(__pyx_t_8); - __pyx_t_14 = PyTuple_GET_ITEM(sequence, 3); - __Pyx_INCREF(__pyx_t_14); - } else { - __pyx_t_10 = __Pyx_PyList_GetItemRef(sequence, 0); - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_10); - __pyx_t_2 = __Pyx_PyList_GetItemRef(sequence, 1); - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_2); - __pyx_t_8 = __Pyx_PyList_GetItemRef(sequence, 2); - if (unlikely(!__pyx_t_8)) 
__PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_8); - __pyx_t_14 = __Pyx_PyList_GetItemRef(sequence, 3); - if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_XGOTREF(__pyx_t_14); - } - #else - { - Py_ssize_t i; - PyObject** temps[4] = {&__pyx_t_10,&__pyx_t_2,&__pyx_t_8,&__pyx_t_14}; - for (i=0; i < 4; i++) { - PyObject* item = __Pyx_PySequence_ITEM(sequence, i); if (unlikely(!item)) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_GOTREF(item); - *(temps[i]) = item; - } - } - #endif - } else { - Py_ssize_t index = -1; - PyObject** temps[4] = {&__pyx_t_10,&__pyx_t_2,&__pyx_t_8,&__pyx_t_14}; - __pyx_t_15 = PyObject_GetIter(__pyx_v_next_cubic); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_15); - __pyx_t_16 = (CYTHON_COMPILING_IN_LIMITED_API) ? PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_15); - for (index=0; index < 4; index++) { - PyObject* item = __pyx_t_16(__pyx_t_15); if (unlikely(!item)) goto __pyx_L9_unpacking_failed; - __Pyx_GOTREF(item); - *(temps[index]) = item; - } - if (__Pyx_IternextUnpackEndCheck(__pyx_t_16(__pyx_t_15), 4) < 0) __PYX_ERR(0, 436, __pyx_L1_error) - __pyx_t_16 = NULL; - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - goto __pyx_L10_unpacking_done; - __pyx_L9_unpacking_failed:; - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - __pyx_t_16 = NULL; - if (__Pyx_IterFinish() == 0) __Pyx_RaiseNeedMoreValuesError(index); - __PYX_ERR(0, 436, __pyx_L1_error) - __pyx_L10_unpacking_done:; - } - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_10); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_2); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_8); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 436, __pyx_L1_error) - 
__Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_14); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 436, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_v_c0 = __pyx_t_9; - __pyx_v_c1 = __pyx_t_4; - __pyx_v_c2 = __pyx_t_5; - __pyx_v_c3 = __pyx_t_6; - - /* "fontTools/cu2qu/cu2qu.py":439 - * - * # Current quadratic approximation of current cubic - * q0 = q2 # <<<<<<<<<<<<<< - * q1 = next_q1 - * if i < n: -*/ - __pyx_v_q0 = __pyx_v_q2; - - /* "fontTools/cu2qu/cu2qu.py":440 - * # Current quadratic approximation of current cubic - * q0 = q2 - * q1 = next_q1 # <<<<<<<<<<<<<< - * if i < n: - * next_cubic = next(cubics) -*/ - __pyx_v_q1 = __pyx_v_next_q1; - - /* "fontTools/cu2qu/cu2qu.py":441 - * q0 = q2 - * q1 = next_q1 - * if i < n: # <<<<<<<<<<<<<< - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( -*/ - __pyx_t_1 = (__pyx_v_i < __pyx_v_n); - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":442 - * q1 = next_q1 - * if i < n: - * next_cubic = next(cubics) # <<<<<<<<<<<<<< - * next_q1 = cubic_approx_control( - * i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] -*/ - __pyx_t_14 = __Pyx_PyIter_Next(__pyx_v_cubics); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 442, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __Pyx_DECREF_SET(__pyx_v_next_cubic, __pyx_t_14); - __pyx_t_14 = 0; - - /* "fontTools/cu2qu/cu2qu.py":444 - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( - * i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] # <<<<<<<<<<<<<< - * ) - * spline.append(next_q1) -*/ - __pyx_t_17 = (__pyx_v_n - 1); - if (unlikely(__pyx_t_17 == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "float division"); - __PYX_ERR(0, 444, __pyx_L1_error) - } - __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_next_cubic, 0, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - 
__pyx_t_6 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_14); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_next_cubic, 1, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_5 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_14); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_next_cubic, 2, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_4 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_14); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_next_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_9 = __Pyx_PyComplex_As___pyx_t_double_complex(__pyx_t_14); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 444, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - - /* "fontTools/cu2qu/cu2qu.py":443 - * if i < n: - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( # <<<<<<<<<<<<<< - * i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - * ) -*/ - __pyx_t_7 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_control((((double)__pyx_v_i) / ((double)__pyx_t_17)), __pyx_t_6, __pyx_t_5, __pyx_t_4, __pyx_t_9); if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 443, __pyx_L1_error) - __pyx_v_next_q1 = __pyx_t_7; - - /* "fontTools/cu2qu/cu2qu.py":446 - * i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - * ) - * spline.append(next_q1) # <<<<<<<<<<<<<< - * q2 = (q1 + next_q1) * 0.5 - * else: -*/ - __pyx_t_14 = 
__pyx_PyComplex_FromComplex(__pyx_v_next_q1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 446, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_18 = __Pyx_PyList_Append(__pyx_v_spline, __pyx_t_14); if (unlikely(__pyx_t_18 == ((int)-1))) __PYX_ERR(0, 446, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - - /* "fontTools/cu2qu/cu2qu.py":447 - * ) - * spline.append(next_q1) - * q2 = (q1 + next_q1) * 0.5 # <<<<<<<<<<<<<< - * else: - * q2 = c3 -*/ - __pyx_v_q2 = __Pyx_c_prod_double(__Pyx_c_sum_double(__pyx_v_q1, __pyx_v_next_q1), __pyx_t_double_complex_from_parts(0.5, 0)); - - /* "fontTools/cu2qu/cu2qu.py":441 - * q0 = q2 - * q1 = next_q1 - * if i < n: # <<<<<<<<<<<<<< - * next_cubic = next(cubics) - * next_q1 = cubic_approx_control( -*/ - goto __pyx_L11; - } - - /* "fontTools/cu2qu/cu2qu.py":449 - * q2 = (q1 + next_q1) * 0.5 - * else: - * q2 = c3 # <<<<<<<<<<<<<< - * - * # End-point deltas -*/ - /*else*/ { - __pyx_v_q2 = __pyx_v_c3; - } - __pyx_L11:; - - /* "fontTools/cu2qu/cu2qu.py":452 - * - * # End-point deltas - * d0 = d1 # <<<<<<<<<<<<<< - * d1 = q2 - c3 - * -*/ - __pyx_v_d0 = __pyx_v_d1; - - /* "fontTools/cu2qu/cu2qu.py":453 - * # End-point deltas - * d0 = d1 - * d1 = q2 - c3 # <<<<<<<<<<<<<< - * - * if abs(d1) > tolerance or not cubic_farthest_fit_inside( -*/ - __pyx_v_d1 = __Pyx_c_diff_double(__pyx_v_q2, __pyx_v_c3); - - /* "fontTools/cu2qu/cu2qu.py":455 - * d1 = q2 - c3 - * - * if abs(d1) > tolerance or not cubic_farthest_fit_inside( # <<<<<<<<<<<<<< - * d0, - * q0 + (q1 - q0) * (2 / 3) - c1, -*/ - __pyx_t_3 = (__Pyx_c_abs_double(__pyx_v_d1) > __pyx_v_tolerance); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L13_bool_binop_done; - } - - /* "fontTools/cu2qu/cu2qu.py":460 - * q2 + (q1 - q2) * (2 / 3) - c2, - * d1, - * tolerance, # <<<<<<<<<<<<<< - * ): - * return None -*/ - __pyx_t_19 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_farthest_fit_inside(__pyx_v_d0, __Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_q0, 
__Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_q1, __pyx_v_q0), __pyx_t_double_complex_from_parts((2.0 / 3.0), 0))), __pyx_v_c1), __Pyx_c_diff_double(__Pyx_c_sum_double(__pyx_v_q2, __Pyx_c_prod_double(__Pyx_c_diff_double(__pyx_v_q1, __pyx_v_q2), __pyx_t_double_complex_from_parts((2.0 / 3.0), 0))), __pyx_v_c2), __pyx_v_d1, __pyx_v_tolerance); if (unlikely(__pyx_t_19 == ((int)-1) && PyErr_Occurred())) __PYX_ERR(0, 455, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":455 - * d1 = q2 - c3 - * - * if abs(d1) > tolerance or not cubic_farthest_fit_inside( # <<<<<<<<<<<<<< - * d0, - * q0 + (q1 - q0) * (2 / 3) - c1, -*/ - __pyx_t_3 = (!(__pyx_t_19 != 0)); - __pyx_t_1 = __pyx_t_3; - __pyx_L13_bool_binop_done:; - if (__pyx_t_1) { - - /* "fontTools/cu2qu/cu2qu.py":462 - * tolerance, - * ): - * return None # <<<<<<<<<<<<<< - * spline.append(cubic[3]) - * -*/ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":455 - * d1 = q2 - c3 - * - * if abs(d1) > tolerance or not cubic_farthest_fit_inside( # <<<<<<<<<<<<<< - * d0, - * q0 + (q1 - q0) * (2 / 3) - c1, -*/ - } - } - - /* "fontTools/cu2qu/cu2qu.py":463 - * ): - * return None - * spline.append(cubic[3]) # <<<<<<<<<<<<<< - * - * return spline -*/ - __pyx_t_14 = __Pyx_GetItemInt(__pyx_v_cubic, 3, long, 1, __Pyx_PyLong_From_long, 0, 0, 1, 1); if (unlikely(!__pyx_t_14)) __PYX_ERR(0, 463, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_14); - __pyx_t_18 = __Pyx_PyList_Append(__pyx_v_spline, __pyx_t_14); if (unlikely(__pyx_t_18 == ((int)-1))) __PYX_ERR(0, 463, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_14); __pyx_t_14 = 0; - - /* "fontTools/cu2qu/cu2qu.py":465 - * spline.append(cubic[3]) - * - * return spline # <<<<<<<<<<<<<< - * - * -*/ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_spline); - __pyx_r = __pyx_v_spline; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":390 - * - * - * @cython.cfunc # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int, tolerance=cython.double) - 
* @cython.locals(i=cython.int) -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_14); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.cubic_approx_spline", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_cubics); - __Pyx_XDECREF(__pyx_v_next_cubic); - __Pyx_XDECREF(__pyx_v_spline); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":468 - * - * - * @cython.locals(max_err=cython.double) # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) -*/ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_4curve_to_quadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_5cu2qu_5cu2qu_3curve_to_quadratic, "curve_to_quadratic(curve, double max_err, int all_quadratic=True)\n\nApproximate a cubic Bezier curve with a spline of n quadratics.\n\nArgs:\n cubic (sequence): Four 2D tuples representing control points of\n the cubic Bezier curve.\n max_err (double): Permitted deviation from the original curve.\n all_quadratic (bool): If True (default) returned value is a\n quadratic spline. 
If False, it's either a single quadratic\n curve or a single cubic curve.\n\nReturns:\n If all_quadratic is True: A list of 2D tuples, representing\n control points of the quadratic spline if it fits within the\n given tolerance, or ``None`` if no suitable spline could be\n calculated.\n\n If all_quadratic is False: Either a quadratic curve (if length\n of output is 3), or a cubic curve (if length of output is 4)."); -static PyMethodDef __pyx_mdef_9fontTools_5cu2qu_5cu2qu_4curve_to_quadratic = {"curve_to_quadratic", (PyCFunction)(void(*)(void))(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_5cu2qu_5cu2qu_4curve_to_quadratic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_5cu2qu_5cu2qu_3curve_to_quadratic}; -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_4curve_to_quadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_curve = 0; - double __pyx_v_max_err; - int __pyx_v_all_quadratic; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("curve_to_quadratic (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_SIZE - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject ** const __pyx_pyargnames[] = {&__pyx_mstate_global->__pyx_n_u_curve,&__pyx_mstate_global->__pyx_n_u_max_err,&__pyx_mstate_global->__pyx_n_u_all_quadratic,0}; - const Py_ssize_t __pyx_kwds_len = (__pyx_kwds) ? 
__Pyx_NumKwargs_FASTCALL(__pyx_kwds) : 0; - if (unlikely(__pyx_kwds_len) < 0) __PYX_ERR(0, 468, __pyx_L3_error) - if (__pyx_kwds_len > 0) { - switch (__pyx_nargs) { - case 3: - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 468, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 2: - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 468, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 1: - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 468, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (__Pyx_ParseKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values, kwd_pos_args, __pyx_kwds_len, "curve_to_quadratic", 0) < 0) __PYX_ERR(0, 468, __pyx_L3_error) - for (Py_ssize_t i = __pyx_nargs; i < 2; i++) { - if (unlikely(!values[i])) { __Pyx_RaiseArgtupleInvalid("curve_to_quadratic", 0, 2, 3, i); __PYX_ERR(0, 468, __pyx_L3_error) } - } - } else { - switch (__pyx_nargs) { - case 3: - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 468, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 2: - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 468, __pyx_L3_error) - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 468, __pyx_L3_error) - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_curve = values[0]; - __pyx_v_max_err = __Pyx_PyFloat_AsDouble(values[1]); if (unlikely((__pyx_v_max_err == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 471, __pyx_L3_error) - if (values[2]) { - __pyx_v_all_quadratic = __Pyx_PyLong_As_int(values[2]); if 
(unlikely((__pyx_v_all_quadratic == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 471, __pyx_L3_error) - } else { - - /* "fontTools/cu2qu/cu2qu.py":471 - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) - * def curve_to_quadratic(curve, max_err, all_quadratic=True): # <<<<<<<<<<<<<< - * """Approximate a cubic Bezier curve with a spline of n quadratics. - * -*/ - __pyx_v_all_quadratic = ((int)((int)1)); - } - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("curve_to_quadratic", 0, 2, 3, __pyx_nargs); __PYX_ERR(0, 468, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.curve_to_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5cu2qu_5cu2qu_3curve_to_quadratic(__pyx_self, __pyx_v_curve, __pyx_v_max_err, __pyx_v_all_quadratic); - - /* "fontTools/cu2qu/cu2qu.py":468 - * - * - * @cython.locals(max_err=cython.double) # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) -*/ - - /* function exit code */ - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu_3curve_to_quadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curve, double __pyx_v_max_err, int __pyx_v_all_quadratic) { - int __pyx_v_n; - PyObject *__pyx_v_spline = NULL; - PyObject *__pyx_7genexpr__pyx_v_p = NULL; - PyObject *__pyx_8genexpr1__pyx_v_s = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject 
*__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - Py_ssize_t __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - Py_ssize_t __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - size_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("curve_to_quadratic", 0); - __Pyx_INCREF(__pyx_v_curve); - - /* "fontTools/cu2qu/cu2qu.py":492 - * """ - * - * curve = [complex(*p) for p in curve] # <<<<<<<<<<<<<< - * - * for n in range(1, MAX_N + 1): -*/ - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 492, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_v_curve)) || PyTuple_CheckExact(__pyx_v_curve)) { - __pyx_t_2 = __pyx_v_curve; __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_curve); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 492, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (CYTHON_COMPILING_IN_LIMITED_API) ? 
PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 492, __pyx_L5_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 492, __pyx_L5_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - __pyx_t_5 = __Pyx_PyList_GetItemRef(__pyx_t_2, __pyx_t_3); - ++__pyx_t_3; - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 492, __pyx_L5_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = __Pyx_NewRef(PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3)); - #else - __pyx_t_5 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_3); - #endif - ++__pyx_t_3; - } - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 492, __pyx_L5_error) - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) __PYX_ERR(0, 492, __pyx_L5_error) - PyErr_Clear(); - } - break; - } - } - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XDECREF_SET(__pyx_7genexpr__pyx_v_p, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_PySequence_Tuple(__pyx_7genexpr__pyx_v_p); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 492, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_5, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 492, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_6))) __PYX_ERR(0, 492, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - 
__Pyx_XDECREF(__pyx_7genexpr__pyx_v_p); __pyx_7genexpr__pyx_v_p = 0; - goto __pyx_L9_exit_scope; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_7genexpr__pyx_v_p); __pyx_7genexpr__pyx_v_p = 0; - goto __pyx_L1_error; - __pyx_L9_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_curve, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":494 - * curve = [complex(*p) for p in curve] - * - * for n in range(1, MAX_N + 1): # <<<<<<<<<<<<<< - * spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - * if spline is not None: -*/ - __Pyx_GetModuleGlobalName(__pyx_t_1, __pyx_mstate_global->__pyx_n_u_MAX_N); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 494, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyLong_AddObjC(__pyx_t_1, __pyx_mstate_global->__pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 494, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_3 = __Pyx_PyIndex_AsSsize_t(__pyx_t_2); if (unlikely((__pyx_t_3 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(0, 494, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_7 = __pyx_t_3; - for (__pyx_t_8 = 1; __pyx_t_8 < __pyx_t_7; __pyx_t_8+=1) { - __pyx_v_n = __pyx_t_8; - - /* "fontTools/cu2qu/cu2qu.py":495 - * - * for n in range(1, MAX_N + 1): - * spline = cubic_approx_spline(curve, n, max_err, all_quadratic) # <<<<<<<<<<<<<< - * if spline is not None: - * # done. go home -*/ - __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_spline(__pyx_v_curve, __pyx_v_n, __pyx_v_max_err, __pyx_v_all_quadratic); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 495, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_spline, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/cu2qu/cu2qu.py":496 - * for n in range(1, MAX_N + 1): - * spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - * if spline is not None: # <<<<<<<<<<<<<< - * # done. 
go home - * return [(s.real, s.imag) for s in spline] -*/ - __pyx_t_9 = (__pyx_v_spline != Py_None); - if (__pyx_t_9) { - - /* "fontTools/cu2qu/cu2qu.py":498 - * if spline is not None: - * # done. go home - * return [(s.real, s.imag) for s in spline] # <<<<<<<<<<<<<< - * - * raise ApproxNotFoundError(curve) -*/ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(PyList_CheckExact(__pyx_v_spline)) || PyTuple_CheckExact(__pyx_v_spline)) { - __pyx_t_1 = __pyx_v_spline; __Pyx_INCREF(__pyx_t_1); - __pyx_t_10 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_10 = -1; __pyx_t_1 = PyObject_GetIter(__pyx_v_spline); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = (CYTHON_COMPILING_IN_LIMITED_API) ? PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 498, __pyx_L15_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_1))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 498, __pyx_L15_error) - #endif - if (__pyx_t_10 >= __pyx_temp) break; - } - __pyx_t_6 = __Pyx_PyList_GetItemRef(__pyx_t_1, __pyx_t_10); - ++__pyx_t_10; - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_1); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 498, __pyx_L15_error) - #endif - if (__pyx_t_10 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_6 = __Pyx_NewRef(PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_10)); - #else - __pyx_t_6 = __Pyx_PySequence_ITEM(__pyx_t_1, __pyx_t_10); - #endif - ++__pyx_t_10; - } - if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 498, __pyx_L15_error) - } else { - __pyx_t_6 = __pyx_t_4(__pyx_t_1); - if (unlikely(!__pyx_t_6)) { - PyObject* 
exc_type = PyErr_Occurred(); - if (exc_type) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) __PYX_ERR(0, 498, __pyx_L15_error) - PyErr_Clear(); - } - break; - } - } - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_8genexpr1__pyx_v_s, __pyx_t_6); - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr1__pyx_v_s, __pyx_mstate_global->__pyx_n_u_real); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr1__pyx_v_s, __pyx_mstate_global->__pyx_n_u_imag); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_6); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_6) != (0)) __PYX_ERR(0, 498, __pyx_L15_error); - __Pyx_GIVEREF(__pyx_t_5); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_5) != (0)) __PYX_ERR(0, 498, __pyx_L15_error); - __pyx_t_6 = 0; - __pyx_t_5 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_11))) __PYX_ERR(0, 498, __pyx_L15_error) - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - } - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_s); __pyx_8genexpr1__pyx_v_s = 0; - goto __pyx_L19_exit_scope; - __pyx_L15_error:; - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_s); __pyx_8genexpr1__pyx_v_s = 0; - goto __pyx_L1_error; - __pyx_L19_exit_scope:; - } /* exit inner scope */ - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":496 - * for n in range(1, MAX_N + 1): - * spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - * if spline is not None: # <<<<<<<<<<<<<< - * # done. 
go home - * return [(s.real, s.imag) for s in spline] -*/ - } - } - - /* "fontTools/cu2qu/cu2qu.py":500 - * return [(s.real, s.imag) for s in spline] - * - * raise ApproxNotFoundError(curve) # <<<<<<<<<<<<<< - * - * -*/ - __pyx_t_1 = NULL; - __Pyx_GetModuleGlobalName(__pyx_t_11, __pyx_mstate_global->__pyx_n_u_ApproxNotFoundError); if (unlikely(!__pyx_t_11)) __PYX_ERR(0, 500, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __pyx_t_12 = 1; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_11))) { - __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_11); - assert(__pyx_t_1); - PyObject* __pyx__function = PyMethod_GET_FUNCTION(__pyx_t_11); - __Pyx_INCREF(__pyx_t_1); - __Pyx_INCREF(__pyx__function); - __Pyx_DECREF_SET(__pyx_t_11, __pyx__function); - __pyx_t_12 = 0; - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_1, __pyx_v_curve}; - __pyx_t_2 = __Pyx_PyObject_FastCall(__pyx_t_11, __pyx_callargs+__pyx_t_12, (2-__pyx_t_12) | (__pyx_t_12*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)); - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 500, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - } - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(0, 500, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":468 - * - * - * @cython.locals(max_err=cython.double) # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.curve_to_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_spline); - __Pyx_XDECREF(__pyx_7genexpr__pyx_v_p); - __Pyx_XDECREF(__pyx_8genexpr1__pyx_v_s); - __Pyx_XDECREF(__pyx_v_curve); - __Pyx_XGIVEREF(__pyx_r); - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "fontTools/cu2qu/cu2qu.py":503 - * - * - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) # <<<<<<<<<<<<<< - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): -*/ - -/* Python wrapper */ -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_6curves_to_quadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -); /*proto*/ -PyDoc_STRVAR(__pyx_doc_9fontTools_5cu2qu_5cu2qu_5curves_to_quadratic, "curves_to_quadratic(curves, max_errors, int all_quadratic=True)\n\nReturn quadratic Bezier splines approximating the input cubic Beziers.\n\nArgs:\n curves: A sequence of *n* curves, each curve being a sequence of four\n 2D tuples.\n max_errors: A sequence of *n* floats representing the maximum permissible\n deviation from each of the cubic Bezier curves.\n all_quadratic (bool): If True (default) returned values are a\n quadratic spline. If False, they are either a single quadratic\n curve or a single cubic curve.\n\nExample::\n\n >>> curves_to_quadratic( [\n ... [ (50,50), (100,100), (150,100), (200,50) ],\n ... [ (75,50), (120,100), (150,75), (200,60) ]\n ... ], [1,1] )\n [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]]\n\nThe returned splines have \"implied oncurve points\" suitable for use in\nTrueType ``glif`` outlines - i.e. 
in the first spline returned above,\nthe first quadratic segment runs from (50,50) to\n( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...).\n\nReturns:\n If all_quadratic is True, a list of splines, each spline being a list\n of 2D tuples.\n\n If all_quadratic is False, a list of curves, each curve being a quadratic\n (length 3), or cubic (length 4).\n\nRaises:\n fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation\n can be found for all curves with the given parameters."); -static PyMethodDef __pyx_mdef_9fontTools_5cu2qu_5cu2qu_6curves_to_quadratic = {"curves_to_quadratic", (PyCFunction)(void(*)(void))(__Pyx_PyCFunction_FastCallWithKeywords)__pyx_pw_9fontTools_5cu2qu_5cu2qu_6curves_to_quadratic, __Pyx_METH_FASTCALL|METH_KEYWORDS, __pyx_doc_9fontTools_5cu2qu_5cu2qu_5curves_to_quadratic}; -static PyObject *__pyx_pw_9fontTools_5cu2qu_5cu2qu_6curves_to_quadratic(PyObject *__pyx_self, -#if CYTHON_METH_FASTCALL -PyObject *const *__pyx_args, Py_ssize_t __pyx_nargs, PyObject *__pyx_kwds -#else -PyObject *__pyx_args, PyObject *__pyx_kwds -#endif -) { - PyObject *__pyx_v_curves = 0; - PyObject *__pyx_v_max_errors = 0; - int __pyx_v_all_quadratic; - #if !CYTHON_METH_FASTCALL - CYTHON_UNUSED Py_ssize_t __pyx_nargs; - #endif - CYTHON_UNUSED PyObject *const *__pyx_kwvalues; - PyObject* values[3] = {0,0,0}; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("curves_to_quadratic (wrapper)", 0); - #if !CYTHON_METH_FASTCALL - #if CYTHON_ASSUME_SAFE_SIZE - __pyx_nargs = PyTuple_GET_SIZE(__pyx_args); - #else - __pyx_nargs = PyTuple_Size(__pyx_args); if (unlikely(__pyx_nargs < 0)) return NULL; - #endif - #endif - __pyx_kwvalues = __Pyx_KwValues_FASTCALL(__pyx_args, __pyx_nargs); - { - PyObject ** const __pyx_pyargnames[] = 
{&__pyx_mstate_global->__pyx_n_u_curves,&__pyx_mstate_global->__pyx_n_u_max_errors,&__pyx_mstate_global->__pyx_n_u_all_quadratic,0}; - const Py_ssize_t __pyx_kwds_len = (__pyx_kwds) ? __Pyx_NumKwargs_FASTCALL(__pyx_kwds) : 0; - if (unlikely(__pyx_kwds_len) < 0) __PYX_ERR(0, 503, __pyx_L3_error) - if (__pyx_kwds_len > 0) { - switch (__pyx_nargs) { - case 3: - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 503, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 2: - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 503, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 1: - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 503, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - const Py_ssize_t kwd_pos_args = __pyx_nargs; - if (__Pyx_ParseKeywords(__pyx_kwds, __pyx_kwvalues, __pyx_pyargnames, 0, values, kwd_pos_args, __pyx_kwds_len, "curves_to_quadratic", 0) < 0) __PYX_ERR(0, 503, __pyx_L3_error) - for (Py_ssize_t i = __pyx_nargs; i < 2; i++) { - if (unlikely(!values[i])) { __Pyx_RaiseArgtupleInvalid("curves_to_quadratic", 0, 2, 3, i); __PYX_ERR(0, 503, __pyx_L3_error) } - } - } else { - switch (__pyx_nargs) { - case 3: - values[2] = __Pyx_ArgRef_FASTCALL(__pyx_args, 2); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[2])) __PYX_ERR(0, 503, __pyx_L3_error) - CYTHON_FALLTHROUGH; - case 2: - values[1] = __Pyx_ArgRef_FASTCALL(__pyx_args, 1); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[1])) __PYX_ERR(0, 503, __pyx_L3_error) - values[0] = __Pyx_ArgRef_FASTCALL(__pyx_args, 0); - if (!CYTHON_ASSUME_SAFE_MACROS && unlikely(!values[0])) __PYX_ERR(0, 503, __pyx_L3_error) - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_curves = values[0]; - __pyx_v_max_errors = values[1]; - if (values[2]) { - 
__pyx_v_all_quadratic = __Pyx_PyLong_As_int(values[2]); if (unlikely((__pyx_v_all_quadratic == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 505, __pyx_L3_error) - } else { - - /* "fontTools/cu2qu/cu2qu.py":505 - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): # <<<<<<<<<<<<<< - * """Return quadratic Bezier splines approximating the input cubic Beziers. - * -*/ - __pyx_v_all_quadratic = ((int)((int)1)); - } - } - goto __pyx_L6_skip; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("curves_to_quadratic", 0, 2, 3, __pyx_nargs); __PYX_ERR(0, 503, __pyx_L3_error) - __pyx_L6_skip:; - goto __pyx_L4_argument_unpacking_done; - __pyx_L3_error:; - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.curves_to_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_9fontTools_5cu2qu_5cu2qu_5curves_to_quadratic(__pyx_self, __pyx_v_curves, __pyx_v_max_errors, __pyx_v_all_quadratic); - - /* "fontTools/cu2qu/cu2qu.py":503 - * - * - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) # <<<<<<<<<<<<<< - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): -*/ - - /* function exit code */ - for (Py_ssize_t __pyx_temp=0; __pyx_temp < (Py_ssize_t)(sizeof(values)/sizeof(values[0])); ++__pyx_temp) { - Py_XDECREF(values[__pyx_temp]); - } - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_9fontTools_5cu2qu_5cu2qu_5curves_to_quadratic(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v_curves, PyObject *__pyx_v_max_errors, int __pyx_v_all_quadratic) { - int __pyx_v_l; - int __pyx_v_last_i; - int __pyx_v_i; - 
PyObject *__pyx_v_splines = NULL; - PyObject *__pyx_v_n = NULL; - PyObject *__pyx_v_spline = NULL; - PyObject *__pyx_8genexpr2__pyx_v_curve = NULL; - PyObject *__pyx_8genexpr3__pyx_v_p = NULL; - PyObject *__pyx_8genexpr4__pyx_v_spline = NULL; - PyObject *__pyx_8genexpr5__pyx_v_s = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - int __pyx_t_11; - int __pyx_t_12; - double __pyx_t_13; - long __pyx_t_14; - PyObject *__pyx_t_15 = NULL; - size_t __pyx_t_16; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("curves_to_quadratic", 0); - __Pyx_INCREF(__pyx_v_curves); - - /* "fontTools/cu2qu/cu2qu.py":542 - * """ - * - * curves = [[complex(*p) for p in curve] for curve in curves] # <<<<<<<<<<<<<< - * assert len(max_errors) == len(curves) - * -*/ - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 542, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_1); - if (likely(PyList_CheckExact(__pyx_v_curves)) || PyTuple_CheckExact(__pyx_v_curves)) { - __pyx_t_2 = __pyx_v_curves; __Pyx_INCREF(__pyx_t_2); - __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_curves); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 542, __pyx_L5_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (CYTHON_COMPILING_IN_LIMITED_API) ? 
PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_2); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 542, __pyx_L5_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 542, __pyx_L5_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - __pyx_t_5 = __Pyx_PyList_GetItemRef(__pyx_t_2, __pyx_t_3); - ++__pyx_t_3; - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 542, __pyx_L5_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = __Pyx_NewRef(PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3)); - #else - __pyx_t_5 = __Pyx_PySequence_ITEM(__pyx_t_2, __pyx_t_3); - #endif - ++__pyx_t_3; - } - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 542, __pyx_L5_error) - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) __PYX_ERR(0, 542, __pyx_L5_error) - PyErr_Clear(); - } - break; - } - } - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XDECREF_SET(__pyx_8genexpr2__pyx_v_curve, __pyx_t_5); - __pyx_t_5 = 0; - { /* enter inner scope */ - __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 542, __pyx_L10_error) - __Pyx_GOTREF(__pyx_t_5); - if (likely(PyList_CheckExact(__pyx_8genexpr2__pyx_v_curve)) || PyTuple_CheckExact(__pyx_8genexpr2__pyx_v_curve)) { - __pyx_t_6 = __pyx_8genexpr2__pyx_v_curve; __Pyx_INCREF(__pyx_t_6); - __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_8genexpr2__pyx_v_curve); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 542, __pyx_L10_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = 
(CYTHON_COMPILING_IN_LIMITED_API) ? PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 542, __pyx_L10_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_6))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 542, __pyx_L10_error) - #endif - if (__pyx_t_7 >= __pyx_temp) break; - } - __pyx_t_9 = __Pyx_PyList_GetItemRef(__pyx_t_6, __pyx_t_7); - ++__pyx_t_7; - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 542, __pyx_L10_error) - #endif - if (__pyx_t_7 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = __Pyx_NewRef(PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_7)); - #else - __pyx_t_9 = __Pyx_PySequence_ITEM(__pyx_t_6, __pyx_t_7); - #endif - ++__pyx_t_7; - } - if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 542, __pyx_L10_error) - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_6); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) __PYX_ERR(0, 542, __pyx_L10_error) - PyErr_Clear(); - } - break; - } - } - __Pyx_GOTREF(__pyx_t_9); - __Pyx_XDECREF_SET(__pyx_8genexpr3__pyx_v_p, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_t_9 = __Pyx_PySequence_Tuple(__pyx_8genexpr3__pyx_v_p); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 542, __pyx_L10_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_10 = __Pyx_PyObject_Call(((PyObject *)(&PyComplex_Type)), __pyx_t_9, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 542, __pyx_L10_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_t_10))) __PYX_ERR(0, 542, __pyx_L10_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - } - __Pyx_DECREF(__pyx_t_6); 
__pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_p); __pyx_8genexpr3__pyx_v_p = 0; - goto __pyx_L14_exit_scope; - __pyx_L10_error:; - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_p); __pyx_8genexpr3__pyx_v_p = 0; - goto __pyx_L5_error; - __pyx_L14_exit_scope:; - } /* exit inner scope */ - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(0, 542, __pyx_L5_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_curve); __pyx_8genexpr2__pyx_v_curve = 0; - goto __pyx_L16_exit_scope; - __pyx_L5_error:; - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_curve); __pyx_8genexpr2__pyx_v_curve = 0; - goto __pyx_L1_error; - __pyx_L16_exit_scope:; - } /* exit inner scope */ - __Pyx_DECREF_SET(__pyx_v_curves, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":543 - * - * curves = [[complex(*p) for p in curve] for curve in curves] - * assert len(max_errors) == len(curves) # <<<<<<<<<<<<<< - * - * l = len(curves) -*/ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(__pyx_assertions_enabled())) { - __pyx_t_3 = PyObject_Length(__pyx_v_max_errors); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(0, 543, __pyx_L1_error) - __pyx_t_7 = PyObject_Length(__pyx_v_curves); if (unlikely(__pyx_t_7 == ((Py_ssize_t)-1))) __PYX_ERR(0, 543, __pyx_L1_error) - __pyx_t_11 = (__pyx_t_3 == __pyx_t_7); - if (unlikely(!__pyx_t_11)) { - __Pyx_Raise(__pyx_builtin_AssertionError, 0, 0, 0); - __PYX_ERR(0, 543, __pyx_L1_error) - } - } - #else - if ((1)); else __PYX_ERR(0, 543, __pyx_L1_error) - #endif - - /* "fontTools/cu2qu/cu2qu.py":545 - * assert len(max_errors) == len(curves) - * - * l = len(curves) # <<<<<<<<<<<<<< - * splines = [None] * l - * last_i = i = 0 -*/ - __pyx_t_7 = PyObject_Length(__pyx_v_curves); if (unlikely(__pyx_t_7 == ((Py_ssize_t)-1))) __PYX_ERR(0, 545, __pyx_L1_error) - __pyx_v_l = __pyx_t_7; - - /* "fontTools/cu2qu/cu2qu.py":546 - * - * l = len(curves) - * splines = [None] 
* l # <<<<<<<<<<<<<< - * last_i = i = 0 - * n = 1 -*/ - __pyx_t_1 = PyList_New(1 * ((__pyx_v_l<0) ? 0:__pyx_v_l)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 546, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_l; __pyx_temp++) { - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - if (__Pyx_PyList_SET_ITEM(__pyx_t_1, __pyx_temp, Py_None) != (0)) __PYX_ERR(0, 546, __pyx_L1_error); - } - } - __pyx_v_splines = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":547 - * l = len(curves) - * splines = [None] * l - * last_i = i = 0 # <<<<<<<<<<<<<< - * n = 1 - * while True: -*/ - __pyx_v_last_i = 0; - __pyx_v_i = 0; - - /* "fontTools/cu2qu/cu2qu.py":548 - * splines = [None] * l - * last_i = i = 0 - * n = 1 # <<<<<<<<<<<<<< - * while True: - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) -*/ - __Pyx_INCREF(__pyx_mstate_global->__pyx_int_1); - __pyx_v_n = __pyx_mstate_global->__pyx_int_1; - - /* "fontTools/cu2qu/cu2qu.py":549 - * last_i = i = 0 - * n = 1 - * while True: # <<<<<<<<<<<<<< - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - * if spline is None: -*/ - while (1) { - - /* "fontTools/cu2qu/cu2qu.py":550 - * n = 1 - * while True: - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) # <<<<<<<<<<<<<< - * if spline is None: - * if n == MAX_N: -*/ - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_curves, __pyx_v_i, int, 1, __Pyx_PyLong_From_int, 0, 1, 1, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 550, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_12 = __Pyx_PyLong_As_int(__pyx_v_n); if (unlikely((__pyx_t_12 == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 550, __pyx_L1_error) - __pyx_t_2 = __Pyx_GetItemInt(__pyx_v_max_errors, __pyx_v_i, int, 1, __Pyx_PyLong_From_int, 0, 1, 1, 1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 550, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_13 = __Pyx_PyFloat_AsDouble(__pyx_t_2); 
if (unlikely((__pyx_t_13 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 550, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = __pyx_f_9fontTools_5cu2qu_5cu2qu_cubic_approx_spline(__pyx_t_1, __pyx_t_12, __pyx_t_13, __pyx_v_all_quadratic); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 550, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF_SET(__pyx_v_spline, __pyx_t_2); - __pyx_t_2 = 0; - - /* "fontTools/cu2qu/cu2qu.py":551 - * while True: - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - * if spline is None: # <<<<<<<<<<<<<< - * if n == MAX_N: - * break -*/ - __pyx_t_11 = (__pyx_v_spline == Py_None); - if (__pyx_t_11) { - - /* "fontTools/cu2qu/cu2qu.py":552 - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - * if spline is None: - * if n == MAX_N: # <<<<<<<<<<<<<< - * break - * n += 1 -*/ - __Pyx_GetModuleGlobalName(__pyx_t_2, __pyx_mstate_global->__pyx_n_u_MAX_N); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_1 = PyObject_RichCompare(__pyx_v_n, __pyx_t_2, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_11 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely((__pyx_t_11 < 0))) __PYX_ERR(0, 552, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - if (__pyx_t_11) { - - /* "fontTools/cu2qu/cu2qu.py":553 - * if spline is None: - * if n == MAX_N: - * break # <<<<<<<<<<<<<< - * n += 1 - * last_i = i -*/ - goto __pyx_L18_break; - - /* "fontTools/cu2qu/cu2qu.py":552 - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - * if spline is None: - * if n == MAX_N: # <<<<<<<<<<<<<< - * break - * n += 1 -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":554 - * if n == MAX_N: - * break - * n += 1 # <<<<<<<<<<<<<< - * last_i = i - * continue -*/ - __pyx_t_1 = __Pyx_PyLong_AddObjC(__pyx_v_n, 
__pyx_mstate_global->__pyx_int_1, 1, 1, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 554, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF_SET(__pyx_v_n, __pyx_t_1); - __pyx_t_1 = 0; - - /* "fontTools/cu2qu/cu2qu.py":555 - * break - * n += 1 - * last_i = i # <<<<<<<<<<<<<< - * continue - * splines[i] = spline -*/ - __pyx_v_last_i = __pyx_v_i; - - /* "fontTools/cu2qu/cu2qu.py":556 - * n += 1 - * last_i = i - * continue # <<<<<<<<<<<<<< - * splines[i] = spline - * i = (i + 1) % l -*/ - goto __pyx_L17_continue; - - /* "fontTools/cu2qu/cu2qu.py":551 - * while True: - * spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - * if spline is None: # <<<<<<<<<<<<<< - * if n == MAX_N: - * break -*/ - } - - /* "fontTools/cu2qu/cu2qu.py":557 - * last_i = i - * continue - * splines[i] = spline # <<<<<<<<<<<<<< - * i = (i + 1) % l - * if i == last_i: -*/ - if (unlikely((__Pyx_SetItemInt(__pyx_v_splines, __pyx_v_i, __pyx_v_spline, int, 1, __Pyx_PyLong_From_int, 1, 1, 1, 1) < 0))) __PYX_ERR(0, 557, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":558 - * continue - * splines[i] = spline - * i = (i + 1) % l # <<<<<<<<<<<<<< - * if i == last_i: - * # done. go home -*/ - __pyx_t_14 = (__pyx_v_i + 1); - if (unlikely(__pyx_v_l == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(0, 558, __pyx_L1_error) - } - __pyx_v_i = __Pyx_mod_long(__pyx_t_14, __pyx_v_l, 0); - - /* "fontTools/cu2qu/cu2qu.py":559 - * splines[i] = spline - * i = (i + 1) % l - * if i == last_i: # <<<<<<<<<<<<<< - * # done. go home - * return [[(s.real, s.imag) for s in spline] for spline in splines] -*/ - __pyx_t_11 = (__pyx_v_i == __pyx_v_last_i); - if (__pyx_t_11) { - - /* "fontTools/cu2qu/cu2qu.py":561 - * if i == last_i: - * # done. 
go home - * return [[(s.real, s.imag) for s in spline] for spline in splines] # <<<<<<<<<<<<<< - * - * raise ApproxNotFoundError(curves) -*/ - __Pyx_XDECREF(__pyx_r); - { /* enter inner scope */ - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 561, __pyx_L24_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __pyx_v_splines; __Pyx_INCREF(__pyx_t_2); - __pyx_t_7 = 0; - for (;;) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 561, __pyx_L24_error) - #endif - if (__pyx_t_7 >= __pyx_temp) break; - } - __pyx_t_5 = __Pyx_PyList_GetItemRef(__pyx_t_2, __pyx_t_7); - ++__pyx_t_7; - if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 561, __pyx_L24_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_XDECREF_SET(__pyx_8genexpr4__pyx_v_spline, __pyx_t_5); - __pyx_t_5 = 0; - { /* enter inner scope */ - __pyx_t_5 = PyList_New(0); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_5); - if (likely(PyList_CheckExact(__pyx_8genexpr4__pyx_v_spline)) || PyTuple_CheckExact(__pyx_8genexpr4__pyx_v_spline)) { - __pyx_t_6 = __pyx_8genexpr4__pyx_v_spline; __Pyx_INCREF(__pyx_t_6); - __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_6 = PyObject_GetIter(__pyx_8genexpr4__pyx_v_spline); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_4 = (CYTHON_COMPILING_IN_LIMITED_API) ? 
PyIter_Next : __Pyx_PyObject_GetIterNextFunc(__pyx_t_6); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 561, __pyx_L29_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_6))) { - { - Py_ssize_t __pyx_temp = __Pyx_PyList_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 561, __pyx_L29_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - __pyx_t_10 = __Pyx_PyList_GetItemRef(__pyx_t_6, __pyx_t_3); - ++__pyx_t_3; - } else { - { - Py_ssize_t __pyx_temp = __Pyx_PyTuple_GET_SIZE(__pyx_t_6); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely((__pyx_temp < 0))) __PYX_ERR(0, 561, __pyx_L29_error) - #endif - if (__pyx_t_3 >= __pyx_temp) break; - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_10 = __Pyx_NewRef(PyTuple_GET_ITEM(__pyx_t_6, __pyx_t_3)); - #else - __pyx_t_10 = __Pyx_PySequence_ITEM(__pyx_t_6, __pyx_t_3); - #endif - ++__pyx_t_3; - } - if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 561, __pyx_L29_error) - } else { - __pyx_t_10 = __pyx_t_4(__pyx_t_6); - if (unlikely(!__pyx_t_10)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) __PYX_ERR(0, 561, __pyx_L29_error) - PyErr_Clear(); - } - break; - } - } - __Pyx_GOTREF(__pyx_t_10); - __Pyx_XDECREF_SET(__pyx_8genexpr5__pyx_v_s, __pyx_t_10); - __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr5__pyx_v_s, __pyx_mstate_global->__pyx_n_u_real); if (unlikely(!__pyx_t_10)) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_10); - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_8genexpr5__pyx_v_s, __pyx_mstate_global->__pyx_n_u_imag); if (unlikely(!__pyx_t_9)) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_15 = PyTuple_New(2); if (unlikely(!__pyx_t_15)) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_GOTREF(__pyx_t_15); - __Pyx_GIVEREF(__pyx_t_10); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_15, 0, 
__pyx_t_10) != (0)) __PYX_ERR(0, 561, __pyx_L29_error); - __Pyx_GIVEREF(__pyx_t_9); - if (__Pyx_PyTuple_SET_ITEM(__pyx_t_15, 1, __pyx_t_9) != (0)) __PYX_ERR(0, 561, __pyx_L29_error); - __pyx_t_10 = 0; - __pyx_t_9 = 0; - if (unlikely(__Pyx_ListComp_Append(__pyx_t_5, (PyObject*)__pyx_t_15))) __PYX_ERR(0, 561, __pyx_L29_error) - __Pyx_DECREF(__pyx_t_15); __pyx_t_15 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_s); __pyx_8genexpr5__pyx_v_s = 0; - goto __pyx_L33_exit_scope; - __pyx_L29_error:; - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_s); __pyx_8genexpr5__pyx_v_s = 0; - goto __pyx_L24_error; - __pyx_L33_exit_scope:; - } /* exit inner scope */ - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(0, 561, __pyx_L24_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_XDECREF(__pyx_8genexpr4__pyx_v_spline); __pyx_8genexpr4__pyx_v_spline = 0; - goto __pyx_L35_exit_scope; - __pyx_L24_error:; - __Pyx_XDECREF(__pyx_8genexpr4__pyx_v_spline); __pyx_8genexpr4__pyx_v_spline = 0; - goto __pyx_L1_error; - __pyx_L35_exit_scope:; - } /* exit inner scope */ - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "fontTools/cu2qu/cu2qu.py":559 - * splines[i] = spline - * i = (i + 1) % l - * if i == last_i: # <<<<<<<<<<<<<< - * # done. 
go home - * return [[(s.real, s.imag) for s in spline] for spline in splines] -*/ - } - __pyx_L17_continue:; - } - __pyx_L18_break:; - - /* "fontTools/cu2qu/cu2qu.py":563 - * return [[(s.real, s.imag) for s in spline] for spline in splines] - * - * raise ApproxNotFoundError(curves) # <<<<<<<<<<<<<< -*/ - __pyx_t_2 = NULL; - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_mstate_global->__pyx_n_u_ApproxNotFoundError); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 563, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_16 = 1; - #if CYTHON_UNPACK_METHODS - if (unlikely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_5); - assert(__pyx_t_2); - PyObject* __pyx__function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(__pyx__function); - __Pyx_DECREF_SET(__pyx_t_5, __pyx__function); - __pyx_t_16 = 0; - } - #endif - { - PyObject *__pyx_callargs[2] = {__pyx_t_2, __pyx_v_curves}; - __pyx_t_1 = __Pyx_PyObject_FastCall(__pyx_t_5, __pyx_callargs+__pyx_t_16, (2-__pyx_t_16) | (__pyx_t_16*__Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET)); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 563, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - } - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(0, 563, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":503 - * - * - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) # <<<<<<<<<<<<<< - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): -*/ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_XDECREF(__pyx_t_15); - __Pyx_AddTraceback("fontTools.cu2qu.cu2qu.curves_to_quadratic", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; 
- __Pyx_XDECREF(__pyx_v_splines); - __Pyx_XDECREF(__pyx_v_n); - __Pyx_XDECREF(__pyx_v_spline); - __Pyx_XDECREF(__pyx_8genexpr2__pyx_v_curve); - __Pyx_XDECREF(__pyx_8genexpr3__pyx_v_p); - __Pyx_XDECREF(__pyx_8genexpr4__pyx_v_spline); - __Pyx_XDECREF(__pyx_8genexpr5__pyx_v_s); - __Pyx_XDECREF(__pyx_v_curves); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -/* #### Code section: module_exttypes ### */ - -static PyObject *__pyx_tp_new_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED PyObject *k) { - PyObject *o; - #if CYTHON_COMPILING_IN_LIMITED_API - allocfunc alloc_func = (allocfunc)PyType_GetSlot(t, Py_tp_alloc); - o = alloc_func(t, 0); - #else - #if CYTHON_USE_FREELISTS - if (likely((int)(__pyx_mstate_global->__pyx_freecount_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen > 0) & (int)(t->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen)))) { - o = (PyObject*)__pyx_mstate_global->__pyx_freelist_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen[--__pyx_mstate_global->__pyx_freecount_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen]; - memset(o, 0, sizeof(struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen)); - (void) PyObject_INIT(o, t); - } else - #endif - { - o = (*t->tp_alloc)(t, 0); - if (unlikely(!o)) return 0; - } - #endif - return o; -} - -static void __pyx_tp_dealloc_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen(PyObject *o) { - #if CYTHON_USE_TP_FINALIZE - if (unlikely((PY_VERSION_HEX >= 0x03080000 || __Pyx_PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE)) && __Pyx_PyObject_GetSlot(o, tp_finalize, destructor)) && (!PyType_IS_GC(Py_TYPE(o)) || !__Pyx_PyObject_GC_IsFinalized(o))) { - if (__Pyx_PyObject_GetSlot(o, tp_dealloc, destructor) == 
__pyx_tp_dealloc_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - } - #endif - #if CYTHON_USE_FREELISTS - if (((int)(__pyx_mstate_global->__pyx_freecount_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen < 8) & (int)(Py_TYPE(o)->tp_basicsize == sizeof(struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen)))) { - __pyx_mstate_global->__pyx_freelist_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen[__pyx_mstate_global->__pyx_freecount_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen++] = ((struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen *)o); - } else - #endif - { - #if CYTHON_USE_TYPE_SLOTS - (*Py_TYPE(o)->tp_free)(o); - #else - { - freefunc tp_free = (freefunc)PyType_GetSlot(Py_TYPE(o), Py_tp_free); - if (tp_free) tp_free(o); - } - #endif - } -} -#if CYTHON_USE_TYPE_SPECS -static PyType_Slot __pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen_slots[] = { - {Py_tp_dealloc, (void *)__pyx_tp_dealloc_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen}, - {Py_tp_new, (void *)__pyx_tp_new_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen}, - {0, 0}, -}; -static PyType_Spec __pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen_spec = { - "fontTools.cu2qu.cu2qu.__pyx_scope_struct___split_cubic_into_n_gen", - sizeof(struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen), - 0, - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_FINALIZE, - __pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen_slots, -}; -#else - -static PyTypeObject __pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen = { - PyVarObject_HEAD_INIT(0, 0) - 
"fontTools.cu2qu.cu2qu.""__pyx_scope_struct___split_cubic_into_n_gen", /*tp_name*/ - sizeof(struct __pyx_obj_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - 0, /*tp_as_async*/ - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_HAVE_FINALIZE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - 0, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - #if !CYTHON_USE_TYPE_SPECS - 0, /*tp_dictoffset*/ - #endif - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if CYTHON_USE_TP_FINALIZE - 0, /*tp_finalize*/ - #else - NULL, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if __PYX_NEED_TP_PRINT_SLOT == 1 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030C0000 - 0, /*tp_watched*/ - #endif - #if PY_VERSION_HEX >= 0x030d00A4 - 0, /*tp_versions_used*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 && PY_VERSION_HEX 
< 0x030a0000 - 0, /*tp_pypy_flags*/ - #endif -}; -#endif - -static PyMethodDef __pyx_methods[] = { - {0, 0, 0, 0} -}; -/* #### Code section: initfunc_declarations ### */ -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_InitConstants(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(__pyx_mstatetype *__pyx_mstate); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_CreateCodeObjects(__pyx_mstatetype *__pyx_mstate); /*proto*/ -/* #### Code section: init_module ### */ - -static int __Pyx_modinit_global_init_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - 
-static int __Pyx_modinit_function_export_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - #if CYTHON_USE_TYPE_SPECS - __pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen = (PyTypeObject *) __Pyx_PyType_FromModuleAndSpec(__pyx_m, &__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen_spec, NULL); if (unlikely(!__pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen)) __PYX_ERR(0, 150, __pyx_L1_error) - if (__Pyx_fix_up_extension_type_from_spec(&__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen_spec, __pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen) < 0) __PYX_ERR(0, 150, __pyx_L1_error) - #else - __pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen = &__pyx_type_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen; - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - #endif - #if !CYTHON_USE_TYPE_SPECS - if (__Pyx_PyType_Ready(__pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen) < 0) __PYX_ERR(0, 150, __pyx_L1_error) - #endif - #if !CYTHON_COMPILING_IN_LIMITED_API - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen->tp_dictoffset && 
__pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen->tp_getattro == PyObject_GenericGetAttr)) { - __pyx_mstate->__pyx_ptype_9fontTools_5cu2qu_5cu2qu___pyx_scope_struct___split_cubic_into_n_gen->tp_getattro = PyObject_GenericGetAttr; - } - #endif - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_cu2qu(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_cu2qu}, - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - {Py_mod_gil, Py_MOD_GIL_USED}, - #endif - #if PY_VERSION_HEX >= 0x030C0000 && CYTHON_USE_MODULE_STATE - {Py_mod_multiple_interpreters, Py_MOD_MULTIPLE_INTERPRETERS_NOT_SUPPORTED}, - #endif - {0, NULL} -}; -#endif - -#ifdef __cplusplus -namespace { - struct PyModuleDef __pyx_moduledef = - #else - static struct PyModuleDef 
__pyx_moduledef = - #endif - { - PyModuleDef_HEAD_INIT, - "cu2qu", - 0, /* m_doc */ - #if CYTHON_USE_MODULE_STATE - sizeof(__pyx_mstatetype), /* m_size */ - #else - (CYTHON_PEP489_MULTI_PHASE_INIT) ? 0 : -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - #if CYTHON_USE_MODULE_STATE - __pyx_m_traverse, /* m_traverse */ - __pyx_m_clear, /* m_clear */ - NULL /* m_free */ - #else - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ - #endif - }; - #ifdef __cplusplus -} /* anonymous namespace */ -#endif - -/* PyModInitFuncType */ -#ifndef CYTHON_NO_PYINIT_EXPORT - #define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#else - #ifdef __cplusplus - #define __Pyx_PyMODINIT_FUNC extern "C" PyObject * - #else - #define __Pyx_PyMODINIT_FUNC PyObject * - #endif -#endif - -__Pyx_PyMODINIT_FUNC PyInit_cu2qu(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_cu2qu(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -/* ModuleCreationPEP489 */ -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x03090000 -static PY_INT64_T __Pyx_GetCurrentInterpreterId(void) { - { - PyObject *module = PyImport_ImportModule("_interpreters"); // 3.13+ I think - if (!module) { - PyErr_Clear(); // just try the 3.8-3.12 version - module = PyImport_ImportModule("_xxsubinterpreters"); - if (!module) goto bad; - } - PyObject *current = PyObject_CallMethod(module, "get_current", NULL); - Py_DECREF(module); - if (!current) goto bad; - if (PyTuple_Check(current)) { - PyObject *new_current = PySequence_GetItem(current, 0); - Py_DECREF(current); - current = new_current; - if (!new_current) goto bad; - } - long long as_c_int = PyLong_AsLongLong(current); - Py_DECREF(current); - return as_c_int; - } - bad: - PySys_WriteStderr("__Pyx_GetCurrentInterpreterId failed. 
Try setting the C define CYTHON_PEP489_MULTI_PHASE_INIT=0\n"); - return -1; -} -#endif -#if !CYTHON_USE_MODULE_STATE -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - static PY_INT64_T main_interpreter_id = -1; -#if CYTHON_COMPILING_IN_GRAAL - PY_INT64_T current_id = PyInterpreterState_GetIDFromThreadState(PyThreadState_Get()); -#elif CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX >= 0x03090000 - PY_INT64_T current_id = PyInterpreterState_GetID(PyInterpreterState_Get()); -#elif CYTHON_COMPILING_IN_LIMITED_API - PY_INT64_T current_id = __Pyx_GetCurrentInterpreterId(); -#else - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); -#endif - if (unlikely(current_id == -1)) { - return -1; - } - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return 0; - } else if (unlikely(main_interpreter_id != current_id)) { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -#endif -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) -{ - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - CYTHON_UNUSED_VAR(def); - #if !CYTHON_USE_MODULE_STATE - if (__Pyx_check_single_interpreter()) - return NULL; - #endif - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = 
PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_cu2qu(PyObject *__pyx_pyinit_module) -#endif -{ - int stringtab_initialized = 0; - #if CYTHON_USE_MODULE_STATE - int pystate_addmodule_run = 0; - #endif - __pyx_mstatetype *__pyx_mstate = NULL; - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - double __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'cu2qu' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #else - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_t_1 = __pyx_pyinit_module; - Py_INCREF(__pyx_t_1); - #else - __pyx_t_1 = PyModule_Create(&__pyx_moduledef); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #if CYTHON_USE_MODULE_STATE - { - int add_module_result = __Pyx_State_AddModule(__pyx_t_1, &__pyx_moduledef); - __pyx_t_1 = 0; /* transfer ownership from __pyx_t_1 to "cu2qu" pseudovariable */ - if (unlikely((add_module_result < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - pystate_addmodule_run = 1; - } - #else - __pyx_m = __pyx_t_1; - #endif - #if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - PyUnstable_Module_SetGIL(__pyx_m, Py_MOD_GIL_USED); - #endif - __pyx_mstate = __pyx_mstate_global; - CYTHON_UNUSED_VAR(__pyx_t_1); - __pyx_mstate->__pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_mstate->__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_mstate->__pyx_d); - __pyx_mstate->__pyx_b = __Pyx_PyImport_AddModuleRef(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_mstate->__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_cython_runtime = __Pyx_PyImport_AddModuleRef("cython_runtime"); if (unlikely(!__pyx_mstate->__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_mstate->__pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /* ImportRefnannyAPI */ - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - -__Pyx_RefNannySetupContext("PyInit_cu2qu", 0); - if (__Pyx_check_binary_version(__PYX_LIMITED_VERSION_HEX, __Pyx_get_runtime_version(), CYTHON_COMPILING_IN_LIMITED_API) < 0) __PYX_ERR(0, 1, 
__pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_mstate->__pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_mstate->__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_mstate->__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_mstate->__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Initialize various global constants etc. ---*/ - if (__Pyx_InitConstants(__pyx_mstate) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - stringtab_initialized = 1; - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if 0 || defined(__Pyx_CyFunction_USED) || defined(__Pyx_FusedFunction_USED) || defined(__Pyx_Coroutine_USED) || defined(__Pyx_Generator_USED) || defined(__Pyx_AsyncGen_USED) - if (__pyx_CommonTypesMetaclass_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init(__pyx_m) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - if (__pyx_module_is_main_fontTools__cu2qu__cu2qu) { - if (PyObject_SetAttr(__pyx_m, __pyx_mstate_global->__pyx_n_u_name, __pyx_mstate_global->__pyx_n_u_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, 
"fontTools.cu2qu.cu2qu")) { - if (unlikely((PyDict_SetItemString(modules, "fontTools.cu2qu.cu2qu", __pyx_m) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins(__pyx_mstate) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants(__pyx_mstate) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (__Pyx_CreateCodeObjects(__pyx_mstate) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(__pyx_mstate); - (void)__Pyx_modinit_variable_export_code(__pyx_mstate); - (void)__Pyx_modinit_function_export_code(__pyx_mstate); - if (unlikely((__Pyx_modinit_type_init_code(__pyx_mstate) < 0))) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(__pyx_mstate); - (void)__Pyx_modinit_variable_import_code(__pyx_mstate); - (void)__Pyx_modinit_function_import_code(__pyx_mstate); - /*--- Execution code ---*/ - - /* "fontTools/cu2qu/cu2qu.py":18 - * # limitations under the License. 
- * - * try: # <<<<<<<<<<<<<< - * import cython - * except (AttributeError, ImportError): -*/ - { - (void)__pyx_t_1; (void)__pyx_t_2; (void)__pyx_t_3; /* mark used */ - /*try:*/ { - - /* "fontTools/cu2qu/cu2qu.py":19 - * - * try: - * import cython # <<<<<<<<<<<<<< - * except (AttributeError, ImportError): - * # if cython not installed, use mock module with no-op decorators and types -*/ - } - } - - /* "fontTools/cu2qu/cu2qu.py":23 - * # if cython not installed, use mock module with no-op decorators and types - * from fontTools.misc import cython - * COMPILED = cython.compiled # <<<<<<<<<<<<<< - * - * import math -*/ - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_COMPILED, Py_True) < 0) __PYX_ERR(0, 23, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":25 - * COMPILED = cython.compiled - * - * import math # <<<<<<<<<<<<<< - * - * from .errors import Error as Cu2QuError, ApproxNotFoundError -*/ - __pyx_t_4 = __Pyx_ImportDottedModule(__pyx_mstate_global->__pyx_n_u_math, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_math, __pyx_t_4) < 0) __PYX_ERR(0, 25, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "fontTools/cu2qu/cu2qu.py":27 - * import math - * - * from .errors import Error as Cu2QuError, ApproxNotFoundError # <<<<<<<<<<<<<< - * - * -*/ - __pyx_t_4 = __Pyx_PyList_Pack(2, __pyx_mstate_global->__pyx_n_u_Error, __pyx_mstate_global->__pyx_n_u_ApproxNotFoundError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 27, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_Import(__pyx_mstate_global->__pyx_n_u_errors, __pyx_t_4, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 27, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_5, __pyx_mstate_global->__pyx_n_u_Error); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 27, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_Cu2QuError, __pyx_t_4) < 0) __PYX_ERR(0, 27, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_ImportFrom(__pyx_t_5, __pyx_mstate_global->__pyx_n_u_ApproxNotFoundError); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 27, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_ApproxNotFoundError, __pyx_t_4) < 0) __PYX_ERR(0, 27, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":30 - * - * - * __all__ = ["curve_to_quadratic", "curves_to_quadratic"] # <<<<<<<<<<<<<< - * - * MAX_N = 100 -*/ - __pyx_t_5 = __Pyx_PyList_Pack(2, __pyx_mstate_global->__pyx_n_u_curve_to_quadratic, __pyx_mstate_global->__pyx_n_u_curves_to_quadratic); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_all, __pyx_t_5) < 0) __PYX_ERR(0, 30, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":32 - * __all__ = ["curve_to_quadratic", "curves_to_quadratic"] - * - * MAX_N = 100 # <<<<<<<<<<<<<< - * - * NAN = float("NaN") -*/ - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_MAX_N, __pyx_mstate_global->__pyx_int_100) < 0) __PYX_ERR(0, 32, __pyx_L1_error) - - /* "fontTools/cu2qu/cu2qu.py":34 - * MAX_N = 100 - * - * NAN = float("NaN") # <<<<<<<<<<<<<< - * - * -*/ - __pyx_t_6 = __Pyx_PyUnicode_AsDouble(__pyx_mstate_global->__pyx_n_u_NaN); if (unlikely(__pyx_t_6 == ((double)((double)-1)) && PyErr_Occurred())) __PYX_ERR(0, 34, __pyx_L1_error) - __pyx_t_5 = PyFloat_FromDouble(__pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, 
__pyx_mstate_global->__pyx_n_u_NAN, __pyx_t_5) < 0) __PYX_ERR(0, 34, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":150 - * - * - * @cython.locals( # <<<<<<<<<<<<<< - * p0=cython.complex, - * p1=cython.complex, -*/ - __pyx_t_5 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5cu2qu_5cu2qu_1_split_cubic_into_n_gen, 0, __pyx_mstate_global->__pyx_n_u_split_cubic_into_n_gen, NULL, __pyx_mstate_global->__pyx_n_u_fontTools_cu2qu_cu2qu, __pyx_mstate_global->__pyx_d, ((PyObject *)__pyx_mstate_global->__pyx_codeobj_tab[0])); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_split_cubic_into_n_gen, __pyx_t_5) < 0) __PYX_ERR(0, 150, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":471 - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) - * def curve_to_quadratic(curve, max_err, all_quadratic=True): # <<<<<<<<<<<<<< - * """Approximate a cubic Bezier curve with a spline of n quadratics. 
- * -*/ - __pyx_t_5 = __Pyx_PyBool_FromLong(((int)1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 471, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - - /* "fontTools/cu2qu/cu2qu.py":468 - * - * - * @cython.locals(max_err=cython.double) # <<<<<<<<<<<<<< - * @cython.locals(n=cython.int) - * @cython.locals(all_quadratic=cython.int) -*/ - __pyx_t_4 = PyTuple_Pack(1, __pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 468, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5cu2qu_5cu2qu_4curve_to_quadratic, 0, __pyx_mstate_global->__pyx_n_u_curve_to_quadratic, NULL, __pyx_mstate_global->__pyx_n_u_fontTools_cu2qu_cu2qu, __pyx_mstate_global->__pyx_d, ((PyObject *)__pyx_mstate_global->__pyx_codeobj_tab[1])); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 468, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_5, __pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_curve_to_quadratic, __pyx_t_5) < 0) __PYX_ERR(0, 468, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":505 - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): # <<<<<<<<<<<<<< - * """Return quadratic Bezier splines approximating the input cubic Beziers. 
- * -*/ - __pyx_t_5 = __Pyx_PyBool_FromLong(((int)1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 505, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - - /* "fontTools/cu2qu/cu2qu.py":503 - * - * - * @cython.locals(l=cython.int, last_i=cython.int, i=cython.int) # <<<<<<<<<<<<<< - * @cython.locals(all_quadratic=cython.int) - * def curves_to_quadratic(curves, max_errors, all_quadratic=True): -*/ - __pyx_t_4 = PyTuple_Pack(1, __pyx_t_5); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 503, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_t_5 = __Pyx_CyFunction_New(&__pyx_mdef_9fontTools_5cu2qu_5cu2qu_6curves_to_quadratic, 0, __pyx_mstate_global->__pyx_n_u_curves_to_quadratic, NULL, __pyx_mstate_global->__pyx_n_u_fontTools_cu2qu_cu2qu, __pyx_mstate_global->__pyx_d, ((PyObject *)__pyx_mstate_global->__pyx_codeobj_tab[2])); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 503, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_CyFunction_SetDefaultsTuple(__pyx_t_5, __pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_curves_to_quadratic, __pyx_t_5) < 0) __PYX_ERR(0, 503, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /* "fontTools/cu2qu/cu2qu.py":1 - * # cython: language_level=3 # <<<<<<<<<<<<<< - * # distutils: define_macros=CYTHON_TRACE_NOGIL=1 - * -*/ - __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (PyDict_SetItem(__pyx_t_5, __pyx_mstate_global->__pyx_kp_u_curves_to_quadratic_line_503, __pyx_mstate_global->__pyx_kp_u_Return_quadratic_Bezier_splines) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - if (PyDict_SetItem(__pyx_mstate_global->__pyx_d, __pyx_mstate_global->__pyx_n_u_test, __pyx_t_5) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_4); - 
__Pyx_XDECREF(__pyx_t_5); - if (__pyx_m) { - if (__pyx_mstate->__pyx_d && stringtab_initialized) { - __Pyx_AddTraceback("init fontTools.cu2qu.cu2qu", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - #if !CYTHON_USE_MODULE_STATE - Py_CLEAR(__pyx_m); - #else - Py_DECREF(__pyx_m); - if (pystate_addmodule_run) { - PyObject *tp, *value, *tb; - PyErr_Fetch(&tp, &value, &tb); - PyState_RemoveModule(&__pyx_moduledef); - PyErr_Restore(tp, value, tb); - } - #endif - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init fontTools.cu2qu.cu2qu"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #else - return __pyx_m; - #endif -} -/* #### Code section: pystring_table ### */ - -typedef struct { - const char *s; -#if 1602 <= 65535 - const unsigned short n; -#elif 1602 / 2 < INT_MAX - const unsigned int n; -#elif 1602 / 2 < LONG_MAX - const unsigned long n; -#else - const Py_ssize_t n; -#endif -#if 1 <= 31 - const unsigned int encoding : 5; -#elif 1 <= 255 - const unsigned char encoding; -#elif 1 <= 65535 - const unsigned short encoding; -#else - const Py_ssize_t encoding; -#endif - const unsigned int is_unicode : 1; - const unsigned int intern : 1; -} __Pyx_StringTabEntry; -static const char * const __pyx_string_tab_encodings[] = { 0 }; -static const __Pyx_StringTabEntry __pyx_string_tab[] = { - {__pyx_k_, sizeof(__pyx_k_), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_ */ - {__pyx_k_ApproxNotFoundError, sizeof(__pyx_k_ApproxNotFoundError), 0, 1, 1}, /* PyObject cname: __pyx_n_u_ApproxNotFoundError */ - {__pyx_k_AssertionError, sizeof(__pyx_k_AssertionError), 0, 1, 1}, /* PyObject cname: __pyx_n_u_AssertionError */ - {__pyx_k_AttributeError, sizeof(__pyx_k_AttributeError), 0, 1, 1}, /* PyObject cname: __pyx_n_u_AttributeError */ - {__pyx_k_COMPILED, sizeof(__pyx_k_COMPILED), 0, 1, 1}, /* PyObject cname: __pyx_n_u_COMPILED */ - {__pyx_k_Cu2QuError, sizeof(__pyx_k_Cu2QuError), 0, 1, 1}, 
/* PyObject cname: __pyx_n_u_Cu2QuError */ - {__pyx_k_Error, sizeof(__pyx_k_Error), 0, 1, 1}, /* PyObject cname: __pyx_n_u_Error */ - {__pyx_k_ImportError, sizeof(__pyx_k_ImportError), 0, 1, 1}, /* PyObject cname: __pyx_n_u_ImportError */ - {__pyx_k_Lib_fontTools_cu2qu_cu2qu_py, sizeof(__pyx_k_Lib_fontTools_cu2qu_cu2qu_py), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_Lib_fontTools_cu2qu_cu2qu_py */ - {__pyx_k_MAX_N, sizeof(__pyx_k_MAX_N), 0, 1, 1}, /* PyObject cname: __pyx_n_u_MAX_N */ - {__pyx_k_NAN, sizeof(__pyx_k_NAN), 0, 1, 1}, /* PyObject cname: __pyx_n_u_NAN */ - {__pyx_k_NaN, sizeof(__pyx_k_NaN), 0, 1, 1}, /* PyObject cname: __pyx_n_u_NaN */ - {__pyx_k_Return_quadratic_Bezier_splines, sizeof(__pyx_k_Return_quadratic_Bezier_splines), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_Return_quadratic_Bezier_splines */ - {__pyx_k_ZeroDivisionError, sizeof(__pyx_k_ZeroDivisionError), 0, 1, 1}, /* PyObject cname: __pyx_n_u_ZeroDivisionError */ - {__pyx_k__2, sizeof(__pyx_k__2), 0, 1, 0}, /* PyObject cname: __pyx_kp_u__2 */ - {__pyx_k_a, sizeof(__pyx_k_a), 0, 1, 1}, /* PyObject cname: __pyx_n_u_a */ - {__pyx_k_a1, sizeof(__pyx_k_a1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_a1 */ - {__pyx_k_all, sizeof(__pyx_k_all), 0, 1, 1}, /* PyObject cname: __pyx_n_u_all */ - {__pyx_k_all_quadratic, sizeof(__pyx_k_all_quadratic), 0, 1, 1}, /* PyObject cname: __pyx_n_u_all_quadratic */ - {__pyx_k_asyncio_coroutines, sizeof(__pyx_k_asyncio_coroutines), 0, 1, 1}, /* PyObject cname: __pyx_n_u_asyncio_coroutines */ - {__pyx_k_b, sizeof(__pyx_k_b), 0, 1, 1}, /* PyObject cname: __pyx_n_u_b */ - {__pyx_k_b1, sizeof(__pyx_k_b1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_b1 */ - {__pyx_k_c, sizeof(__pyx_k_c), 0, 1, 1}, /* PyObject cname: __pyx_n_u_c */ - {__pyx_k_c1, sizeof(__pyx_k_c1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_c1 */ - {__pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 1, 1}, /* PyObject cname: __pyx_n_u_cline_in_traceback */ - {__pyx_k_close, sizeof(__pyx_k_close), 0, 1, 
1}, /* PyObject cname: __pyx_n_u_close */ - {__pyx_k_curve, sizeof(__pyx_k_curve), 0, 1, 1}, /* PyObject cname: __pyx_n_u_curve */ - {__pyx_k_curve_to_quadratic, sizeof(__pyx_k_curve_to_quadratic), 0, 1, 1}, /* PyObject cname: __pyx_n_u_curve_to_quadratic */ - {__pyx_k_curves, sizeof(__pyx_k_curves), 0, 1, 1}, /* PyObject cname: __pyx_n_u_curves */ - {__pyx_k_curves_to_quadratic, sizeof(__pyx_k_curves_to_quadratic), 0, 1, 1}, /* PyObject cname: __pyx_n_u_curves_to_quadratic */ - {__pyx_k_curves_to_quadratic_line_503, sizeof(__pyx_k_curves_to_quadratic_line_503), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_curves_to_quadratic_line_503 */ - {__pyx_k_d, sizeof(__pyx_k_d), 0, 1, 1}, /* PyObject cname: __pyx_n_u_d */ - {__pyx_k_d1, sizeof(__pyx_k_d1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_d1 */ - {__pyx_k_delta_2, sizeof(__pyx_k_delta_2), 0, 1, 1}, /* PyObject cname: __pyx_n_u_delta_2 */ - {__pyx_k_delta_3, sizeof(__pyx_k_delta_3), 0, 1, 1}, /* PyObject cname: __pyx_n_u_delta_3 */ - {__pyx_k_disable, sizeof(__pyx_k_disable), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_disable */ - {__pyx_k_dt, sizeof(__pyx_k_dt), 0, 1, 1}, /* PyObject cname: __pyx_n_u_dt */ - {__pyx_k_enable, sizeof(__pyx_k_enable), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_enable */ - {__pyx_k_errors, sizeof(__pyx_k_errors), 0, 1, 1}, /* PyObject cname: __pyx_n_u_errors */ - {__pyx_k_fontTools_cu2qu_cu2qu, sizeof(__pyx_k_fontTools_cu2qu_cu2qu), 0, 1, 1}, /* PyObject cname: __pyx_n_u_fontTools_cu2qu_cu2qu */ - {__pyx_k_func, sizeof(__pyx_k_func), 0, 1, 1}, /* PyObject cname: __pyx_n_u_func */ - {__pyx_k_gc, sizeof(__pyx_k_gc), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_gc */ - {__pyx_k_i, sizeof(__pyx_k_i), 0, 1, 1}, /* PyObject cname: __pyx_n_u_i */ - {__pyx_k_imag, sizeof(__pyx_k_imag), 0, 1, 1}, /* PyObject cname: __pyx_n_u_imag */ - {__pyx_k_initializing, sizeof(__pyx_k_initializing), 0, 1, 1}, /* PyObject cname: __pyx_n_u_initializing */ - {__pyx_k_is_coroutine, sizeof(__pyx_k_is_coroutine), 0, 1, 1}, /* 
PyObject cname: __pyx_n_u_is_coroutine */ - {__pyx_k_isenabled, sizeof(__pyx_k_isenabled), 0, 1, 0}, /* PyObject cname: __pyx_kp_u_isenabled */ - {__pyx_k_isnan, sizeof(__pyx_k_isnan), 0, 1, 1}, /* PyObject cname: __pyx_n_u_isnan */ - {__pyx_k_l, sizeof(__pyx_k_l), 0, 1, 1}, /* PyObject cname: __pyx_n_u_l */ - {__pyx_k_last_i, sizeof(__pyx_k_last_i), 0, 1, 1}, /* PyObject cname: __pyx_n_u_last_i */ - {__pyx_k_main, sizeof(__pyx_k_main), 0, 1, 1}, /* PyObject cname: __pyx_n_u_main */ - {__pyx_k_math, sizeof(__pyx_k_math), 0, 1, 1}, /* PyObject cname: __pyx_n_u_math */ - {__pyx_k_max_err, sizeof(__pyx_k_max_err), 0, 1, 1}, /* PyObject cname: __pyx_n_u_max_err */ - {__pyx_k_max_errors, sizeof(__pyx_k_max_errors), 0, 1, 1}, /* PyObject cname: __pyx_n_u_max_errors */ - {__pyx_k_module, sizeof(__pyx_k_module), 0, 1, 1}, /* PyObject cname: __pyx_n_u_module */ - {__pyx_k_n, sizeof(__pyx_k_n), 0, 1, 1}, /* PyObject cname: __pyx_n_u_n */ - {__pyx_k_name, sizeof(__pyx_k_name), 0, 1, 1}, /* PyObject cname: __pyx_n_u_name */ - {__pyx_k_next, sizeof(__pyx_k_next), 0, 1, 1}, /* PyObject cname: __pyx_n_u_next */ - {__pyx_k_p, sizeof(__pyx_k_p), 0, 1, 1}, /* PyObject cname: __pyx_n_u_p */ - {__pyx_k_p0, sizeof(__pyx_k_p0), 0, 1, 1}, /* PyObject cname: __pyx_n_u_p0 */ - {__pyx_k_p1, sizeof(__pyx_k_p1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_p1 */ - {__pyx_k_p2, sizeof(__pyx_k_p2), 0, 1, 1}, /* PyObject cname: __pyx_n_u_p2 */ - {__pyx_k_p3, sizeof(__pyx_k_p3), 0, 1, 1}, /* PyObject cname: __pyx_n_u_p3 */ - {__pyx_k_pop, sizeof(__pyx_k_pop), 0, 1, 1}, /* PyObject cname: __pyx_n_u_pop */ - {__pyx_k_qualname, sizeof(__pyx_k_qualname), 0, 1, 1}, /* PyObject cname: __pyx_n_u_qualname */ - {__pyx_k_range, sizeof(__pyx_k_range), 0, 1, 1}, /* PyObject cname: __pyx_n_u_range */ - {__pyx_k_real, sizeof(__pyx_k_real), 0, 1, 1}, /* PyObject cname: __pyx_n_u_real */ - {__pyx_k_s, sizeof(__pyx_k_s), 0, 1, 1}, /* PyObject cname: __pyx_n_u_s */ - {__pyx_k_send, sizeof(__pyx_k_send), 0, 1, 1}, /* 
PyObject cname: __pyx_n_u_send */ - {__pyx_k_set_name, sizeof(__pyx_k_set_name), 0, 1, 1}, /* PyObject cname: __pyx_n_u_set_name */ - {__pyx_k_spec, sizeof(__pyx_k_spec), 0, 1, 1}, /* PyObject cname: __pyx_n_u_spec */ - {__pyx_k_spline, sizeof(__pyx_k_spline), 0, 1, 1}, /* PyObject cname: __pyx_n_u_spline */ - {__pyx_k_splines, sizeof(__pyx_k_splines), 0, 1, 1}, /* PyObject cname: __pyx_n_u_splines */ - {__pyx_k_split_cubic_into_n_gen, sizeof(__pyx_k_split_cubic_into_n_gen), 0, 1, 1}, /* PyObject cname: __pyx_n_u_split_cubic_into_n_gen */ - {__pyx_k_t1, sizeof(__pyx_k_t1), 0, 1, 1}, /* PyObject cname: __pyx_n_u_t1 */ - {__pyx_k_t1_2, sizeof(__pyx_k_t1_2), 0, 1, 1}, /* PyObject cname: __pyx_n_u_t1_2 */ - {__pyx_k_test, sizeof(__pyx_k_test), 0, 1, 1}, /* PyObject cname: __pyx_n_u_test */ - {__pyx_k_throw, sizeof(__pyx_k_throw), 0, 1, 1}, /* PyObject cname: __pyx_n_u_throw */ - {__pyx_k_value, sizeof(__pyx_k_value), 0, 1, 1}, /* PyObject cname: __pyx_n_u_value */ - {0, 0, 0, 0, 0} -}; -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry const *t, PyObject **target, const char* const* encoding_names); - -/* #### Code section: cached_builtins ### */ - -static int __Pyx_InitCachedBuiltins(__pyx_mstatetype *__pyx_mstate) { - CYTHON_UNUSED_VAR(__pyx_mstate); - __pyx_builtin_AttributeError = __Pyx_GetBuiltinName(__pyx_mstate->__pyx_n_u_AttributeError); if (!__pyx_builtin_AttributeError) __PYX_ERR(0, 20, __pyx_L1_error) - __pyx_builtin_ImportError = __Pyx_GetBuiltinName(__pyx_mstate->__pyx_n_u_ImportError); if (!__pyx_builtin_ImportError) __PYX_ERR(0, 20, __pyx_L1_error) - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_mstate->__pyx_n_u_range); if (!__pyx_builtin_range) __PYX_ERR(0, 169, __pyx_L1_error) - __pyx_builtin_ZeroDivisionError = __Pyx_GetBuiltinName(__pyx_mstate->__pyx_n_u_ZeroDivisionError); if (!__pyx_builtin_ZeroDivisionError) __PYX_ERR(0, 301, __pyx_L1_error) - __pyx_builtin_AssertionError = 
__Pyx_GetBuiltinName(__pyx_mstate->__pyx_n_u_AssertionError); if (!__pyx_builtin_AssertionError) __PYX_ERR(0, 543, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: cached_constants ### */ - -static int __Pyx_InitCachedConstants(__pyx_mstatetype *__pyx_mstate) { - __Pyx_RefNannyDeclarations - CYTHON_UNUSED_VAR(__pyx_mstate); - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - __Pyx_RefNannyFinishContext(); - return 0; -} -/* #### Code section: init_constants ### */ - -static int __Pyx_InitConstants(__pyx_mstatetype *__pyx_mstate) { - CYTHON_UNUSED_VAR(__pyx_mstate); - __pyx_mstate->__pyx_umethod_PyDict_Type_pop.type = (PyObject*)&PyDict_Type; - __pyx_mstate->__pyx_umethod_PyDict_Type_pop.method_name = &__pyx_mstate->__pyx_n_u_pop; - if (__Pyx_InitStrings(__pyx_string_tab, __pyx_mstate->__pyx_string_tab, __pyx_string_tab_encodings) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_mstate->__pyx_int_1 = PyLong_FromLong(1); if (unlikely(!__pyx_mstate->__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_int_2 = PyLong_FromLong(2); if (unlikely(!__pyx_mstate->__pyx_int_2)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_int_3 = PyLong_FromLong(3); if (unlikely(!__pyx_mstate->__pyx_int_3)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_int_4 = PyLong_FromLong(4); if (unlikely(!__pyx_mstate->__pyx_int_4)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_int_6 = PyLong_FromLong(6); if (unlikely(!__pyx_mstate->__pyx_int_6)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_mstate->__pyx_int_100 = PyLong_FromLong(100); if (unlikely(!__pyx_mstate->__pyx_int_100)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} -/* #### Code section: init_codeobjects ### */ -\ - typedef struct { - unsigned int argcount : 3; - unsigned int num_posonly_args : 1; - unsigned int num_kwonly_args : 1; - unsigned int nlocals : 5; - unsigned int flags : 10; - unsigned int first_line : 9; - unsigned int 
line_table_length : 13; - } __Pyx_PyCode_New_function_description; -/* NewCodeObj.proto */ -static PyObject* __Pyx_PyCode_New( - const __Pyx_PyCode_New_function_description descr, - PyObject * const *varnames, - PyObject *filename, - PyObject *funcname, - const char *line_table, - PyObject *tuple_dedup_map -); - - -static int __Pyx_CreateCodeObjects(__pyx_mstatetype *__pyx_mstate) { - PyObject* tuple_dedup_map = PyDict_New(); - if (unlikely(!tuple_dedup_map)) return -1; - { - const __Pyx_PyCode_New_function_description descr = {5, 0, 0, 19, (unsigned int)(CO_OPTIMIZED|CO_NEWLOCALS|CO_GENERATOR), 150, 2}; - PyObject* const varnames[] = {__pyx_mstate->__pyx_n_u_p0, __pyx_mstate->__pyx_n_u_p1, __pyx_mstate->__pyx_n_u_p2, __pyx_mstate->__pyx_n_u_p3, __pyx_mstate->__pyx_n_u_n, __pyx_mstate->__pyx_n_u_a1, __pyx_mstate->__pyx_n_u_b1, __pyx_mstate->__pyx_n_u_c1, __pyx_mstate->__pyx_n_u_d1, __pyx_mstate->__pyx_n_u_dt, __pyx_mstate->__pyx_n_u_delta_2, __pyx_mstate->__pyx_n_u_delta_3, __pyx_mstate->__pyx_n_u_i, __pyx_mstate->__pyx_n_u_a, __pyx_mstate->__pyx_n_u_b, __pyx_mstate->__pyx_n_u_c, __pyx_mstate->__pyx_n_u_d, __pyx_mstate->__pyx_n_u_t1, __pyx_mstate->__pyx_n_u_t1_2}; - __pyx_mstate_global->__pyx_codeobj_tab[0] = __Pyx_PyCode_New(descr, varnames, __pyx_mstate->__pyx_kp_u_Lib_fontTools_cu2qu_cu2qu_py, __pyx_mstate->__pyx_n_u_split_cubic_into_n_gen, __pyx_k__3, tuple_dedup_map); if (unlikely(!__pyx_mstate_global->__pyx_codeobj_tab[0])) goto bad; - } - { - const __Pyx_PyCode_New_function_description descr = {3, 0, 0, 7, (unsigned int)(CO_OPTIMIZED|CO_NEWLOCALS), 468, 97}; - PyObject* const varnames[] = {__pyx_mstate->__pyx_n_u_curve, __pyx_mstate->__pyx_n_u_max_err, __pyx_mstate->__pyx_n_u_all_quadratic, __pyx_mstate->__pyx_n_u_n, __pyx_mstate->__pyx_n_u_spline, __pyx_mstate->__pyx_n_u_p, __pyx_mstate->__pyx_n_u_s}; - __pyx_mstate_global->__pyx_codeobj_tab[1] = __Pyx_PyCode_New(descr, varnames, __pyx_mstate->__pyx_kp_u_Lib_fontTools_cu2qu_cu2qu_py, 
__pyx_mstate->__pyx_n_u_curve_to_quadratic, __pyx_k_AWBc_U_U_3fBa_AWCy_7_2QgQgT_a_Q, tuple_dedup_map); if (unlikely(!__pyx_mstate_global->__pyx_codeobj_tab[1])) goto bad; - } - { - const __Pyx_PyCode_New_function_description descr = {3, 0, 0, 13, (unsigned int)(CO_OPTIMIZED|CO_NEWLOCALS), 503, 211}; - PyObject* const varnames[] = {__pyx_mstate->__pyx_n_u_curves, __pyx_mstate->__pyx_n_u_max_errors, __pyx_mstate->__pyx_n_u_all_quadratic, __pyx_mstate->__pyx_n_u_l, __pyx_mstate->__pyx_n_u_last_i, __pyx_mstate->__pyx_n_u_i, __pyx_mstate->__pyx_n_u_splines, __pyx_mstate->__pyx_n_u_n, __pyx_mstate->__pyx_n_u_spline, __pyx_mstate->__pyx_n_u_curve, __pyx_mstate->__pyx_n_u_p, __pyx_mstate->__pyx_n_u_spline, __pyx_mstate->__pyx_n_u_s}; - __pyx_mstate_global->__pyx_codeobj_tab[2] = __Pyx_PyCode_New(descr, varnames, __pyx_mstate->__pyx_kp_u_Lib_fontTools_cu2qu_cu2qu_py, __pyx_mstate->__pyx_n_u_curves_to_quadratic, __pyx_k_J_Qawb_4uG4y_3a_3c_1A_avRq_T_AV, tuple_dedup_map); if (unlikely(!__pyx_mstate_global->__pyx_codeobj_tab[2])) goto bad; - } - Py_DECREF(tuple_dedup_map); - return 0; - bad: - Py_DECREF(tuple_dedup_map); - return -1; -} -/* #### Code section: init_globals ### */ - -static int __Pyx_InitGlobals(void) { - /* PythonCompatibility.init */ - if (likely(__Pyx_init_co_variables() == 0)); else - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - /* AssertionsEnabled.init */ - if (likely(__Pyx_init_assertions_enabled() == 0)); else - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - /* CachedMethodType.init */ - #if CYTHON_COMPILING_IN_LIMITED_API -{ - PyObject *typesModule=NULL; - typesModule = PyImport_ImportModule("types"); - if (typesModule) { - __pyx_mstate_global->__Pyx_CachedMethodType = PyObject_GetAttrString(typesModule, "MethodType"); - Py_DECREF(typesModule); - } -} // error handling follows -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - return 0; - __pyx_L1_error:; - return -1; -} -/* #### 
Code section: cleanup_globals ### */ -/* #### Code section: cleanup_module ### */ -/* #### Code section: main_method ### */ -/* #### Code section: utility_code_pragmas ### */ -#ifdef _MSC_VER -#pragma warning( push ) -/* Warning 4127: conditional expression is constant - * Cython uses constant conditional expressions to allow in inline functions to be optimized at - * compile-time, so this warning is not useful - */ -#pragma warning( disable : 4127 ) -#endif - - - -/* #### Code section: utility_code_def ### */ - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } - for (i=0; i<n; i++) { - if (__Pyx_PyErr_GivenExceptionMatches(exc_type, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err) { - int result; - PyObject *exc_type; -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *current_exception = tstate->current_exception; - if (unlikely(!current_exception)) return 0; - exc_type = (PyObject*) Py_TYPE(current_exception); - if (exc_type == err) return 1; -#else - exc_type = tstate->curexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; -#endif - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(exc_type); - #endif - if (unlikely(PyTuple_Check(err))) { - result = __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - } else { - result = __Pyx_PyErr_GivenExceptionMatches(exc_type, err); - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(exc_type); - #endif - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value,
PyObject *tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject *tmp_value; - assert(type == NULL || (value != NULL && type == (PyObject*) Py_TYPE(value))); - if (value) { - #if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(((PyBaseExceptionObject*) value)->traceback != tb)) - #endif - PyException_SetTraceback(value, tb); - } - tmp_value = tstate->current_exception; - tstate->current_exception = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#endif -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyObject* exc_value; - exc_value = tstate->current_exception; - tstate->current_exception = 0; - *value = exc_value; - *type = NULL; - *tb = NULL; - if (exc_value) { - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - #if CYTHON_COMPILING_IN_CPYTHON - *tb = ((PyBaseExceptionObject*) exc_value)->traceback; - Py_XINCREF(*tb); - #else - *tb = PyException_GetTraceback(exc_value); - #endif - } -#else - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#endif -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* PyObjectGetAttrStrNoError */ -#if __PYX_LIMITED_VERSION_HEX < 0x030d0000 -static void 
__Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -#endif -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - (void) PyObject_GetOptionalAttr(obj, attr_name, &result); - return result; -#else -#if CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_TYPE_SLOTS - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -#endif -} - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStrNoError(__pyx_mstate_global->__pyx_b, name); - if (unlikely(!result) && !PyErr_Occurred()) { - PyErr_Format(PyExc_NameError, - "name '%U' is not defined", name); - } - return result; -} - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL && !CYTHON_VECTORCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject *const *args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. 
- */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject *const *args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; - PyObject *kwdefs; - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? PyDict_Size(kwargs) : 0; - if (unlikely(Py_EnterRecursiveCall(" while calling a Python object"))) { - return NULL; - } - if ( - co->co_kwonlyargcount == 0 && - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = 
PyFunction_GET_CLOSURE(func); - kwdefs = PyFunction_GET_KW_DEFAULTS(func); - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall(" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = __Pyx_CyOrPyCFunction_GET_FUNCTION(func); - self = __Pyx_CyOrPyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall(" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectFastCall */ -#if PY_VERSION_HEX < 0x03090000 || CYTHON_COMPILING_IN_LIMITED_API -static PyObject* __Pyx_PyObject_FastCall_fallback(PyObject *func, PyObject * const*args, size_t nargs, PyObject *kwargs) { - PyObject *argstuple; - PyObject *result = 0; - size_t i; - argstuple = PyTuple_New((Py_ssize_t)nargs); - if 
(unlikely(!argstuple)) return NULL; - for (i = 0; i < nargs; i++) { - Py_INCREF(args[i]); - if (__Pyx_PyTuple_SET_ITEM(argstuple, (Py_ssize_t)i, args[i]) != (0)) goto bad; - } - result = __Pyx_PyObject_Call(func, argstuple, kwargs); - bad: - Py_DECREF(argstuple); - return result; -} -#endif -#if CYTHON_VECTORCALL && !CYTHON_COMPILING_IN_LIMITED_API - #if PY_VERSION_HEX < 0x03090000 - #define __Pyx_PyVectorcall_Function(callable) _PyVectorcall_Function(callable) - #elif CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE vectorcallfunc __Pyx_PyVectorcall_Function(PyObject *callable) { - PyTypeObject *tp = Py_TYPE(callable); - #if defined(__Pyx_CyFunction_USED) - if (__Pyx_CyFunction_CheckExact(callable)) { - return __Pyx_CyFunction_func_vectorcall(callable); - } - #endif - if (!PyType_HasFeature(tp, Py_TPFLAGS_HAVE_VECTORCALL)) { - return NULL; - } - assert(PyCallable_Check(callable)); - Py_ssize_t offset = tp->tp_vectorcall_offset; - assert(offset > 0); - vectorcallfunc ptr; - memcpy(&ptr, (char *) callable + offset, sizeof(ptr)); - return ptr; -} - #else - #define __Pyx_PyVectorcall_Function(callable) PyVectorcall_Function(callable) - #endif -#endif -static CYTHON_INLINE PyObject* __Pyx_PyObject_FastCallDict(PyObject *func, PyObject *const *args, size_t _nargs, PyObject *kwargs) { - Py_ssize_t nargs = __Pyx_PyVectorcall_NARGS(_nargs); -#if CYTHON_COMPILING_IN_CPYTHON - if (nargs == 0 && kwargs == NULL) { - if (__Pyx_CyOrPyCFunction_Check(func) && likely( __Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_NOARGS)) - return __Pyx_PyObject_CallMethO(func, NULL); - } - else if (nargs == 1 && kwargs == NULL) { - if (__Pyx_CyOrPyCFunction_Check(func) && likely( __Pyx_CyOrPyCFunction_GET_FLAGS(func) & METH_O)) - return __Pyx_PyObject_CallMethO(func, args[0]); - } -#endif - #if PY_VERSION_HEX < 0x030800B1 - #if CYTHON_FAST_PYCCALL - if (PyCFunction_Check(func)) { - if (kwargs) { - return _PyCFunction_FastCallDict(func, args, nargs, kwargs); - } else { - return 
_PyCFunction_FastCallKeywords(func, args, nargs, NULL); - } - } - if (!kwargs && __Pyx_IS_TYPE(func, &PyMethodDescr_Type)) { - return _PyMethodDescr_FastCallKeywords(func, args, nargs, NULL); - } - #endif - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs); - } - #endif - #endif - if (kwargs == NULL) { - #if CYTHON_VECTORCALL && !CYTHON_COMPILING_IN_LIMITED_API - vectorcallfunc f = __Pyx_PyVectorcall_Function(func); - if (f) { - return f(func, args, _nargs, NULL); - } - #elif defined(__Pyx_CyFunction_USED) && CYTHON_BACKPORT_VECTORCALL - if (__Pyx_CyFunction_CheckExact(func)) { - __pyx_vectorcallfunc f = __Pyx_CyFunction_func_vectorcall(func); - if (f) return f(func, args, _nargs, NULL); - } - #elif CYTHON_COMPILING_IN_LIMITED_API && CYTHON_VECTORCALL - return PyObject_Vectorcall(func, args, _nargs, NULL); - #endif - } - if (nargs == 0) { - return __Pyx_PyObject_Call(func, __pyx_mstate_global->__pyx_empty_tuple, kwargs); - } - #if PY_VERSION_HEX >= 0x03090000 && !CYTHON_COMPILING_IN_LIMITED_API - return PyObject_VectorcallDict(func, args, (size_t)nargs, kwargs); - #else - return __Pyx_PyObject_FastCall_fallback(func, args, (size_t)nargs, kwargs); - #endif -} - -/* PyLongCompare */ -static CYTHON_INLINE int __Pyx_PyLong_BoolEqObjC(PyObject *op1, PyObject *op2, long intval, long inplace) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(inplace); - if (op1 == op2) { - return 1; - } - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - int unequal; - unsigned long uintval; - Py_ssize_t size = __Pyx_PyLong_DigitCount(op1); - const digit* digits = __Pyx_PyLong_Digits(op1); - if (intval == 0) { - return (__Pyx_PyLong_IsZero(op1) == 1); - } else if (intval < 0) { - if (__Pyx_PyLong_IsNonNeg(op1)) - return 0; - intval = -intval; - } else { - if (__Pyx_PyLong_IsNeg(op1)) - return 0; - } - uintval = (unsigned long) intval; -#if PyLong_SHIFT * 4 < SIZEOF_LONG*8 - if (uintval >> 
(PyLong_SHIFT * 4)) { - unequal = (size != 5) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[4] != ((uintval >> (4 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 3 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 3)) { - unequal = (size != 4) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[3] != ((uintval >> (3 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 2 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 2)) { - unequal = (size != 3) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)) | (digits[2] != ((uintval >> (2 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif -#if PyLong_SHIFT * 1 < SIZEOF_LONG*8 - if (uintval >> (PyLong_SHIFT * 1)) { - unequal = (size != 2) || (digits[0] != (uintval & (unsigned long) PyLong_MASK)) - | (digits[1] != ((uintval >> (1 * PyLong_SHIFT)) & (unsigned long) PyLong_MASK)); - } else -#endif - unequal = (size != 1) || (((unsigned long) digits[0]) != (uintval & (unsigned long) PyLong_MASK)); - return (unequal == 0); - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = __Pyx_PyFloat_AS_DOUBLE(op1); - return ((double)a == (double)b); - } - return __Pyx_PyObject_IsTrueAndDecref( - PyObject_RichCompare(op1, op2, Py_EQ)); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many 
values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? "" : "s"); -} - -/* IterFinish */ -static CYTHON_INLINE int __Pyx_IterFinish(void) { - PyObject* exc_type; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (unlikely(exc_type)) { - if (unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) - return -1; - __Pyx_PyErr_Clear(); - return 0; - } - return 0; -} - -/* UnpackItemEndCheck */ -static int __Pyx_IternextUnpackEndCheck(PyObject *retval, Py_ssize_t expected) { - if (unlikely(retval)) { - Py_DECREF(retval); - __Pyx_RaiseTooManyValuesError(expected); - return -1; - } - return __Pyx_IterFinish(); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (unlikely(!j)) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE && !CYTHON_AVOID_BORROWED_REFS && !CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyLong_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { 
-#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyLong_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - return __Pyx_PyList_GetItemRef(o, n); - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_subscript) { - PyObject *r, *key = PyLong_FromSsize_t(i); - if (unlikely(!key)) return NULL; - r = mm->mp_subscript(o, key); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return sm->sq_item(o, i); - } - } -#else - if (is_list || !PyMapping_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyLong_FromSsize_t(i)); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? 
__PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!__pyx_m)) { - if (!PyErr_Occurred()) - PyErr_SetNone(PyExc_NameError); - return NULL; - } - result = PyObject_GetAttr(__pyx_m, name); - if (likely(result)) { - return result; - } - PyErr_Clear(); -#elif CYTHON_AVOID_BORROWED_REFS || CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - if (unlikely(__Pyx_PyDict_GetItemRef(__pyx_mstate_global->__pyx_d, name, &result) == -1)) PyErr_Clear(); - __PYX_UPDATE_DICT_CACHE(__pyx_mstate_global->__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return result; - } -#else - result = _PyDict_GetItem_KnownHash(__pyx_mstate_global->__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_mstate_global->__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* TupleAndListFromArray */ -#if !CYTHON_COMPILING_IN_CPYTHON && CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - Py_ssize_t i; - if (n <= 0) { - return __Pyx_NewRef(__pyx_mstate_global->__pyx_empty_tuple); - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - for (i = 0; i < n; i++) { - if 
(unlikely(__Pyx_PyTuple_SET_ITEM(res, i, src[i]) < 0)) { - Py_DECREF(res); - return NULL; - } - Py_INCREF(src[i]); - } - return res; -} -#elif CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE void __Pyx_copy_object_array(PyObject *const *CYTHON_RESTRICT src, PyObject** CYTHON_RESTRICT dest, Py_ssize_t length) { - PyObject *v; - Py_ssize_t i; - for (i = 0; i < length; i++) { - v = dest[i] = src[i]; - Py_INCREF(v); - } -} -static CYTHON_INLINE PyObject * -__Pyx_PyTuple_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return __Pyx_NewRef(__pyx_mstate_global->__pyx_empty_tuple); - } - res = PyTuple_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyTupleObject*)res)->ob_item, n); - return res; -} -static CYTHON_INLINE PyObject * -__Pyx_PyList_FromArray(PyObject *const *src, Py_ssize_t n) -{ - PyObject *res; - if (n <= 0) { - return PyList_New(0); - } - res = PyList_New(n); - if (unlikely(res == NULL)) return NULL; - __Pyx_copy_object_array(src, ((PyListObject*)res)->ob_item, n); - return res; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_GRAAL ||\ - !(CYTHON_ASSUME_SAFE_SIZE && CYTHON_ASSUME_SAFE_MACROS) - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = 
((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_GRAAL - return PyObject_RichCompareBool(s1, s2, equals); -#else - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length, length2; - int kind; - void *data1, *data2; - #if !CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - #endif - length = __Pyx_PyUnicode_GET_LENGTH(s1); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely(length < 0)) return -1; - #endif - length2 = __Pyx_PyUnicode_GET_LENGTH(s2); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely(length2 < 0)) return -1; - #endif - if (length != length2) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = 
__Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - return (equals == Py_EQ) ? (result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - return (equals == Py_EQ); -return_ne: - return (equals == Py_NE); -#endif -} - -/* fastcall */ -#if CYTHON_METH_FASTCALL -static CYTHON_INLINE PyObject * __Pyx_GetKwValue_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues, PyObject *s) -{ - Py_ssize_t i, n = __Pyx_PyTuple_GET_SIZE(kwnames); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely(n == -1)) return NULL; - #endif - for (i = 0; i < n; i++) - { - PyObject *namei = __Pyx_PyTuple_GET_ITEM(kwnames, i); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely(!namei)) return NULL; - #endif - if (s == namei) return kwvalues[i]; - } - for (i = 0; i < n; i++) - { - PyObject *namei = __Pyx_PyTuple_GET_ITEM(kwnames, i); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely(!namei)) return NULL; - #endif - int eq = __Pyx_PyUnicode_Equals(s, namei, Py_EQ); - if (unlikely(eq != 0)) { - if (unlikely(eq < 0)) return NULL; - return kwvalues[i]; - } - } - return NULL; -} -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030d0000 || CYTHON_COMPILING_IN_LIMITED_API -CYTHON_UNUSED static PyObject *__Pyx_KwargsAsDict_FASTCALL(PyObject *kwnames, PyObject *const *kwvalues) { - Py_ssize_t i, nkwargs; - PyObject *dict; -#if !CYTHON_ASSUME_SAFE_SIZE - nkwargs = PyTuple_Size(kwnames); - if (unlikely(nkwargs < 0)) return NULL; -#else - nkwargs = PyTuple_GET_SIZE(kwnames); -#endif - dict = 
PyDict_New(); - if (unlikely(!dict)) - return NULL; - for (i=0; itype, *target->method_name); - if (unlikely(!method)) - return -1; - result = method; -#if CYTHON_COMPILING_IN_CPYTHON - if (likely(__Pyx_TypeCheck(method, &PyMethodDescr_Type))) - { - PyMethodDescrObject *descr = (PyMethodDescrObject*) method; - target->func = descr->d_method->ml_meth; - target->flag = descr->d_method->ml_flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_STACKLESS); - } else -#endif -#if CYTHON_COMPILING_IN_PYPY -#else - if (PyCFunction_Check(method)) -#endif - { - PyObject *self; - int self_found; -#if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_PYPY - self = PyObject_GetAttrString(method, "__self__"); - if (!self) { - PyErr_Clear(); - } -#else - self = PyCFunction_GET_SELF(method); -#endif - self_found = (self && self != Py_None); -#if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_PYPY - Py_XDECREF(self); -#endif - if (self_found) { - PyObject *unbound_method = PyCFunction_New(&__Pyx_UnboundCMethod_Def, method); - if (unlikely(!unbound_method)) return -1; - Py_DECREF(method); - result = unbound_method; - } - } -#if !CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - if (unlikely(target->method)) { - Py_DECREF(result); - } else -#endif - target->method = result; - return 0; -} - -/* CallUnboundCMethod2 */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject *__Pyx_CallUnboundCMethod2(__Pyx_CachedCFunction *cfunc, PyObject *self, PyObject *arg1, PyObject *arg2) { - int was_initialized = __Pyx_CachedCFunction_GetAndSetInitializing(cfunc); - if (likely(was_initialized == 2 && cfunc->func)) { - PyObject *args[2] = {arg1, arg2}; - if (cfunc->flag == METH_FASTCALL) { - return __Pyx_CallCFunctionFast(cfunc, self, args, 2); - } - if (cfunc->flag == (METH_FASTCALL | METH_KEYWORDS)) - return __Pyx_CallCFunctionFastWithKeywords(cfunc, self, args, 2, NULL); - } -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - else if (unlikely(was_initialized == 1)) { - 
__Pyx_CachedCFunction tmp_cfunc = { -#ifndef __cplusplus - 0 -#endif - }; - tmp_cfunc.type = cfunc->type; - tmp_cfunc.method_name = cfunc->method_name; - return __Pyx__CallUnboundCMethod2(&tmp_cfunc, self, arg1, arg2); - } -#endif - PyObject *result = __Pyx__CallUnboundCMethod2(cfunc, self, arg1, arg2); - __Pyx_CachedCFunction_SetFinishedInitializing(cfunc); - return result; -} -#endif -static PyObject* __Pyx__CallUnboundCMethod2(__Pyx_CachedCFunction* cfunc, PyObject* self, PyObject* arg1, PyObject* arg2){ - if (unlikely(!cfunc->func && !cfunc->method) && unlikely(__Pyx_TryUnpackUnboundCMethod(cfunc) < 0)) return NULL; -#if CYTHON_COMPILING_IN_CPYTHON - if (cfunc->func && (cfunc->flag & METH_VARARGS)) { - PyObject *result = NULL; - PyObject *args = PyTuple_New(2); - if (unlikely(!args)) return NULL; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - if (cfunc->flag & METH_KEYWORDS) - result = __Pyx_CallCFunctionWithKeywords(cfunc, self, args, NULL); - else - result = __Pyx_CallCFunction(cfunc, self, args); - Py_DECREF(args); - return result; - } -#endif - { - PyObject *args[4] = {NULL, self, arg1, arg2}; - return __Pyx_PyObject_FastCall(cfunc->method, args+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); - } -} - -/* ParseKeywords */ -static int __Pyx_ValidateDuplicatePosArgs( - PyObject *kwds, - PyObject ** const argnames[], - PyObject ** const *first_kw_arg, - const char* function_name) -{ - PyObject ** const *name = argnames; - while (name != first_kw_arg) { - PyObject *key = **name; - int found = PyDict_Contains(kwds, key); - if (unlikely(found)) { - if (found == 1) __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; - } - name++; - } - return 0; -bad: - return -1; -} -#if CYTHON_USE_UNICODE_INTERNALS -static CYTHON_INLINE int __Pyx_UnicodeKeywordsEqual(PyObject *s1, PyObject *s2) { - int kind; - Py_ssize_t len = PyUnicode_GET_LENGTH(s1); - if (len != PyUnicode_GET_LENGTH(s2)) return 0; - kind 
= PyUnicode_KIND(s1); - if (kind != PyUnicode_KIND(s2)) return 0; - const void *data1 = PyUnicode_DATA(s1); - const void *data2 = PyUnicode_DATA(s2); - return (memcmp(data1, data2, (size_t) len * (size_t) kind) == 0); -} -#endif -static int __Pyx_MatchKeywordArg_str( - PyObject *key, - PyObject ** const argnames[], - PyObject ** const *first_kw_arg, - size_t *index_found, - const char *function_name) -{ - PyObject ** const *name; - #if CYTHON_USE_UNICODE_INTERNALS - Py_hash_t key_hash = ((PyASCIIObject*)key)->hash; - if (unlikely(key_hash == -1)) { - key_hash = PyObject_Hash(key); - if (unlikely(key_hash == -1)) - goto bad; - } - #endif - name = first_kw_arg; - while (*name) { - PyObject *name_str = **name; - #if CYTHON_USE_UNICODE_INTERNALS - if (key_hash == ((PyASCIIObject*)name_str)->hash && __Pyx_UnicodeKeywordsEqual(name_str, key)) { - *index_found = (size_t) (name - argnames); - return 1; - } - #else - #if CYTHON_ASSUME_SAFE_SIZE - if (PyUnicode_GET_LENGTH(name_str) == PyUnicode_GET_LENGTH(key)) - #endif - { - int cmp = PyUnicode_Compare(name_str, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - *index_found = (size_t) (name - argnames); - return 1; - } - } - #endif - name++; - } - name = argnames; - while (name != first_kw_arg) { - PyObject *name_str = **name; - #if CYTHON_USE_UNICODE_INTERNALS - if (unlikely(key_hash == ((PyASCIIObject*)name_str)->hash)) { - if (__Pyx_UnicodeKeywordsEqual(name_str, key)) - goto arg_passed_twice; - } - #else - #if CYTHON_ASSUME_SAFE_SIZE - if (PyUnicode_GET_LENGTH(name_str) == PyUnicode_GET_LENGTH(key)) - #endif - { - if (unlikely(name_str == key)) goto arg_passed_twice; - int cmp = PyUnicode_Compare(name_str, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - } - #endif - name++; - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -bad: - return -1; -} -static int __Pyx_MatchKeywordArg_nostr( - 
PyObject *key, - PyObject ** const argnames[], - PyObject ** const *first_kw_arg, - size_t *index_found, - const char *function_name) -{ - PyObject ** const *name; - if (unlikely(!PyUnicode_Check(key))) goto invalid_keyword_type; - name = first_kw_arg; - while (*name) { - int cmp = PyObject_RichCompareBool(**name, key, Py_EQ); - if (cmp == 1) { - *index_found = (size_t) (name - argnames); - return 1; - } - if (unlikely(cmp == -1)) goto bad; - name++; - } - name = argnames; - while (name != first_kw_arg) { - int cmp = PyObject_RichCompareBool(**name, key, Py_EQ); - if (unlikely(cmp != 0)) { - if (cmp == 1) goto arg_passed_twice; - else goto bad; - } - name++; - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -bad: - return -1; -} -static CYTHON_INLINE int __Pyx_MatchKeywordArg( - PyObject *key, - PyObject ** const argnames[], - PyObject ** const *first_kw_arg, - size_t *index_found, - const char *function_name) -{ - return likely(PyUnicode_CheckExact(key)) ? 
- __Pyx_MatchKeywordArg_str(key, argnames, first_kw_arg, index_found, function_name) : - __Pyx_MatchKeywordArg_nostr(key, argnames, first_kw_arg, index_found, function_name); -} -static void __Pyx_RejectUnknownKeyword( - PyObject *kwds, - PyObject ** const argnames[], - PyObject ** const *first_kw_arg, - const char *function_name) -{ - Py_ssize_t pos = 0; - PyObject *key = NULL; - __Pyx_BEGIN_CRITICAL_SECTION(kwds); - while (PyDict_Next(kwds, &pos, &key, NULL)) { - PyObject** const *name = first_kw_arg; - while (*name && (**name != key)) name++; - if (!*name) { - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(key); - #endif - size_t index_found = 0; - int cmp = __Pyx_MatchKeywordArg(key, argnames, first_kw_arg, &index_found, function_name); - if (cmp != 1) { - if (cmp == 0) { - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(key); - #endif - break; - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(key); - #endif - } - } - __Pyx_END_CRITICAL_SECTION(); - assert(PyErr_Occurred()); -} -static int __Pyx_ParseKeywordDict( - PyObject *kwds, - PyObject ** const argnames[], - PyObject *values[], - Py_ssize_t num_pos_args, - Py_ssize_t num_kwargs, - const char* function_name, - int ignore_unknown_kwargs) -{ - PyObject** const *name; - PyObject** const *first_kw_arg = argnames + num_pos_args; - Py_ssize_t extracted = 0; -#if !CYTHON_COMPILING_IN_PYPY || defined(PyArg_ValidateKeywordArguments) - if (unlikely(!PyArg_ValidateKeywordArguments(kwds))) return -1; -#endif - name = first_kw_arg; - while (*name && num_kwargs > extracted) { - PyObject * key = **name; - PyObject *value; - int found = 0; - #if __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - found = PyDict_GetItemRef(kwds, key, &value); - #else - value = PyDict_GetItemWithError(kwds, key); - if (value) { - Py_INCREF(value); - found = 1; - } else { - if (unlikely(PyErr_Occurred())) goto bad; - } - #endif - if (found) { - if 
(unlikely(found < 0)) goto bad; - values[name-argnames] = value; - extracted++; - } - name++; - } - if (num_kwargs > extracted) { - if (ignore_unknown_kwargs) { - if (unlikely(__Pyx_ValidateDuplicatePosArgs(kwds, argnames, first_kw_arg, function_name) == -1)) - goto bad; - } else { - __Pyx_RejectUnknownKeyword(kwds, argnames, first_kw_arg, function_name); - goto bad; - } - } - return 0; -bad: - return -1; -} -static int __Pyx_ParseKeywordDictToDict( - PyObject *kwds, - PyObject ** const argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject** const *name; - PyObject** const *first_kw_arg = argnames + num_pos_args; - Py_ssize_t len; -#if !CYTHON_COMPILING_IN_PYPY || defined(PyArg_ValidateKeywordArguments) - if (unlikely(!PyArg_ValidateKeywordArguments(kwds))) return -1; -#endif - if (PyDict_Update(kwds2, kwds) < 0) goto bad; - name = first_kw_arg; - while (*name) { - PyObject *key = **name; - PyObject *value; -#if !CYTHON_COMPILING_IN_LIMITED_API && (PY_VERSION_HEX >= 0x030d00A2 || defined(PyDict_Pop)) - int found = PyDict_Pop(kwds2, key, &value); - if (found) { - if (unlikely(found < 0)) goto bad; - values[name-argnames] = value; - } -#elif __PYX_LIMITED_VERSION_HEX >= 0x030d0000 - int found = PyDict_GetItemRef(kwds2, key, &value); - if (found) { - if (unlikely(found < 0)) goto bad; - values[name-argnames] = value; - if (unlikely(PyDict_DelItem(kwds2, key) < 0)) goto bad; - } -#else - #if CYTHON_COMPILING_IN_CPYTHON - value = _PyDict_Pop(kwds2, key, kwds2); - #else - value = __Pyx_CallUnboundCMethod2(&__pyx_mstate_global->__pyx_umethod_PyDict_Type_pop, kwds2, key, kwds2); - #endif - if (value == kwds2) { - Py_DECREF(value); - } else { - if (unlikely(!value)) goto bad; - values[name-argnames] = value; - } -#endif - name++; - } - len = PyDict_Size(kwds2); - if (len > 0) { - return __Pyx_ValidateDuplicatePosArgs(kwds, argnames, first_kw_arg, function_name); - } else if (unlikely(len == -1)) { - goto 
bad; - } - return 0; -bad: - return -1; -} -static int __Pyx_ParseKeywordsTuple( - PyObject *kwds, - PyObject * const *kwvalues, - PyObject ** const argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - Py_ssize_t num_kwargs, - const char* function_name, - int ignore_unknown_kwargs) -{ - PyObject *key = NULL; - PyObject** const * name; - PyObject** const *first_kw_arg = argnames + num_pos_args; - for (Py_ssize_t pos = 0; pos < num_kwargs; pos++) { -#if CYTHON_AVOID_BORROWED_REFS - key = __Pyx_PySequence_ITEM(kwds, pos); -#else - key = __Pyx_PyTuple_GET_ITEM(kwds, pos); -#endif -#if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely(!key)) goto bad; -#endif - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - PyObject *value = kwvalues[pos]; - values[name-argnames] = __Pyx_NewRef(value); - } else { - size_t index_found = 0; - int cmp = __Pyx_MatchKeywordArg(key, argnames, first_kw_arg, &index_found, function_name); - if (cmp == 1) { - PyObject *value = kwvalues[pos]; - values[index_found] = __Pyx_NewRef(value); - } else { - if (unlikely(cmp == -1)) goto bad; - if (kwds2) { - PyObject *value = kwvalues[pos]; - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else if (!ignore_unknown_kwargs) { - goto invalid_keyword; - } - } - } - #if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(key); - key = NULL; - #endif - } - return 0; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - "%s() got an unexpected keyword argument '%U'", - function_name, key); - goto bad; -bad: - #if CYTHON_AVOID_BORROWED_REFS - Py_XDECREF(key); - #endif - return -1; -} -static int __Pyx_ParseKeywords( - PyObject *kwds, - PyObject * const *kwvalues, - PyObject ** const argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - Py_ssize_t num_kwargs, - const char* function_name, - int ignore_unknown_kwargs) -{ - if (CYTHON_METH_FASTCALL && likely(PyTuple_Check(kwds))) - return __Pyx_ParseKeywordsTuple(kwds, kwvalues, 
argnames, kwds2, values, num_pos_args, num_kwargs, function_name, ignore_unknown_kwargs); - else if (kwds2) - return __Pyx_ParseKeywordDictToDict(kwds, argnames, kwds2, values, num_pos_args, function_name); - else - return __Pyx_ParseKeywordDict(kwds, argnames, values, num_pos_args, num_kwargs, function_name, ignore_unknown_kwargs); -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type = NULL, *local_value, *local_tb = NULL; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030C0000 - local_value = tstate->current_exception; - tstate->current_exception = 0; - #else - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; - #endif -#elif __PYX_LIMITED_VERSION_HEX > 0x030C0000 - local_value = PyErr_GetRaisedException(); -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif -#if __PYX_LIMITED_VERSION_HEX > 0x030C0000 - if (likely(local_value)) { - local_type = (PyObject*) 
Py_TYPE(local_value); - Py_INCREF(local_type); - local_tb = PyException_GetTraceback(local_value); - } -#else - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } -#endif // __PYX_LIMITED_VERSION_HEX > 0x030C0000 - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - #if PY_VERSION_HEX >= 0x030B00a4 - tmp_value = exc_info->exc_value; - exc_info->exc_value = local_value; - tmp_type = NULL; - tmp_tb = NULL; - Py_XDECREF(local_type); - Py_XDECREF(local_tb); - #else - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - #endif - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#elif __PYX_LIMITED_VERSION_HEX >= 0x030b0000 - PyErr_SetHandledException(local_value); - Py_XDECREF(local_value); - Py_XDECREF(local_type); - Py_XDECREF(local_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -#if __PYX_LIMITED_VERSION_HEX <= 0x030C0000 -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -#endif -} - -/* pep479 */ -static void __Pyx_Generator_Replace_StopIteration(int in_async_gen) { - PyObject *exc, *val, *tb, *cur_exc, 
*new_exc; - __Pyx_PyThreadState_declare - int is_async_stopiteration = 0; - CYTHON_MAYBE_UNUSED_VAR(in_async_gen); - __Pyx_PyThreadState_assign - cur_exc = __Pyx_PyErr_CurrentExceptionType(); - if (likely(!__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopIteration))) { - if (in_async_gen && unlikely(__Pyx_PyErr_GivenExceptionMatches(cur_exc, PyExc_StopAsyncIteration))) { - is_async_stopiteration = 1; - } else { - return; - } - } - __Pyx_GetException(&exc, &val, &tb); - Py_XDECREF(exc); - Py_XDECREF(tb); - new_exc = PyObject_CallFunction(PyExc_RuntimeError, "s", - is_async_stopiteration ? "async generator raised StopAsyncIteration" : - in_async_gen ? "async generator raised StopIteration" : - "generator raised StopIteration"); - if (!new_exc) { - Py_XDECREF(val); - return; - } - PyException_SetCause(new_exc, val); // steals ref to val - PyErr_SetObject(PyExc_RuntimeError, new_exc); -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK && CYTHON_FAST_THREAD_STATE -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_value == NULL || exc_info->exc_value == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - PyObject *exc_value = exc_info->exc_value; - if (exc_value == NULL || exc_value == Py_None) { - *value = NULL; - *type = NULL; - *tb = NULL; - } else { - *value = exc_value; - Py_INCREF(*value); - *type = (PyObject*) Py_TYPE(exc_value); - Py_INCREF(*type); - *tb = PyException_GetTraceback(exc_value); - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info 
= __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); - #endif -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - #if CYTHON_USE_EXC_INFO_STACK && PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - PyObject *tmp_value = exc_info->exc_value; - exc_info->exc_value = value; - Py_XDECREF(tmp_value); - Py_XDECREF(type); - Py_XDECREF(tb); - #else - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); - #endif -} -#endif - -/* IterNextPlain */ -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x030A0000 -static PyObject *__Pyx_GetBuiltinNext_LimitedAPI(void) { - if (unlikely(!__pyx_mstate_global->__Pyx_GetBuiltinNext_LimitedAPI_cache)) - __pyx_mstate_global->__Pyx_GetBuiltinNext_LimitedAPI_cache = __Pyx_GetBuiltinName(__pyx_mstate_global->__pyx_n_u_next); - return __pyx_mstate_global->__Pyx_GetBuiltinNext_LimitedAPI_cache; -} -#endif -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next_Plain(PyObject *iterator) { -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x030A0000 - PyObject *result; - PyObject *next = 
__Pyx_GetBuiltinNext_LimitedAPI(); - if (unlikely(!next)) return NULL; - result = PyObject_CallFunctionObjArgs(next, iterator, NULL); - return result; -#else - (void)__Pyx_GetBuiltinName; // only for early limited API - iternextfunc iternext = __Pyx_PyObject_GetIterNextFunc(iterator); - assert(iternext); - return iternext(iterator); -#endif -} - -/* IterNext */ -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x03080000 -static PyObject *__Pyx_PyIter_Next2(PyObject *o, PyObject *defval) { - PyObject *result; - PyObject *next = __Pyx_GetBuiltinNext_LimitedAPI(); - if (unlikely(!next)) return NULL; - result = PyObject_CallFunctionObjArgs(next, o, defval, NULL); - return result; -} -#else -static PyObject *__Pyx_PyIter_Next2Default(PyObject* defval) { - PyObject* exc_type; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - exc_type = __Pyx_PyErr_CurrentExceptionType(); - if (unlikely(exc_type)) { - if (!defval || unlikely(!__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(defval); - return defval; - } - if (defval) { - Py_INCREF(defval); - return defval; - } - __Pyx_PyErr_SetNone(PyExc_StopIteration); - return NULL; -} -static void __Pyx_PyIter_Next_ErrorNoIterator(PyObject *iterator) { - __Pyx_TypeName iterator_type_name = __Pyx_PyType_GetFullyQualifiedName(Py_TYPE(iterator)); - PyErr_Format(PyExc_TypeError, - __Pyx_FMT_TYPENAME " object is not an iterator", iterator_type_name); - __Pyx_DECREF_TypeName(iterator_type_name); -} -static CYTHON_INLINE PyObject *__Pyx_PyIter_Next2(PyObject* iterator, PyObject* defval) { - PyObject* next; -#if !CYTHON_COMPILING_IN_LIMITED_API - iternextfunc iternext = __Pyx_PyObject_TryGetSlot(iterator, tp_iternext, iternextfunc); - if (likely(iternext)) { - next = iternext(iterator); - if (likely(next)) - return next; - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030d0000 - if (unlikely(iternext == &_PyObject_NextNotImplemented)) - 
return NULL; - #endif - } else if (CYTHON_USE_TYPE_SLOTS) { - __Pyx_PyIter_Next_ErrorNoIterator(iterator); - return NULL; - } else -#endif - if (unlikely(!PyIter_Check(iterator))) { - __Pyx_PyIter_Next_ErrorNoIterator(iterator); - return NULL; - } else { - next = defval ? PyIter_Next(iterator) : __Pyx_PyIter_Next_Plain(iterator); - if (likely(next)) - return next; - } - return __Pyx_PyIter_Next2Default(defval); -} -#endif - -/* PyLongBinop */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_Fallback___Pyx_PyLong_AddObjC(PyObject *op1, PyObject *op2, int inplace) { - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#if CYTHON_USE_PYLONG_INTERNALS -static PyObject* __Pyx_Unpacked___Pyx_PyLong_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(inplace); - CYTHON_UNUSED_VAR(zerodivision_check); - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - if (unlikely(__Pyx_PyLong_IsZero(op1))) { - return __Pyx_NewRef(op2); - } - if (likely(__Pyx_PyLong_IsCompact(op1))) { - a = __Pyx_PyLong_CompactValue(op1); - } else { - const digit* digits = __Pyx_PyLong_Digits(op1); - const Py_ssize_t size = __Pyx_PyLong_SignedDigitCount(op1); - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = 
(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned 
long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; - #ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; - #endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - return __Pyx_Fallback___Pyx_PyLong_AddObjC(op1, op2, inplace); - - -} -#endif -static PyObject* __Pyx_Float___Pyx_PyLong_AddObjC(PyObject *float_val, long intval, int zerodivision_check) { - CYTHON_UNUSED_VAR(zerodivision_check); - const long b = intval; - double a = __Pyx_PyFloat_AS_DOUBLE(float_val); - double result; - - result = ((double)a) + (double)b; - return PyFloat_FromDouble(result); -} -static CYTHON_INLINE PyObject* __Pyx_PyLong_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check) { - CYTHON_MAYBE_UNUSED_VAR(intval); - CYTHON_UNUSED_VAR(zerodivision_check); - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - return __Pyx_Unpacked___Pyx_PyLong_AddObjC(op1, op2, intval, inplace, zerodivision_check); - } - #endif - if (PyFloat_CheckExact(op1)) { - return __Pyx_Float___Pyx_PyLong_AddObjC(op1, intval, zerodivision_check); - } - return __Pyx_Fallback___Pyx_PyLong_AddObjC(op1, op2, inplace); -} -#endif - -/* RaiseException */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if 
(PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if PY_VERSION_HEX >= 0x030C00A6 - PyException_SetTraceback(value, tb); -#elif CYTHON_FAST_THREAD_STATE - PyThreadState *tstate = 
__Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#else - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} - -/* SetItemInt */ -static int __Pyx_SetItemInt_Generic(PyObject *o, PyObject *j, PyObject *v) { - int r; - if (unlikely(!j)) return -1; - r = PyObject_SetItem(o, j, v); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE int __Pyx_SetItemInt_Fast(PyObject *o, Py_ssize_t i, PyObject *v, int is_list, - CYTHON_NCP_UNUSED int wraparound, CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = (!wraparound) ? i : ((likely(i >= 0)) ? i : i + PyList_GET_SIZE(o)); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o)))) { - Py_INCREF(v); -#if CYTHON_AVOID_THREAD_UNSAFE_BORROWED_REFS - PyList_SetItem(o, n, v); -#else - PyObject* old = PyList_GET_ITEM(o, n); - PyList_SET_ITEM(o, n, v); - Py_DECREF(old); -#endif - return 1; - } - } else { - PyMappingMethods *mm = Py_TYPE(o)->tp_as_mapping; - PySequenceMethods *sm = Py_TYPE(o)->tp_as_sequence; - if (mm && mm->mp_ass_subscript) { - int r; - PyObject *key = PyLong_FromSsize_t(i); - if (unlikely(!key)) return -1; - r = mm->mp_ass_subscript(o, key, v); - Py_DECREF(key); - return r; - } - if (likely(sm && sm->sq_ass_item)) { - if (wraparound && unlikely(i < 0) && likely(sm->sq_length)) { - Py_ssize_t l = sm->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return -1; - PyErr_Clear(); - } - } - return sm->sq_ass_item(o, i, v); - } - } -#else - if (is_list || !PyMapping_Check(o)) - { - return 
PySequence_SetItem(o, i, v); - } -#endif - return __Pyx_SetItemInt_Generic(o, PyLong_FromSsize_t(i), v); -} - -/* ModInt[long] */ -static CYTHON_INLINE long __Pyx_mod_long(long a, long b, int b_is_constant) { - long r = a % b; - long adapt_python = (b_is_constant ? - ((r != 0) & ((r < 0) ^ (b < 0))) : - ((r != 0) & ((r ^ b) < 0)) - ); - return r + adapt_python * b; -} - -/* LimitedApiGetTypeDict */ -#if CYTHON_COMPILING_IN_LIMITED_API -static Py_ssize_t __Pyx_GetTypeDictOffset(void) { - PyObject *tp_dictoffset_o; - Py_ssize_t tp_dictoffset; - tp_dictoffset_o = PyObject_GetAttrString((PyObject*)(&PyType_Type), "__dictoffset__"); - if (unlikely(!tp_dictoffset_o)) return -1; - tp_dictoffset = PyLong_AsSsize_t(tp_dictoffset_o); - Py_DECREF(tp_dictoffset_o); - if (unlikely(tp_dictoffset == 0)) { - PyErr_SetString( - PyExc_TypeError, - "'type' doesn't have a dictoffset"); - return -1; - } else if (unlikely(tp_dictoffset < 0)) { - PyErr_SetString( - PyExc_TypeError, - "'type' has an unexpected negative dictoffset. " - "Please report this as Cython bug"); - return -1; - } - return tp_dictoffset; -} -static PyObject *__Pyx_GetTypeDict(PyTypeObject *tp) { - static Py_ssize_t tp_dictoffset = 0; - if (unlikely(tp_dictoffset == 0)) { - tp_dictoffset = __Pyx_GetTypeDictOffset(); - if (unlikely(tp_dictoffset == -1 && PyErr_Occurred())) { - tp_dictoffset = 0; // try again next time? 
- return NULL; - } - } - return *(PyObject**)((char*)tp + tp_dictoffset); -} -#endif - -/* SetItemOnTypeDict */ -static int __Pyx__SetItemOnTypeDict(PyTypeObject *tp, PyObject *k, PyObject *v) { - int result; - PyObject *tp_dict; -#if CYTHON_COMPILING_IN_LIMITED_API - tp_dict = __Pyx_GetTypeDict(tp); - if (unlikely(!tp_dict)) return -1; -#else - tp_dict = tp->tp_dict; -#endif - result = PyDict_SetItem(tp_dict, k, v); - if (likely(!result)) { - PyType_Modified(tp); - if (unlikely(PyObject_HasAttr(v, __pyx_mstate_global->__pyx_n_u_set_name))) { - PyObject *setNameResult = PyObject_CallMethodObjArgs(v, __pyx_mstate_global->__pyx_n_u_set_name, (PyObject *) tp, k, NULL); - if (!setNameResult) return -1; - Py_DECREF(setNameResult); - } - } - return result; -} - -/* FixUpExtensionType */ -static int __Pyx_fix_up_extension_type_from_spec(PyType_Spec *spec, PyTypeObject *type) { -#if __PYX_LIMITED_VERSION_HEX > 0x030900B1 - CYTHON_UNUSED_VAR(spec); - CYTHON_UNUSED_VAR(type); - CYTHON_UNUSED_VAR(__Pyx__SetItemOnTypeDict); -#else - const PyType_Slot *slot = spec->slots; - int changed = 0; -#if !CYTHON_COMPILING_IN_LIMITED_API - while (slot && slot->slot && slot->slot != Py_tp_members) - slot++; - if (slot && slot->slot == Py_tp_members) { -#if !CYTHON_COMPILING_IN_CPYTHON - const -#endif // !CYTHON_COMPILING_IN_CPYTHON) - PyMemberDef *memb = (PyMemberDef*) slot->pfunc; - while (memb && memb->name) { - if (memb->name[0] == '_' && memb->name[1] == '_') { - if (strcmp(memb->name, "__weaklistoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_weaklistoffset = memb->offset; - changed = 1; - } - else if (strcmp(memb->name, "__dictoffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); - type->tp_dictoffset = memb->offset; - changed = 1; - } -#if CYTHON_METH_FASTCALL - else if (strcmp(memb->name, "__vectorcalloffset__") == 0) { - assert(memb->type == T_PYSSIZET); - assert(memb->flags == READONLY); 
-#if PY_VERSION_HEX >= 0x030800b4 - type->tp_vectorcall_offset = memb->offset; -#else - type->tp_print = (printfunc) memb->offset; -#endif - changed = 1; - } -#endif // CYTHON_METH_FASTCALL -#if !CYTHON_COMPILING_IN_PYPY - else if (strcmp(memb->name, "__module__") == 0) { - PyObject *descr; - assert(memb->type == T_OBJECT); - assert(memb->flags == 0 || memb->flags == READONLY); - descr = PyDescr_NewMember(type, memb); - if (unlikely(!descr)) - return -1; - int set_item_result = PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr); - Py_DECREF(descr); - if (unlikely(set_item_result < 0)) { - return -1; - } - changed = 1; - } -#endif // !CYTHON_COMPILING_IN_PYPY - } - memb++; - } - } -#endif // !CYTHON_COMPILING_IN_LIMITED_API -#if !CYTHON_COMPILING_IN_PYPY - slot = spec->slots; - while (slot && slot->slot && slot->slot != Py_tp_getset) - slot++; - if (slot && slot->slot == Py_tp_getset) { - PyGetSetDef *getset = (PyGetSetDef*) slot->pfunc; - while (getset && getset->name) { - if (getset->name[0] == '_' && getset->name[1] == '_' && strcmp(getset->name, "__module__") == 0) { - PyObject *descr = PyDescr_NewGetSet(type, getset); - if (unlikely(!descr)) - return -1; - #if CYTHON_COMPILING_IN_LIMITED_API - PyObject *pyname = PyUnicode_FromString(getset->name); - if (unlikely(!pyname)) { - Py_DECREF(descr); - return -1; - } - int set_item_result = __Pyx_SetItemOnTypeDict(type, pyname, descr); - Py_DECREF(pyname); - #else - CYTHON_UNUSED_VAR(__Pyx__SetItemOnTypeDict); - int set_item_result = PyDict_SetItem(type->tp_dict, PyDescr_NAME(descr), descr); - #endif - Py_DECREF(descr); - if (unlikely(set_item_result < 0)) { - return -1; - } - changed = 1; - } - ++getset; - } - } -#endif // !CYTHON_COMPILING_IN_PYPY - if (changed) - PyType_Modified(type); -#endif // PY_VERSION_HEX > 0x030900B1 - return 0; -} - -/* PyObjectCallNoArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func) { - PyObject *arg[2] = {NULL, NULL}; - return 
__Pyx_PyObject_FastCall(func, arg + 1, 0 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectCallOneArg */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *args[2] = {NULL, arg}; - return __Pyx_PyObject_FastCall(func, args+1, 1 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectGetMethod */ -static int __Pyx_PyObject_GetMethod(PyObject *obj, PyObject *name, PyObject **method) { - PyObject *attr; -#if CYTHON_UNPACK_METHODS && CYTHON_COMPILING_IN_CPYTHON && CYTHON_USE_PYTYPE_LOOKUP - __Pyx_TypeName type_name; - PyTypeObject *tp = Py_TYPE(obj); - PyObject *descr; - descrgetfunc f = NULL; - PyObject **dictptr, *dict; - int meth_found = 0; - assert (*method == NULL); - if (unlikely(tp->tp_getattro != PyObject_GenericGetAttr)) { - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; - } - if (unlikely(tp->tp_dict == NULL) && unlikely(PyType_Ready(tp) < 0)) { - return 0; - } - descr = _PyType_Lookup(tp, name); - if (likely(descr != NULL)) { - Py_INCREF(descr); -#if defined(Py_TPFLAGS_METHOD_DESCRIPTOR) && Py_TPFLAGS_METHOD_DESCRIPTOR - if (__Pyx_PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_METHOD_DESCRIPTOR)) -#else - #ifdef __Pyx_CyFunction_USED - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type) || __Pyx_CyFunction_Check(descr))) - #else - if (likely(PyFunction_Check(descr) || __Pyx_IS_TYPE(descr, &PyMethodDescr_Type))) - #endif -#endif - { - meth_found = 1; - } else { - f = Py_TYPE(descr)->tp_descr_get; - if (f != NULL && PyDescr_IsData(descr)) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - } - } - dictptr = _PyObject_GetDictPtr(obj); - if (dictptr != NULL && (dict = *dictptr) != NULL) { - Py_INCREF(dict); - attr = __Pyx_PyDict_GetItemStr(dict, name); - if (attr != NULL) { - Py_INCREF(attr); - Py_DECREF(dict); - Py_XDECREF(descr); - goto try_unpack; - } - Py_DECREF(dict); - } - if (meth_found) { - *method = descr; 
- return 1; - } - if (f != NULL) { - attr = f(descr, obj, (PyObject *)Py_TYPE(obj)); - Py_DECREF(descr); - goto try_unpack; - } - if (likely(descr != NULL)) { - *method = descr; - return 0; - } - type_name = __Pyx_PyType_GetFullyQualifiedName(tp); - PyErr_Format(PyExc_AttributeError, - "'" __Pyx_FMT_TYPENAME "' object has no attribute '%U'", - type_name, name); - __Pyx_DECREF_TypeName(type_name); - return 0; -#else - attr = __Pyx_PyObject_GetAttrStr(obj, name); - goto try_unpack; -#endif -try_unpack: -#if CYTHON_UNPACK_METHODS - if (likely(attr) && PyMethod_Check(attr) && likely(PyMethod_GET_SELF(attr) == obj)) { - PyObject *function = PyMethod_GET_FUNCTION(attr); - Py_INCREF(function); - Py_DECREF(attr); - *method = function; - return 1; - } -#endif - *method = attr; - return 0; -} - -/* PyObjectCallMethod0 */ -static PyObject* __Pyx_PyObject_CallMethod0(PyObject* obj, PyObject* method_name) { -#if CYTHON_VECTORCALL && (__PYX_LIMITED_VERSION_HEX >= 0x030C0000 || (!CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x03090000)) - PyObject *args[1] = {obj}; - (void) __Pyx_PyObject_GetMethod; - (void) __Pyx_PyObject_CallOneArg; - (void) __Pyx_PyObject_CallNoArg; - return PyObject_VectorcallMethod(method_name, args, 1 | PY_VECTORCALL_ARGUMENTS_OFFSET, NULL); -#else - PyObject *method = NULL, *result = NULL; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_CallOneArg(method, obj); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) goto bad; - result = __Pyx_PyObject_CallNoArg(method); - Py_DECREF(method); -bad: - return result; -#endif -} - -/* ValidateBasesTuple */ -#if CYTHON_COMPILING_IN_CPYTHON || CYTHON_COMPILING_IN_LIMITED_API || CYTHON_USE_TYPE_SPECS -static int __Pyx_validate_bases_tuple(const char *type_name, Py_ssize_t dictoffset, PyObject *bases) { - Py_ssize_t i, n; -#if CYTHON_ASSUME_SAFE_SIZE - n = PyTuple_GET_SIZE(bases); -#else - n = PyTuple_Size(bases); - if 
(unlikely(n < 0)) return -1; -#endif - for (i = 1; i < n; i++) - { - PyTypeObject *b; -#if CYTHON_AVOID_BORROWED_REFS - PyObject *b0 = PySequence_GetItem(bases, i); - if (!b0) return -1; -#elif CYTHON_ASSUME_SAFE_MACROS - PyObject *b0 = PyTuple_GET_ITEM(bases, i); -#else - PyObject *b0 = PyTuple_GetItem(bases, i); - if (!b0) return -1; -#endif - b = (PyTypeObject*) b0; - if (!__Pyx_PyType_HasFeature(b, Py_TPFLAGS_HEAPTYPE)) - { - __Pyx_TypeName b_name = __Pyx_PyType_GetFullyQualifiedName(b); - PyErr_Format(PyExc_TypeError, - "base class '" __Pyx_FMT_TYPENAME "' is not a heap type", b_name); - __Pyx_DECREF_TypeName(b_name); -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } - if (dictoffset == 0) - { - Py_ssize_t b_dictoffset = 0; -#if CYTHON_USE_TYPE_SLOTS - b_dictoffset = b->tp_dictoffset; -#else - PyObject *py_b_dictoffset = PyObject_GetAttrString((PyObject*)b, "__dictoffset__"); - if (!py_b_dictoffset) goto dictoffset_return; - b_dictoffset = PyLong_AsSsize_t(py_b_dictoffset); - Py_DECREF(py_b_dictoffset); - if (b_dictoffset == -1 && PyErr_Occurred()) goto dictoffset_return; -#endif - if (b_dictoffset) { - { - __Pyx_TypeName b_name = __Pyx_PyType_GetFullyQualifiedName(b); - PyErr_Format(PyExc_TypeError, - "extension type '%.200s' has no __dict__ slot, " - "but base type '" __Pyx_FMT_TYPENAME "' has: " - "either add 'cdef dict __dict__' to the extension type " - "or add '__slots__ = [...]' to the base type", - type_name, b_name); - __Pyx_DECREF_TypeName(b_name); - } -#if !CYTHON_USE_TYPE_SLOTS - dictoffset_return: -#endif -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - return -1; - } - } -#if CYTHON_AVOID_BORROWED_REFS - Py_DECREF(b0); -#endif - } - return 0; -} -#endif - -/* PyType_Ready */ -CYTHON_UNUSED static int __Pyx_PyType_HasMultipleInheritance(PyTypeObject *t) { - while (t) { - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases) { - return 1; - } - t = __Pyx_PyType_GetSlot(t, tp_base, 
PyTypeObject*); - } - return 0; -} -static int __Pyx_PyType_Ready(PyTypeObject *t) { -#if CYTHON_USE_TYPE_SPECS || !CYTHON_COMPILING_IN_CPYTHON || defined(PYSTON_MAJOR_VERSION) - (void)__Pyx_PyObject_CallMethod0; -#if CYTHON_USE_TYPE_SPECS - (void)__Pyx_validate_bases_tuple; -#endif - return PyType_Ready(t); -#else - int r; - if (!__Pyx_PyType_HasMultipleInheritance(t)) { - return PyType_Ready(t); - } - PyObject *bases = __Pyx_PyType_GetSlot(t, tp_bases, PyObject*); - if (bases && unlikely(__Pyx_validate_bases_tuple(t->tp_name, t->tp_dictoffset, bases) == -1)) - return -1; -#if !defined(PYSTON_MAJOR_VERSION) - { - int gc_was_enabled; - #if PY_VERSION_HEX >= 0x030A00b1 - gc_was_enabled = PyGC_Disable(); - (void)__Pyx_PyObject_CallMethod0; - #else - PyObject *ret, *py_status; - PyObject *gc = NULL; - #if (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM+0 >= 0x07030400) &&\ - !CYTHON_COMPILING_IN_GRAAL - gc = PyImport_GetModule(__pyx_mstate_global->__pyx_kp_u_gc); - #endif - if (unlikely(!gc)) gc = PyImport_Import(__pyx_mstate_global->__pyx_kp_u_gc); - if (unlikely(!gc)) return -1; - py_status = __Pyx_PyObject_CallMethod0(gc, __pyx_mstate_global->__pyx_kp_u_isenabled); - if (unlikely(!py_status)) { - Py_DECREF(gc); - return -1; - } - gc_was_enabled = __Pyx_PyObject_IsTrue(py_status); - Py_DECREF(py_status); - if (gc_was_enabled > 0) { - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_mstate_global->__pyx_kp_u_disable); - if (unlikely(!ret)) { - Py_DECREF(gc); - return -1; - } - Py_DECREF(ret); - } else if (unlikely(gc_was_enabled == -1)) { - Py_DECREF(gc); - return -1; - } - #endif - t->tp_flags |= Py_TPFLAGS_HEAPTYPE; -#if PY_VERSION_HEX >= 0x030A0000 - t->tp_flags |= Py_TPFLAGS_IMMUTABLETYPE; -#endif -#else - (void)__Pyx_PyObject_CallMethod0; -#endif - r = PyType_Ready(t); -#if !defined(PYSTON_MAJOR_VERSION) - t->tp_flags &= ~Py_TPFLAGS_HEAPTYPE; - #if PY_VERSION_HEX >= 0x030A00b1 - if (gc_was_enabled) - PyGC_Enable(); - #else - if (gc_was_enabled) { - PyObject *tp, *v, 
*tb; - PyErr_Fetch(&tp, &v, &tb); - ret = __Pyx_PyObject_CallMethod0(gc, __pyx_mstate_global->__pyx_kp_u_enable); - if (likely(ret || r == -1)) { - Py_XDECREF(ret); - PyErr_Restore(tp, v, tb); - } else { - Py_XDECREF(tp); - Py_XDECREF(v); - Py_XDECREF(tb); - r = -1; - } - } - Py_DECREF(gc); - #endif - } -#endif - return r; -#endif -} - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *module = 0; - PyObject *empty_dict = 0; - PyObject *empty_list = 0; - empty_dict = PyDict_New(); - if (unlikely(!empty_dict)) - goto bad; - if (level == -1) { - const char* package_sep = strchr(__Pyx_MODULE_NAME, '.'); - if (package_sep != (0)) { - module = PyImport_ImportModuleLevelObject( - name, __pyx_mstate_global->__pyx_d, empty_dict, from_list, 1); - if (unlikely(!module)) { - if (unlikely(!PyErr_ExceptionMatches(PyExc_ImportError))) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - if (!module) { - module = PyImport_ImportModuleLevelObject( - name, __pyx_mstate_global->__pyx_d, empty_dict, from_list, level); - } -bad: - Py_XDECREF(empty_dict); - Py_XDECREF(empty_list); - return module; -} - -/* ImportDottedModule */ -static PyObject *__Pyx__ImportDottedModule_Error(PyObject *name, PyObject *parts_tuple, Py_ssize_t count) { - PyObject *partial_name = NULL, *slice = NULL, *sep = NULL; - Py_ssize_t size; - if (unlikely(PyErr_Occurred())) { - PyErr_Clear(); - } -#if CYTHON_ASSUME_SAFE_SIZE - size = PyTuple_GET_SIZE(parts_tuple); -#else - size = PyTuple_Size(parts_tuple); - if (size < 0) goto bad; -#endif - if (likely(size == count)) { - partial_name = name; - } else { - slice = PySequence_GetSlice(parts_tuple, 0, count); - if (unlikely(!slice)) - goto bad; - sep = PyUnicode_FromStringAndSize(".", 1); - if (unlikely(!sep)) - goto bad; - partial_name = PyUnicode_Join(sep, slice); - } - PyErr_Format( - PyExc_ModuleNotFoundError, - "No module named '%U'", partial_name); -bad: - Py_XDECREF(sep); - Py_XDECREF(slice); - 
Py_XDECREF(partial_name); - return NULL; -} -static PyObject *__Pyx__ImportDottedModule_Lookup(PyObject *name) { - PyObject *imported_module; -#if (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) ||\ - CYTHON_COMPILING_IN_GRAAL - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - return NULL; - imported_module = __Pyx_PyDict_GetItemStr(modules, name); - Py_XINCREF(imported_module); -#else - imported_module = PyImport_GetModule(name); -#endif - return imported_module; -} -static PyObject *__Pyx_ImportDottedModule_WalkParts(PyObject *module, PyObject *name, PyObject *parts_tuple) { - Py_ssize_t i, nparts; -#if CYTHON_ASSUME_SAFE_SIZE - nparts = PyTuple_GET_SIZE(parts_tuple); -#else - nparts = PyTuple_Size(parts_tuple); - if (nparts < 0) return NULL; -#endif - for (i=1; i < nparts && module; i++) { - PyObject *part, *submodule; -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - part = PyTuple_GET_ITEM(parts_tuple, i); -#else - part = __Pyx_PySequence_ITEM(parts_tuple, i); - if (!part) return NULL; -#endif - submodule = __Pyx_PyObject_GetAttrStrNoError(module, part); -#if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(part); -#endif - Py_DECREF(module); - module = submodule; - } - if (unlikely(!module)) { - return __Pyx__ImportDottedModule_Error(name, parts_tuple, i); - } - return module; -} -static PyObject *__Pyx__ImportDottedModule(PyObject *name, PyObject *parts_tuple) { - PyObject *imported_module; - PyObject *module = __Pyx_Import(name, NULL, 0); - if (!parts_tuple || unlikely(!module)) - return module; - imported_module = __Pyx__ImportDottedModule_Lookup(name); - if (likely(imported_module)) { - Py_DECREF(module); - return imported_module; - } - PyErr_Clear(); - return __Pyx_ImportDottedModule_WalkParts(module, name, parts_tuple); -} -static PyObject *__Pyx_ImportDottedModule(PyObject *name, PyObject *parts_tuple) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject *module = 
__Pyx__ImportDottedModule_Lookup(name); - if (likely(module)) { - PyObject *spec = __Pyx_PyObject_GetAttrStrNoError(module, __pyx_mstate_global->__pyx_n_u_spec); - if (likely(spec)) { - PyObject *unsafe = __Pyx_PyObject_GetAttrStrNoError(spec, __pyx_mstate_global->__pyx_n_u_initializing); - if (likely(!unsafe || !__Pyx_PyObject_IsTrue(unsafe))) { - Py_DECREF(spec); - spec = NULL; - } - Py_XDECREF(unsafe); - } - if (likely(!spec)) { - PyErr_Clear(); - return module; - } - Py_DECREF(spec); - Py_DECREF(module); - } else if (PyErr_Occurred()) { - PyErr_Clear(); - } -#endif - return __Pyx__ImportDottedModule(name, parts_tuple); -} - -/* ListPack */ -static PyObject *__Pyx_PyList_Pack(Py_ssize_t n, ...) { - va_list va; - PyObject *l = PyList_New(n); - va_start(va, n); - if (unlikely(!l)) goto end; - for (Py_ssize_t i=0; i__pyx_kp_u_); - if (unlikely(!module_dot)) { goto modbad; } - full_name = PyUnicode_Concat(module_dot, name); - if (unlikely(!full_name)) { goto modbad; } - #if (CYTHON_COMPILING_IN_PYPY && PYPY_VERSION_NUM < 0x07030400) ||\ - CYTHON_COMPILING_IN_GRAAL - { - PyObject *modules = PyImport_GetModuleDict(); - if (unlikely(!modules)) - goto modbad; - value = PyObject_GetItem(modules, full_name); - } - #else - value = PyImport_GetModule(full_name); - #endif - modbad: - Py_XDECREF(full_name); - Py_XDECREF(module_dot); - Py_XDECREF(module_name); - } - if (unlikely(!value)) { - PyErr_Format(PyExc_ImportError, "cannot import name %S", name); - } - return value; -} - -/* pybytes_as_double */ -static double __Pyx_SlowPyString_AsDouble(PyObject *obj) { - PyObject *float_value = PyFloat_FromString(obj); - if (likely(float_value)) { - double value = __Pyx_PyFloat_AS_DOUBLE(float_value); - Py_DECREF(float_value); - return value; - } - return (double)-1; -} -static const char* __Pyx__PyBytes_AsDouble_Copy(const char* start, char* buffer, Py_ssize_t length) { - int last_was_punctuation = 1; - int parse_error_found = 0; - Py_ssize_t i; - for (i=0; i < length; i++) { - char 
chr = start[i]; - int is_punctuation = (chr == '_') | (chr == '.') | (chr == 'e') | (chr == 'E'); - *buffer = chr; - buffer += (chr != '_'); - parse_error_found |= last_was_punctuation & is_punctuation; - last_was_punctuation = is_punctuation; - } - parse_error_found |= last_was_punctuation; - *buffer = '\0'; - return unlikely(parse_error_found) ? NULL : buffer; -} -static double __Pyx__PyBytes_AsDouble_inf_nan(const char* start, Py_ssize_t length) { - int matches = 1; - char sign = start[0]; - int is_signed = (sign == '+') | (sign == '-'); - start += is_signed; - length -= is_signed; - switch (start[0]) { - #ifdef Py_NAN - case 'n': - case 'N': - if (unlikely(length != 3)) goto parse_failure; - matches &= (start[1] == 'a' || start[1] == 'A'); - matches &= (start[2] == 'n' || start[2] == 'N'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? -Py_NAN : Py_NAN; - #endif - case 'i': - case 'I': - if (unlikely(length < 3)) goto parse_failure; - matches &= (start[1] == 'n' || start[1] == 'N'); - matches &= (start[2] == 'f' || start[2] == 'F'); - if (likely(length == 3 && matches)) - return (sign == '-') ? -Py_HUGE_VAL : Py_HUGE_VAL; - if (unlikely(length != 8)) goto parse_failure; - matches &= (start[3] == 'i' || start[3] == 'I'); - matches &= (start[4] == 'n' || start[4] == 'N'); - matches &= (start[5] == 'i' || start[5] == 'I'); - matches &= (start[6] == 't' || start[6] == 'T'); - matches &= (start[7] == 'y' || start[7] == 'Y'); - if (unlikely(!matches)) goto parse_failure; - return (sign == '-') ? 
-Py_HUGE_VAL : Py_HUGE_VAL; - case '.': case '0': case '1': case '2': case '3': case '4': case '5': case '6': case '7': case '8': case '9': - break; - default: - goto parse_failure; - } - return 0.0; -parse_failure: - return -1.0; -} -static CYTHON_INLINE int __Pyx__PyBytes_AsDouble_IsSpace(char ch) { - return (ch == 0x20) | !((ch < 0x9) | (ch > 0xd)); -} -CYTHON_UNUSED static double __Pyx__PyBytes_AsDouble(PyObject *obj, const char* start, Py_ssize_t length) { - double value; - Py_ssize_t i, digits; - const char *last = start + length; - char *end; - while (__Pyx__PyBytes_AsDouble_IsSpace(*start)) - start++; - while (start < last - 1 && __Pyx__PyBytes_AsDouble_IsSpace(last[-1])) - last--; - length = last - start; - if (unlikely(length <= 0)) goto fallback; - value = __Pyx__PyBytes_AsDouble_inf_nan(start, length); - if (unlikely(value == -1.0)) goto fallback; - if (value != 0.0) return value; - digits = 0; - for (i=0; i < length; digits += start[i++] != '_'); - if (likely(digits == length)) { - value = PyOS_string_to_double(start, &end, NULL); - } else if (digits < 40) { - char number[40]; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) goto fallback; - value = PyOS_string_to_double(number, &end, NULL); - } else { - char *number = (char*) PyMem_Malloc((digits + 1) * sizeof(char)); - if (unlikely(!number)) goto fallback; - last = __Pyx__PyBytes_AsDouble_Copy(start, number, length); - if (unlikely(!last)) { - PyMem_Free(number); - goto fallback; - } - value = PyOS_string_to_double(number, &end, NULL); - PyMem_Free(number); - } - if (likely(end == last) || (value == (double)-1 && PyErr_Occurred())) { - return value; - } -fallback: - return __Pyx_SlowPyString_AsDouble(obj); -} - -/* FetchSharedCythonModule */ -static PyObject *__Pyx_FetchSharedCythonABIModule(void) { - return __Pyx_PyImport_AddModuleRef(__PYX_ABI_MODULE_NAME); -} - -/* dict_setdefault */ -static CYTHON_INLINE PyObject *__Pyx_PyDict_SetDefault(PyObject *d, PyObject 
*key, PyObject *default_value, - int is_safe_type) { - PyObject* value; - CYTHON_MAYBE_UNUSED_VAR(is_safe_type); -#if CYTHON_COMPILING_IN_LIMITED_API - value = PyObject_CallMethod(d, "setdefault", "OO", key, default_value); -#elif PY_VERSION_HEX >= 0x030d0000 - PyDict_SetDefaultRef(d, key, default_value, &value); -#else - value = PyDict_SetDefault(d, key, default_value); - if (unlikely(!value)) return NULL; - Py_INCREF(value); -#endif - return value; -} - -/* FetchCommonType */ -#if __PYX_LIMITED_VERSION_HEX < 0x030C0000 -static PyObject* __Pyx_PyType_FromMetaclass(PyTypeObject *metaclass, PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *result = __Pyx_PyType_FromModuleAndSpec(module, spec, bases); - if (result && metaclass) { - PyObject *old_tp = (PyObject*)Py_TYPE(result); - Py_INCREF((PyObject*)metaclass); -#if __PYX_LIMITED_VERSION_HEX >= 0x03090000 - Py_SET_TYPE(result, metaclass); -#else - result->ob_type = metaclass; -#endif - Py_DECREF(old_tp); - } - return result; -} -#else -#define __Pyx_PyType_FromMetaclass(me, mo, s, b) PyType_FromMetaclass(me, mo, s, b) -#endif -static int __Pyx_VerifyCachedType(PyObject *cached_type, - const char *name, - Py_ssize_t expected_basicsize) { - Py_ssize_t basicsize; - if (!PyType_Check(cached_type)) { - PyErr_Format(PyExc_TypeError, - "Shared Cython type %.200s is not a type object", name); - return -1; - } - if (expected_basicsize == 0) { - return 0; // size is inherited, nothing useful to check - } -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_basicsize; - py_basicsize = PyObject_GetAttrString(cached_type, "__basicsize__"); - if (unlikely(!py_basicsize)) return -1; - basicsize = PyLong_AsSsize_t(py_basicsize); - Py_DECREF(py_basicsize); - py_basicsize = NULL; - if (unlikely(basicsize == (Py_ssize_t)-1) && PyErr_Occurred()) return -1; -#else - basicsize = ((PyTypeObject*) cached_type)->tp_basicsize; -#endif - if (basicsize != expected_basicsize) { - PyErr_Format(PyExc_TypeError, - "Shared Cython 
type %.200s has the wrong size, try recompiling", - name); - return -1; - } - return 0; -} -static PyTypeObject *__Pyx_FetchCommonTypeFromSpec(PyTypeObject *metaclass, PyObject *module, PyType_Spec *spec, PyObject *bases) { - PyObject *abi_module = NULL, *cached_type = NULL, *abi_module_dict, *new_cached_type, *py_object_name; - int get_item_ref_result; - const char* object_name = strrchr(spec->name, '.'); - object_name = object_name ? object_name+1 : spec->name; - py_object_name = PyUnicode_FromString(object_name); - if (!py_object_name) return NULL; - abi_module = __Pyx_FetchSharedCythonABIModule(); - if (!abi_module) goto done; - abi_module_dict = PyModule_GetDict(abi_module); - if (!abi_module_dict) goto done; - get_item_ref_result = __Pyx_PyDict_GetItemRef(abi_module_dict, py_object_name, &cached_type); - if (get_item_ref_result == 1) { - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } else if (unlikely(get_item_ref_result == -1)) { - goto bad; - } - CYTHON_UNUSED_VAR(module); - cached_type = __Pyx_PyType_FromMetaclass(metaclass, abi_module, spec, bases); - if (unlikely(!cached_type)) goto bad; - if (unlikely(__Pyx_fix_up_extension_type_from_spec(spec, (PyTypeObject *) cached_type) < 0)) goto bad; - new_cached_type = __Pyx_PyDict_SetDefault(abi_module_dict, py_object_name, cached_type, 1); - if (unlikely(new_cached_type != cached_type)) { - if (unlikely(!new_cached_type)) goto bad; - Py_DECREF(cached_type); - cached_type = new_cached_type; - if (__Pyx_VerifyCachedType( - cached_type, - object_name, - spec->basicsize) < 0) { - goto bad; - } - goto done; - } else { - Py_DECREF(new_cached_type); - } -done: - Py_XDECREF(abi_module); - Py_DECREF(py_object_name); - assert(cached_type == NULL || PyType_Check(cached_type)); - return (PyTypeObject *) cached_type; -bad: - Py_XDECREF(cached_type); - cached_type = NULL; - goto done; -} - -/* CommonTypesMetaclass */ -static PyObject* 
__pyx_CommonTypesMetaclass_get_module(CYTHON_UNUSED PyObject *self, CYTHON_UNUSED void* context) { - return PyUnicode_FromString(__PYX_ABI_MODULE_NAME); -} -static PyGetSetDef __pyx_CommonTypesMetaclass_getset[] = { - {"__module__", __pyx_CommonTypesMetaclass_get_module, NULL, NULL, NULL}, - {0, 0, 0, 0, 0} -}; -static PyType_Slot __pyx_CommonTypesMetaclass_slots[] = { - {Py_tp_getset, (void *)__pyx_CommonTypesMetaclass_getset}, - {0, 0} -}; -static PyType_Spec __pyx_CommonTypesMetaclass_spec = { - __PYX_TYPE_MODULE_PREFIX "_common_types_metatype", - 0, - 0, -#if PY_VERSION_HEX >= 0x030A0000 - Py_TPFLAGS_IMMUTABLETYPE | - Py_TPFLAGS_DISALLOW_INSTANTIATION | -#endif - Py_TPFLAGS_DEFAULT, - __pyx_CommonTypesMetaclass_slots -}; -static int __pyx_CommonTypesMetaclass_init(PyObject *module) { - __pyx_mstatetype *mstate = __Pyx_PyModule_GetState(module); - PyObject *bases = PyTuple_Pack(1, &PyType_Type); - if (unlikely(!bases)) { - return -1; - } - mstate->__pyx_CommonTypesMetaclassType = __Pyx_FetchCommonTypeFromSpec(NULL, module, &__pyx_CommonTypesMetaclass_spec, bases); - Py_DECREF(bases); - if (unlikely(mstate->__pyx_CommonTypesMetaclassType == NULL)) { - return -1; - } - return 0; -} - -/* CallTypeTraverse */ -#if !CYTHON_USE_TYPE_SPECS || (!CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x03090000) -#else -static int __Pyx_call_type_traverse(PyObject *o, int always_call, visitproc visit, void *arg) { - #if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x03090000 - if (__Pyx_get_runtime_version() < 0x03090000) return 0; - #endif - if (!always_call) { - PyTypeObject *base = __Pyx_PyObject_GetSlot(o, tp_base, PyTypeObject*); - unsigned long flags = PyType_GetFlags(base); - if (flags & Py_TPFLAGS_HEAPTYPE) { - return 0; - } - } - Py_VISIT((PyObject*)Py_TYPE(o)); - return 0; -} -#endif - -/* PyMethodNew */ -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - PyObject *result; - 
CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - #if __PYX_LIMITED_VERSION_HEX >= 0x030C0000 - { - PyObject *args[] = {func, self}; - result = PyObject_Vectorcall(__pyx_mstate_global->__Pyx_CachedMethodType, args, 2, NULL); - } - #else - result = PyObject_CallFunctionObjArgs(__pyx_mstate_global->__Pyx_CachedMethodType, func, self, NULL); - #endif - return result; -} -#else -static PyObject *__Pyx_PyMethod_New(PyObject *func, PyObject *self, PyObject *typ) { - CYTHON_UNUSED_VAR(typ); - if (!self) - return __Pyx_NewRef(func); - return PyMethod_New(func, self); -} -#endif - -/* PyVectorcallFastCallDict */ -#if CYTHON_METH_FASTCALL && (CYTHON_VECTORCALL || CYTHON_BACKPORT_VECTORCALL) -static PyObject *__Pyx_PyVectorcall_FastCallDict_kw(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - PyObject *res = NULL; - PyObject *kwnames; - PyObject **newargs; - PyObject **kwvalues; - Py_ssize_t i, pos; - size_t j; - PyObject *key, *value; - unsigned long keys_are_strings; - #if !CYTHON_ASSUME_SAFE_SIZE - Py_ssize_t nkw = PyDict_Size(kw); - if (unlikely(nkw == -1)) return NULL; - #else - Py_ssize_t nkw = PyDict_GET_SIZE(kw); - #endif - newargs = (PyObject **)PyMem_Malloc((nargs + (size_t)nkw) * sizeof(args[0])); - if (unlikely(newargs == NULL)) { - PyErr_NoMemory(); - return NULL; - } - for (j = 0; j < nargs; j++) newargs[j] = args[j]; - kwnames = PyTuple_New(nkw); - if (unlikely(kwnames == NULL)) { - PyMem_Free(newargs); - return NULL; - } - kwvalues = newargs + nargs; - pos = i = 0; - keys_are_strings = Py_TPFLAGS_UNICODE_SUBCLASS; - while (PyDict_Next(kw, &pos, &key, &value)) { - keys_are_strings &= - #if CYTHON_COMPILING_IN_LIMITED_API - PyType_GetFlags(Py_TYPE(key)); - #else - Py_TYPE(key)->tp_flags; - #endif - Py_INCREF(key); - Py_INCREF(value); - #if !CYTHON_ASSUME_SAFE_MACROS - if (unlikely(PyTuple_SetItem(kwnames, i, key) < 0)) goto cleanup; - #else - PyTuple_SET_ITEM(kwnames, i, key); - #endif - 
kwvalues[i] = value; - i++; - } - if (unlikely(!keys_are_strings)) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - goto cleanup; - } - res = vc(func, newargs, nargs, kwnames); -cleanup: - Py_DECREF(kwnames); - for (i = 0; i < nkw; i++) - Py_DECREF(kwvalues[i]); - PyMem_Free(newargs); - return res; -} -static CYTHON_INLINE PyObject *__Pyx_PyVectorcall_FastCallDict(PyObject *func, __pyx_vectorcallfunc vc, PyObject *const *args, size_t nargs, PyObject *kw) -{ - Py_ssize_t kw_size = - likely(kw == NULL) ? - 0 : -#if !CYTHON_ASSUME_SAFE_SIZE - PyDict_Size(kw); -#else - PyDict_GET_SIZE(kw); -#endif - if (kw_size == 0) { - return vc(func, args, nargs, NULL); - } -#if !CYTHON_ASSUME_SAFE_SIZE - else if (unlikely(kw_size == -1)) { - return NULL; - } -#endif - return __Pyx_PyVectorcall_FastCallDict_kw(func, vc, args, nargs, kw); -} -#endif - -/* CythonFunctionShared */ -#if CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunctionNoMethod(PyObject *func, void (*cfunc)(void)) { - if (__Pyx_CyFunction_Check(func)) { - return PyCFunction_GetFunction(((__pyx_CyFunctionObject*)func)->func) == (PyCFunction) cfunc; - } else if (PyCFunction_Check(func)) { - return PyCFunction_GetFunction(func) == (PyCFunction) cfunc; - } - return 0; -} -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void (*cfunc)(void)) { - if ((PyObject*)Py_TYPE(func) == __pyx_mstate_global->__Pyx_CachedMethodType) { - int result; - PyObject *newFunc = PyObject_GetAttr(func, __pyx_mstate_global->__pyx_n_u_func); - if (unlikely(!newFunc)) { - PyErr_Clear(); // It's only an optimization, so don't throw an error - return 0; - } - result = __Pyx__IsSameCyOrCFunctionNoMethod(newFunc, cfunc); - Py_DECREF(newFunc); - return result; - } - return __Pyx__IsSameCyOrCFunctionNoMethod(func, cfunc); -} -#else -static CYTHON_INLINE int __Pyx__IsSameCyOrCFunction(PyObject *func, void (*cfunc)(void)) { - if (PyMethod_Check(func)) { - func = 
PyMethod_GET_FUNCTION(func); - } - return __Pyx_CyOrPyCFunction_Check(func) && __Pyx_CyOrPyCFunction_GET_FUNCTION(func) == (PyCFunction) cfunc; -} -#endif -static CYTHON_INLINE void __Pyx__CyFunction_SetClassObj(__pyx_CyFunctionObject* f, PyObject* classobj) { -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - __Pyx_Py_XDECREF_SET( - __Pyx_CyFunction_GetClassObj(f), - ((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#else - __Pyx_Py_XDECREF_SET( - ((PyCMethodObject *) (f))->mm_class, - (PyTypeObject*)((classobj) ? __Pyx_NewRef(classobj) : NULL)); -#endif -} -static PyObject * -__Pyx_CyFunction_get_doc_locked(__pyx_CyFunctionObject *op) -{ - if (unlikely(op->func_doc == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_doc = PyObject_GetAttrString(op->func, "__doc__"); - if (unlikely(!op->func_doc)) return NULL; -#else - if (((PyCFunctionObject*)op)->m_ml->ml_doc) { - op->func_doc = PyUnicode_FromString(((PyCFunctionObject*)op)->m_ml->ml_doc); - if (unlikely(op->func_doc == NULL)) - return NULL; - } else { - Py_INCREF(Py_None); - return Py_None; - } -#endif - } - Py_INCREF(op->func_doc); - return op->func_doc; -} -static PyObject * -__Pyx_CyFunction_get_doc(__pyx_CyFunctionObject *op, void *closure) { - PyObject *result; - CYTHON_UNUSED_VAR(closure); - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = __Pyx_CyFunction_get_doc_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_doc(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (value == NULL) { - value = Py_None; - } - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->func_doc, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_name_locked(__pyx_CyFunctionObject *op) -{ - if (unlikely(op->func_name == NULL)) { -#if CYTHON_COMPILING_IN_LIMITED_API - op->func_name = PyObject_GetAttrString(op->func, "__name__"); -#else - 
op->func_name = PyUnicode_InternFromString(((PyCFunctionObject*)op)->m_ml->ml_name); -#endif - if (unlikely(op->func_name == NULL)) - return NULL; - } - Py_INCREF(op->func_name); - return op->func_name; -} -static PyObject * -__Pyx_CyFunction_get_name(__pyx_CyFunctionObject *op, void *context) -{ - PyObject *result = NULL; - CYTHON_UNUSED_VAR(context); - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = __Pyx_CyFunction_get_name_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_name(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL || !PyUnicode_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->func_name, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_qualname(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - PyObject *result; - __Pyx_BEGIN_CRITICAL_SECTION(op); - Py_INCREF(op->func_qualname); - result = op->func_qualname; - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_qualname(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL || !PyUnicode_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->func_qualname, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_dict_locked(__pyx_CyFunctionObject *op) -{ - if (unlikely(op->func_dict == NULL)) { - op->func_dict = PyDict_New(); - if (unlikely(op->func_dict == NULL)) - return NULL; - } - Py_INCREF(op->func_dict); - return op->func_dict; -} -static PyObject * 
-__Pyx_CyFunction_get_dict(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - PyObject *result; - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = __Pyx_CyFunction_get_dict_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_dict(__pyx_CyFunctionObject *op, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL)) { - PyErr_SetString(PyExc_TypeError, - "function's dictionary may not be deleted"); - return -1; - } - if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "setting function's dictionary to a non-dict"); - return -1; - } - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->func_dict, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_globals(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(context); - Py_INCREF(op->func_globals); - return op->func_globals; -} -static PyObject * -__Pyx_CyFunction_get_closure(__pyx_CyFunctionObject *op, void *context) -{ - CYTHON_UNUSED_VAR(op); - CYTHON_UNUSED_VAR(context); - Py_INCREF(Py_None); - return Py_None; -} -static PyObject * -__Pyx_CyFunction_get_code(__pyx_CyFunctionObject *op, void *context) -{ - PyObject* result = (op->func_code) ? 
op->func_code : Py_None; - CYTHON_UNUSED_VAR(context); - Py_INCREF(result); - return result; -} -static int -__Pyx_CyFunction_init_defaults(__pyx_CyFunctionObject *op) { - int result = 0; - PyObject *res = op->defaults_getter((PyObject *) op); - if (unlikely(!res)) - return -1; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - op->defaults_tuple = PyTuple_GET_ITEM(res, 0); - Py_INCREF(op->defaults_tuple); - op->defaults_kwdict = PyTuple_GET_ITEM(res, 1); - Py_INCREF(op->defaults_kwdict); - #else - op->defaults_tuple = __Pyx_PySequence_ITEM(res, 0); - if (unlikely(!op->defaults_tuple)) result = -1; - else { - op->defaults_kwdict = __Pyx_PySequence_ITEM(res, 1); - if (unlikely(!op->defaults_kwdict)) result = -1; - } - #endif - Py_DECREF(res); - return result; -} -static int -__Pyx_CyFunction_set_defaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyTuple_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__defaults__ must be set to a tuple object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__defaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->defaults_tuple, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_defaults_locked(__pyx_CyFunctionObject *op) { - PyObject* result = op->defaults_tuple; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_tuple; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_defaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result = NULL; - CYTHON_UNUSED_VAR(context); - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = 
__Pyx_CyFunction_get_defaults_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_kwdefaults(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value) { - value = Py_None; - } else if (unlikely(value != Py_None && !PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__kwdefaults__ must be set to a dict object"); - return -1; - } - PyErr_WarnEx(PyExc_RuntimeWarning, "changes to cyfunction.__kwdefaults__ will not " - "currently affect the values used in function calls", 1); - Py_INCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->defaults_kwdict, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults_locked(__pyx_CyFunctionObject *op) { - PyObject* result = op->defaults_kwdict; - if (unlikely(!result)) { - if (op->defaults_getter) { - if (unlikely(__Pyx_CyFunction_init_defaults(op) < 0)) return NULL; - result = op->defaults_kwdict; - } else { - result = Py_None; - } - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_kwdefaults(__pyx_CyFunctionObject *op, void *context) { - PyObject* result; - CYTHON_UNUSED_VAR(context); - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = __Pyx_CyFunction_get_kwdefaults_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static int -__Pyx_CyFunction_set_annotations(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - if (!value || value == Py_None) { - value = NULL; - } else if (unlikely(!PyDict_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__annotations__ must be set to a dict object"); - return -1; - } - Py_XINCREF(value); - __Pyx_BEGIN_CRITICAL_SECTION(op); - __Pyx_Py_XDECREF_SET(op->func_annotations, value); - __Pyx_END_CRITICAL_SECTION(); - return 0; -} -static PyObject * -__Pyx_CyFunction_get_annotations_locked(__pyx_CyFunctionObject *op) { - PyObject* result 
= op->func_annotations; - if (unlikely(!result)) { - result = PyDict_New(); - if (unlikely(!result)) return NULL; - op->func_annotations = result; - } - Py_INCREF(result); - return result; -} -static PyObject * -__Pyx_CyFunction_get_annotations(__pyx_CyFunctionObject *op, void *context) { - PyObject *result; - CYTHON_UNUSED_VAR(context); - __Pyx_BEGIN_CRITICAL_SECTION(op); - result = __Pyx_CyFunction_get_annotations_locked(op); - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine_value(__pyx_CyFunctionObject *op) { - int is_coroutine = op->flags & __Pyx_CYFUNCTION_COROUTINE; - if (is_coroutine) { - PyObject *is_coroutine_value, *module, *fromlist, *marker = __pyx_mstate_global->__pyx_n_u_is_coroutine; - fromlist = PyList_New(1); - if (unlikely(!fromlist)) return NULL; - Py_INCREF(marker); -#if CYTHON_ASSUME_SAFE_MACROS - PyList_SET_ITEM(fromlist, 0, marker); -#else - if (unlikely(PyList_SetItem(fromlist, 0, marker) < 0)) { - Py_DECREF(marker); - Py_DECREF(fromlist); - return NULL; - } -#endif - module = PyImport_ImportModuleLevelObject(__pyx_mstate_global->__pyx_n_u_asyncio_coroutines, NULL, NULL, fromlist, 0); - Py_DECREF(fromlist); - if (unlikely(!module)) goto ignore; - is_coroutine_value = __Pyx_PyObject_GetAttrStr(module, marker); - Py_DECREF(module); - if (likely(is_coroutine_value)) { - return is_coroutine_value; - } -ignore: - PyErr_Clear(); - } - return __Pyx_PyBool_FromLong(is_coroutine); -} -static PyObject * -__Pyx_CyFunction_get_is_coroutine(__pyx_CyFunctionObject *op, void *context) { - PyObject *result; - CYTHON_UNUSED_VAR(context); - if (op->func_is_coroutine) { - return __Pyx_NewRef(op->func_is_coroutine); - } - result = __Pyx_CyFunction_get_is_coroutine_value(op); - if (unlikely(!result)) - return NULL; - __Pyx_BEGIN_CRITICAL_SECTION(op); - if (op->func_is_coroutine) { - Py_DECREF(result); - result = __Pyx_NewRef(op->func_is_coroutine); - } else { - op->func_is_coroutine = __Pyx_NewRef(result); 
- } - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static void __Pyx_CyFunction_raise_argument_count_error(__pyx_CyFunctionObject *func, const char* message, Py_ssize_t size) { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_name = __Pyx_CyFunction_get_name(func, NULL); - if (!py_name) return; - PyErr_Format(PyExc_TypeError, - "%.200S() %s (%" CYTHON_FORMAT_SSIZE_T "d given)", - py_name, message, size); - Py_DECREF(py_name); -#else - const char* name = ((PyCFunctionObject*)func)->m_ml->ml_name; - PyErr_Format(PyExc_TypeError, - "%.200s() %s (%" CYTHON_FORMAT_SSIZE_T "d given)", - name, message, size); -#endif -} -static void __Pyx_CyFunction_raise_type_error(__pyx_CyFunctionObject *func, const char* message) { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *py_name = __Pyx_CyFunction_get_name(func, NULL); - if (!py_name) return; - PyErr_Format(PyExc_TypeError, - "%.200S() %s", - py_name, message); - Py_DECREF(py_name); -#else - const char* name = ((PyCFunctionObject*)func)->m_ml->ml_name; - PyErr_Format(PyExc_TypeError, - "%.200s() %s", - name, message); -#endif -} -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject * -__Pyx_CyFunction_get_module(__pyx_CyFunctionObject *op, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_GetAttrString(op->func, "__module__"); -} -static int -__Pyx_CyFunction_set_module(__pyx_CyFunctionObject *op, PyObject* value, void *context) { - CYTHON_UNUSED_VAR(context); - return PyObject_SetAttrString(op->func, "__module__", value); -} -#endif -static PyGetSetDef __pyx_CyFunction_getsets[] = { - {"func_doc", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {"__doc__", (getter)__Pyx_CyFunction_get_doc, (setter)__Pyx_CyFunction_set_doc, 0, 0}, - {"func_name", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {"__name__", (getter)__Pyx_CyFunction_get_name, (setter)__Pyx_CyFunction_set_name, 0, 0}, - {"__qualname__", (getter)__Pyx_CyFunction_get_qualname, 
(setter)__Pyx_CyFunction_set_qualname, 0, 0}, - {"func_dict", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {"__dict__", (getter)__Pyx_CyFunction_get_dict, (setter)__Pyx_CyFunction_set_dict, 0, 0}, - {"func_globals", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {"__globals__", (getter)__Pyx_CyFunction_get_globals, 0, 0, 0}, - {"func_closure", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {"__closure__", (getter)__Pyx_CyFunction_get_closure, 0, 0, 0}, - {"func_code", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {"__code__", (getter)__Pyx_CyFunction_get_code, 0, 0, 0}, - {"func_defaults", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {"__defaults__", (getter)__Pyx_CyFunction_get_defaults, (setter)__Pyx_CyFunction_set_defaults, 0, 0}, - {"__kwdefaults__", (getter)__Pyx_CyFunction_get_kwdefaults, (setter)__Pyx_CyFunction_set_kwdefaults, 0, 0}, - {"__annotations__", (getter)__Pyx_CyFunction_get_annotations, (setter)__Pyx_CyFunction_set_annotations, 0, 0}, - {"_is_coroutine", (getter)__Pyx_CyFunction_get_is_coroutine, 0, 0, 0}, -#if CYTHON_COMPILING_IN_LIMITED_API - {"__module__", (getter)__Pyx_CyFunction_get_module, (setter)__Pyx_CyFunction_set_module, 0, 0}, -#endif - {0, 0, 0, 0, 0} -}; -static PyMemberDef __pyx_CyFunction_members[] = { -#if !CYTHON_COMPILING_IN_LIMITED_API - {"__module__", T_OBJECT, offsetof(PyCFunctionObject, m_module), 0, 0}, -#endif - {"__dictoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_dict), READONLY, 0}, -#if CYTHON_METH_FASTCALL -#if CYTHON_BACKPORT_VECTORCALL || CYTHON_COMPILING_IN_LIMITED_API - {"__vectorcalloffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_vectorcall), READONLY, 0}, -#else - {"__vectorcalloffset__", T_PYSSIZET, offsetof(PyCFunctionObject, vectorcall), READONLY, 0}, -#endif -#if CYTHON_COMPILING_IN_LIMITED_API - {"__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CyFunctionObject, func_weakreflist), READONLY, 0}, 
-#else - {"__weaklistoffset__", T_PYSSIZET, offsetof(PyCFunctionObject, m_weakreflist), READONLY, 0}, -#endif -#endif - {0, 0, 0, 0, 0} -}; -static PyObject * -__Pyx_CyFunction_reduce(__pyx_CyFunctionObject *m, PyObject *args) -{ - PyObject *result = NULL; - CYTHON_UNUSED_VAR(args); - __Pyx_BEGIN_CRITICAL_SECTION(m); - Py_INCREF(m->func_qualname); - result = m->func_qualname; - __Pyx_END_CRITICAL_SECTION(); - return result; -} -static PyMethodDef __pyx_CyFunction_methods[] = { - {"__reduce__", (PyCFunction)__Pyx_CyFunction_reduce, METH_VARARGS, 0}, - {0, 0, 0, 0} -}; -#if CYTHON_COMPILING_IN_LIMITED_API -#define __Pyx_CyFunction_weakreflist(cyfunc) ((cyfunc)->func_weakreflist) -#else -#define __Pyx_CyFunction_weakreflist(cyfunc) (((PyCFunctionObject*)cyfunc)->m_weakreflist) -#endif -static PyObject *__Pyx_CyFunction_Init(__pyx_CyFunctionObject *op, PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { -#if !CYTHON_COMPILING_IN_LIMITED_API - PyCFunctionObject *cf = (PyCFunctionObject*) op; -#endif - if (unlikely(op == NULL)) - return NULL; -#if CYTHON_COMPILING_IN_LIMITED_API - op->func = PyCFunction_NewEx(ml, (PyObject*)op, module); - if (unlikely(!op->func)) return NULL; -#endif - op->flags = flags; - __Pyx_CyFunction_weakreflist(op) = NULL; -#if !CYTHON_COMPILING_IN_LIMITED_API - cf->m_ml = ml; - cf->m_self = (PyObject *) op; -#endif - Py_XINCREF(closure); - op->func_closure = closure; -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_XINCREF(module); - cf->m_module = module; -#endif - op->func_dict = NULL; - op->func_name = NULL; - Py_INCREF(qualname); - op->func_qualname = qualname; - op->func_doc = NULL; -#if PY_VERSION_HEX < 0x030900B1 || CYTHON_COMPILING_IN_LIMITED_API - op->func_classobj = NULL; -#else - ((PyCMethodObject*)op)->mm_class = NULL; -#endif - op->func_globals = globals; - Py_INCREF(op->func_globals); - Py_XINCREF(code); - op->func_code = code; - op->defaults = NULL; - 
op->defaults_tuple = NULL; - op->defaults_kwdict = NULL; - op->defaults_getter = NULL; - op->func_annotations = NULL; - op->func_is_coroutine = NULL; -#if CYTHON_METH_FASTCALL - switch (ml->ml_flags & (METH_VARARGS | METH_FASTCALL | METH_NOARGS | METH_O | METH_KEYWORDS | METH_METHOD)) { - case METH_NOARGS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_NOARGS; - break; - case METH_O: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_O; - break; - case METH_METHOD | METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD; - break; - case METH_FASTCALL | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS; - break; - case METH_VARARGS | METH_KEYWORDS: - __Pyx_CyFunction_func_vectorcall(op) = NULL; - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - Py_DECREF(op); - return NULL; - } -#endif - return (PyObject *) op; -} -static int -__Pyx_CyFunction_clear(__pyx_CyFunctionObject *m) -{ - Py_CLEAR(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_CLEAR(m->func); -#else - Py_CLEAR(((PyCFunctionObject*)m)->m_module); -#endif - Py_CLEAR(m->func_dict); - Py_CLEAR(m->func_name); - Py_CLEAR(m->func_qualname); - Py_CLEAR(m->func_doc); - Py_CLEAR(m->func_globals); - Py_CLEAR(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API -#if PY_VERSION_HEX < 0x030900B1 - Py_CLEAR(__Pyx_CyFunction_GetClassObj(m)); -#else - { - PyObject *cls = (PyObject*) ((PyCMethodObject *) (m))->mm_class; - ((PyCMethodObject *) (m))->mm_class = NULL; - Py_XDECREF(cls); - } -#endif -#endif - Py_CLEAR(m->defaults_tuple); - Py_CLEAR(m->defaults_kwdict); - Py_CLEAR(m->func_annotations); - Py_CLEAR(m->func_is_coroutine); - Py_CLEAR(m->defaults); - return 0; -} -static void __Pyx__CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - if (__Pyx_CyFunction_weakreflist(m) != NULL) - 
PyObject_ClearWeakRefs((PyObject *) m); - __Pyx_CyFunction_clear(m); - __Pyx_PyHeapTypeObject_GC_Del(m); -} -static void __Pyx_CyFunction_dealloc(__pyx_CyFunctionObject *m) -{ - PyObject_GC_UnTrack(m); - __Pyx__CyFunction_dealloc(m); -} -static int __Pyx_CyFunction_traverse(__pyx_CyFunctionObject *m, visitproc visit, void *arg) -{ - { - int e = __Pyx_call_type_traverse((PyObject*)m, 1, visit, arg); - if (e) return e; - } - Py_VISIT(m->func_closure); -#if CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(m->func); -#else - Py_VISIT(((PyCFunctionObject*)m)->m_module); -#endif - Py_VISIT(m->func_dict); - __Pyx_VISIT_CONST(m->func_name); - __Pyx_VISIT_CONST(m->func_qualname); - Py_VISIT(m->func_doc); - Py_VISIT(m->func_globals); - __Pyx_VISIT_CONST(m->func_code); -#if !CYTHON_COMPILING_IN_LIMITED_API - Py_VISIT(__Pyx_CyFunction_GetClassObj(m)); -#endif - Py_VISIT(m->defaults_tuple); - Py_VISIT(m->defaults_kwdict); - Py_VISIT(m->func_is_coroutine); - Py_VISIT(m->defaults); - return 0; -} -static PyObject* -__Pyx_CyFunction_repr(__pyx_CyFunctionObject *op) -{ - PyObject *repr; - __Pyx_BEGIN_CRITICAL_SECTION(op); - repr = PyUnicode_FromFormat("<cyfunction %U at %p>", - op->func_qualname, (void *)op); - __Pyx_END_CRITICAL_SECTION(); - return repr; -} -static PyObject * __Pyx_CyFunction_CallMethod(PyObject *func, PyObject *self, PyObject *arg, PyObject *kw) { -#if CYTHON_COMPILING_IN_LIMITED_API - PyObject *f = ((__pyx_CyFunctionObject*)func)->func; - PyCFunction meth; - int flags; - meth = PyCFunction_GetFunction(f); - if (unlikely(!meth)) return NULL; - flags = PyCFunction_GetFlags(f); - if (unlikely(flags < 0)) return NULL; -#else - PyCFunctionObject* f = (PyCFunctionObject*)func; - PyCFunction meth = f->m_ml->ml_meth; - int flags = f->m_ml->ml_flags; -#endif - Py_ssize_t size; - switch (flags & (METH_VARARGS | METH_KEYWORDS | METH_NOARGS | METH_O)) { - case METH_VARARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) - return (*meth)(self, arg); - break; - case METH_VARARGS | METH_KEYWORDS: - 
return (*(PyCFunctionWithKeywords)(void(*)(void))meth)(self, arg, kw); - case METH_NOARGS: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_SIZE - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 0)) - return (*meth)(self, NULL); - __Pyx_CyFunction_raise_argument_count_error( - (__pyx_CyFunctionObject*)func, - "takes no arguments", size); - return NULL; - } - break; - case METH_O: - if (likely(kw == NULL || PyDict_Size(kw) == 0)) { -#if CYTHON_ASSUME_SAFE_SIZE - size = PyTuple_GET_SIZE(arg); -#else - size = PyTuple_Size(arg); - if (unlikely(size < 0)) return NULL; -#endif - if (likely(size == 1)) { - PyObject *result, *arg0; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - arg0 = PyTuple_GET_ITEM(arg, 0); - #else - arg0 = __Pyx_PySequence_ITEM(arg, 0); if (unlikely(!arg0)) return NULL; - #endif - result = (*meth)(self, arg0); - #if !(CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS) - Py_DECREF(arg0); - #endif - return result; - } - __Pyx_CyFunction_raise_argument_count_error( - (__pyx_CyFunctionObject*)func, - "takes exactly one argument", size); - return NULL; - } - break; - default: - PyErr_SetString(PyExc_SystemError, "Bad call flags for CyFunction"); - return NULL; - } - __Pyx_CyFunction_raise_type_error( - (__pyx_CyFunctionObject*)func, "takes no keyword arguments"); - return NULL; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *self, *result; -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)func)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)func)->m_self; -#endif - result = __Pyx_CyFunction_CallMethod(func, self, arg, kw); - return result; -} -static PyObject *__Pyx_CyFunction_CallAsMethod(PyObject *func, PyObject *args, PyObject *kw) { - PyObject *result; - 
__pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *) func; -#if CYTHON_METH_FASTCALL && (CYTHON_VECTORCALL || CYTHON_BACKPORT_VECTORCALL) - __pyx_vectorcallfunc vc = __Pyx_CyFunction_func_vectorcall(cyfunc); - if (vc) { -#if CYTHON_ASSUME_SAFE_MACROS && CYTHON_ASSUME_SAFE_SIZE - return __Pyx_PyVectorcall_FastCallDict(func, vc, &PyTuple_GET_ITEM(args, 0), (size_t)PyTuple_GET_SIZE(args), kw); -#else - (void) &__Pyx_PyVectorcall_FastCallDict; - return PyVectorcall_Call(func, args, kw); -#endif - } -#endif - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - Py_ssize_t argc; - PyObject *new_args; - PyObject *self; -#if CYTHON_ASSUME_SAFE_SIZE - argc = PyTuple_GET_SIZE(args); -#else - argc = PyTuple_Size(args); - if (unlikely(argc < 0)) return NULL; -#endif - new_args = PyTuple_GetSlice(args, 1, argc); - if (unlikely(!new_args)) - return NULL; - self = PyTuple_GetItem(args, 0); - if (unlikely(!self)) { - Py_DECREF(new_args); - PyErr_Format(PyExc_TypeError, - "unbound method %.200S() needs an argument", - cyfunc->func_qualname); - return NULL; - } - result = __Pyx_CyFunction_CallMethod(func, self, new_args, kw); - Py_DECREF(new_args); - } else { - result = __Pyx_CyFunction_Call(func, args, kw); - } - return result; -} -#if CYTHON_METH_FASTCALL && (CYTHON_VECTORCALL || CYTHON_BACKPORT_VECTORCALL) -static CYTHON_INLINE int __Pyx_CyFunction_Vectorcall_CheckArgs(__pyx_CyFunctionObject *cyfunc, Py_ssize_t nargs, PyObject *kwnames) -{ - int ret = 0; - if ((cyfunc->flags & __Pyx_CYFUNCTION_CCLASS) && !(cyfunc->flags & __Pyx_CYFUNCTION_STATICMETHOD)) { - if (unlikely(nargs < 1)) { - __Pyx_CyFunction_raise_type_error( - cyfunc, "needs an argument"); - return -1; - } - ret = 1; - } - if (unlikely(kwnames) && unlikely(__Pyx_PyTuple_GET_SIZE(kwnames))) { - __Pyx_CyFunction_raise_type_error( - cyfunc, "takes no keyword arguments"); - return -1; - } - return ret; -} -static PyObject * 
__Pyx_CyFunction_Vectorcall_NOARGS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; -#if CYTHON_COMPILING_IN_LIMITED_API - PyCFunction meth = PyCFunction_GetFunction(cyfunc->func); - if (unlikely(!meth)) return NULL; -#else - PyCFunction meth = ((PyCFunctionObject*)cyfunc)->m_ml->ml_meth; -#endif - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)cyfunc)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)cyfunc)->m_self; -#endif - break; - default: - return NULL; - } - if (unlikely(nargs != 0)) { - __Pyx_CyFunction_raise_argument_count_error( - cyfunc, "takes no arguments", nargs); - return NULL; - } - return meth(self, NULL); -} -static PyObject * __Pyx_CyFunction_Vectorcall_O(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; -#if CYTHON_COMPILING_IN_LIMITED_API - PyCFunction meth = PyCFunction_GetFunction(cyfunc->func); - if (unlikely(!meth)) return NULL; -#else - PyCFunction meth = ((PyCFunctionObject*)cyfunc)->m_ml->ml_meth; -#endif - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, kwnames)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)cyfunc)->func); - if (unlikely(!self) && PyErr_Occurred()) return 
NULL; -#else - self = ((PyCFunctionObject*)cyfunc)->m_self; -#endif - break; - default: - return NULL; - } - if (unlikely(nargs != 1)) { - __Pyx_CyFunction_raise_argument_count_error( - cyfunc, "takes exactly one argument", nargs); - return NULL; - } - return meth(self, args[0]); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; -#if CYTHON_COMPILING_IN_LIMITED_API - PyCFunction meth = PyCFunction_GetFunction(cyfunc->func); - if (unlikely(!meth)) return NULL; -#else - PyCFunction meth = ((PyCFunctionObject*)cyfunc)->m_ml->ml_meth; -#endif - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)cyfunc)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)cyfunc)->m_self; -#endif - break; - default: - return NULL; - } - return ((__Pyx_PyCFunctionFastWithKeywords)(void(*)(void))meth)(self, args, nargs, kwnames); -} -static PyObject * __Pyx_CyFunction_Vectorcall_FASTCALL_KEYWORDS_METHOD(PyObject *func, PyObject *const *args, size_t nargsf, PyObject *kwnames) -{ - __pyx_CyFunctionObject *cyfunc = (__pyx_CyFunctionObject *)func; - PyTypeObject *cls = (PyTypeObject *) __Pyx_CyFunction_GetClassObj(cyfunc); -#if CYTHON_BACKPORT_VECTORCALL - Py_ssize_t nargs = (Py_ssize_t)nargsf; -#else - Py_ssize_t nargs = PyVectorcall_NARGS(nargsf); -#endif - PyObject *self; -#if CYTHON_COMPILING_IN_LIMITED_API - PyCFunction meth = PyCFunction_GetFunction(cyfunc->func); - if (unlikely(!meth)) return NULL; -#else - PyCFunction meth = 
((PyCFunctionObject*)cyfunc)->m_ml->ml_meth; -#endif - switch (__Pyx_CyFunction_Vectorcall_CheckArgs(cyfunc, nargs, NULL)) { - case 1: - self = args[0]; - args += 1; - nargs -= 1; - break; - case 0: -#if CYTHON_COMPILING_IN_LIMITED_API - self = PyCFunction_GetSelf(((__pyx_CyFunctionObject*)cyfunc)->func); - if (unlikely(!self) && PyErr_Occurred()) return NULL; -#else - self = ((PyCFunctionObject*)cyfunc)->m_self; -#endif - break; - default: - return NULL; - } - return ((__Pyx_PyCMethod)(void(*)(void))meth)(self, cls, args, (size_t)nargs, kwnames); -} -#endif -static PyType_Slot __pyx_CyFunctionType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_CyFunction_dealloc}, - {Py_tp_repr, (void *)__Pyx_CyFunction_repr}, - {Py_tp_call, (void *)__Pyx_CyFunction_CallAsMethod}, - {Py_tp_traverse, (void *)__Pyx_CyFunction_traverse}, - {Py_tp_clear, (void *)__Pyx_CyFunction_clear}, - {Py_tp_methods, (void *)__pyx_CyFunction_methods}, - {Py_tp_members, (void *)__pyx_CyFunction_members}, - {Py_tp_getset, (void *)__pyx_CyFunction_getsets}, - {Py_tp_descr_get, (void *)__Pyx_PyMethod_New}, - {0, 0}, -}; -static PyType_Spec __pyx_CyFunctionType_spec = { - __PYX_TYPE_MODULE_PREFIX "cython_function_or_method", - sizeof(__pyx_CyFunctionObject), - 0, -#ifdef Py_TPFLAGS_METHOD_DESCRIPTOR - Py_TPFLAGS_METHOD_DESCRIPTOR | -#endif -#if CYTHON_METH_FASTCALL -#if defined(Py_TPFLAGS_HAVE_VECTORCALL) - Py_TPFLAGS_HAVE_VECTORCALL | -#elif defined(_Py_TPFLAGS_HAVE_VECTORCALL) - _Py_TPFLAGS_HAVE_VECTORCALL | -#endif -#endif // CYTHON_METH_FASTCALL -#if PY_VERSION_HEX >= 0x030A0000 - Py_TPFLAGS_IMMUTABLETYPE | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_BASETYPE, - __pyx_CyFunctionType_slots -}; -static int __pyx_CyFunction_init(PyObject *module) { - __pyx_mstatetype *mstate = __Pyx_PyModule_GetState(module); - mstate->__pyx_CyFunctionType = __Pyx_FetchCommonTypeFromSpec( - mstate->__pyx_CommonTypesMetaclassType, module, &__pyx_CyFunctionType_spec, NULL); - if 
(unlikely(mstate->__pyx_CyFunctionType == NULL)) { - return -1; - } - return 0; -} -static CYTHON_INLINE PyObject *__Pyx_CyFunction_InitDefaults(PyObject *func, PyTypeObject *defaults_type) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults = PyObject_CallObject((PyObject*)defaults_type, NULL); // _PyObject_New(defaults_type); - if (unlikely(!m->defaults)) - return NULL; - return m->defaults; -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsTuple(PyObject *func, PyObject *tuple) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_tuple = tuple; - Py_INCREF(tuple); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetDefaultsKwDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->defaults_kwdict = dict; - Py_INCREF(dict); -} -static CYTHON_INLINE void __Pyx_CyFunction_SetAnnotationsDict(PyObject *func, PyObject *dict) { - __pyx_CyFunctionObject *m = (__pyx_CyFunctionObject *) func; - m->func_annotations = dict; - Py_INCREF(dict); -} - -/* CythonFunction */ -static PyObject *__Pyx_CyFunction_New(PyMethodDef *ml, int flags, PyObject* qualname, - PyObject *closure, PyObject *module, PyObject* globals, PyObject* code) { - PyObject *op = __Pyx_CyFunction_Init( - PyObject_GC_New(__pyx_CyFunctionObject, __pyx_mstate_global->__pyx_CyFunctionType), - ml, flags, qualname, closure, module, globals, code - ); - if (likely(op)) { - PyObject_GC_Track(op); - } - return op; -} - -/* CLineInTraceback */ -#if CYTHON_CLINE_IN_TRACEBACK && CYTHON_CLINE_IN_TRACEBACK_RUNTIME -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - CYTHON_MAYBE_UNUSED_VAR(tstate); - if (unlikely(!__pyx_mstate_global->__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if 
CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_mstate_global->__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __Pyx_BEGIN_CRITICAL_SECTION(*cython_runtime_dict); - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_mstate_global->__pyx_n_u_cline_in_traceback)) - Py_XINCREF(use_cline); - __Pyx_END_CRITICAL_SECTION(); - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStrNoError(__pyx_mstate_global->__pyx_cython_runtime, __pyx_mstate_global->__pyx_n_u_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_INCREF(use_cline); - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_mstate_global->__pyx_cython_runtime, __pyx_mstate_global->__pyx_n_u_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - Py_XDECREF(use_cline); - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static __Pyx_CachedCodeObjectType *__pyx__find_code_object(struct __Pyx_CodeObjectCache *code_cache, int code_line) { - __Pyx_CachedCodeObjectType* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!code_cache->entries)) 
{ - return NULL; - } - pos = __pyx_bisect_code_objects(code_cache->entries, code_cache->count, code_line); - if (unlikely(pos >= code_cache->count) || unlikely(code_cache->entries[pos].code_line != code_line)) { - return NULL; - } - code_object = code_cache->entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static __Pyx_CachedCodeObjectType *__pyx_find_code_object(int code_line) { -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING && !CYTHON_ATOMICS - (void)__pyx__find_code_object; - return NULL; // Most implementation should have atomics. But otherwise, don't make it thread-safe, just miss. -#else - struct __Pyx_CodeObjectCache *code_cache = &__pyx_mstate_global->__pyx_code_cache; -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - __pyx_nonatomic_int_type old_count = __pyx_atomic_incr_acq_rel(&code_cache->accessor_count); - if (old_count < 0) { - __pyx_atomic_decr_acq_rel(&code_cache->accessor_count); - return NULL; - } -#endif - __Pyx_CachedCodeObjectType *result = __pyx__find_code_object(code_cache, code_line); -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - __pyx_atomic_decr_acq_rel(&code_cache->accessor_count); -#endif - return result; -#endif -} -static void __pyx__insert_code_object(struct __Pyx_CodeObjectCache *code_cache, int code_line, __Pyx_CachedCodeObjectType* code_object) -{ - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = code_cache->entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - code_cache->entries = entries; - code_cache->max_count = 64; - code_cache->count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(code_cache->entries, code_cache->count, code_line); - if ((pos < code_cache->count) && unlikely(code_cache->entries[pos].code_line == code_line)) { - 
__Pyx_CachedCodeObjectType* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_INCREF(code_object); - Py_DECREF(tmp); - return; - } - if (code_cache->count == code_cache->max_count) { - int new_max = code_cache->max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - code_cache->entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - code_cache->entries = entries; - code_cache->max_count = new_max; - } - for (i=code_cache->count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - code_cache->count++; - Py_INCREF(code_object); -} -static void __pyx_insert_code_object(int code_line, __Pyx_CachedCodeObjectType* code_object) { -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING && !CYTHON_ATOMICS - (void)__pyx__insert_code_object; - return; // Most implementation should have atomics. But otherwise, don't make it thread-safe, just fail. 
-#else - struct __Pyx_CodeObjectCache *code_cache = &__pyx_mstate_global->__pyx_code_cache; -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - __pyx_nonatomic_int_type expected = 0; - if (!__pyx_atomic_int_cmp_exchange(&code_cache->accessor_count, &expected, INT_MIN)) { - return; - } -#endif - __pyx__insert_code_object(code_cache, code_line, code_object); -#if CYTHON_COMPILING_IN_CPYTHON_FREETHREADING - __pyx_atomic_sub(&code_cache->accessor_count, INT_MIN); -#endif -#endif -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 && !CYTHON_COMPILING_IN_LIMITED_API && !defined(PYPY_VERSION) - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#if CYTHON_COMPILING_IN_LIMITED_API -static PyObject *__Pyx_PyCode_Replace_For_AddTraceback(PyObject *code, PyObject *scratch_dict, - PyObject *firstlineno, PyObject *name) { - PyObject *replace = NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_firstlineno", firstlineno))) return NULL; - if (unlikely(PyDict_SetItemString(scratch_dict, "co_name", name))) return NULL; - replace = PyObject_GetAttrString(code, "replace"); - if (likely(replace)) { - PyObject *result = PyObject_Call(replace, __pyx_mstate_global->__pyx_empty_tuple, scratch_dict); - Py_DECREF(replace); - return result; - } - PyErr_Clear(); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyObject *code_object = NULL, *py_py_line = NULL, *py_funcname = NULL, *dict = NULL; - PyObject *replace = NULL, *getframe = NULL, *frame = NULL; - PyObject *exc_type, *exc_value, *exc_traceback; - int success = 0; - if (c_line) { - (void) __pyx_cfilenm; - (void) __Pyx_CLineForTraceback(__Pyx_PyThreadState_Current, c_line); - } - PyErr_Fetch(&exc_type, &exc_value, &exc_traceback); - code_object = __pyx_find_code_object(c_line ? 
-c_line : py_line); - if (!code_object) { - code_object = Py_CompileString("_getframe()", filename, Py_eval_input); - if (unlikely(!code_object)) goto bad; - py_py_line = PyLong_FromLong(py_line); - if (unlikely(!py_py_line)) goto bad; - py_funcname = PyUnicode_FromString(funcname); - if (unlikely(!py_funcname)) goto bad; - dict = PyDict_New(); - if (unlikely(!dict)) goto bad; - { - PyObject *old_code_object = code_object; - code_object = __Pyx_PyCode_Replace_For_AddTraceback(code_object, dict, py_py_line, py_funcname); - Py_DECREF(old_code_object); - } - if (unlikely(!code_object)) goto bad; - __pyx_insert_code_object(c_line ? -c_line : py_line, code_object); - } else { - dict = PyDict_New(); - } - getframe = PySys_GetObject("_getframe"); - if (unlikely(!getframe)) goto bad; - if (unlikely(PyDict_SetItemString(dict, "_getframe", getframe))) goto bad; - frame = PyEval_EvalCode(code_object, dict, dict); - if (unlikely(!frame) || frame == Py_None) goto bad; - success = 1; - bad: - PyErr_Restore(exc_type, exc_value, exc_traceback); - Py_XDECREF(code_object); - Py_XDECREF(py_py_line); - Py_XDECREF(py_funcname); - Py_XDECREF(dict); - Py_XDECREF(replace); - if (success) { - PyTraceBack_Here( - (struct _frame*)frame); - } - Py_XDECREF(frame); -} -#else -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - if (c_line) { - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - } - py_code = PyCode_NewEmpty(filename, funcname, py_line); - Py_XDECREF(py_funcname); - return py_code; -bad: - Py_XDECREF(py_funcname); - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - 
PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? -c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_mstate_global->__pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} -#endif - -/* Declarations */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) - #ifdef __cplusplus - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return ::std::complex< double >(x, y); - } - #else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - return x + y*(__pyx_t_double_complex)_Complex_I; - } - #endif -#else - static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double x, double y) { - __pyx_t_double_complex z; - z.real = x; - z.imag = y; - return z; - } -#endif - -/* Arithmetic */ -#if CYTHON_CCOMPLEX && (1) && (!0 || __cplusplus) -#else - static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - return (a.real == b.real) && (a.imag == b.imag); - } - static CYTHON_INLINE __pyx_t_double_complex 
__Pyx_c_sum_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real + b.real; - z.imag = a.imag + b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real - b.real; - z.imag = a.imag - b.imag; - return z; - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - z.real = a.real * b.real - a.imag * b.imag; - z.imag = a.real * b.imag + a.imag * b.real; - return z; - } - #if 1 - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else if (fabs(b.real) >= fabs(b.imag)) { - if (b.real == 0 && b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.imag); - } else { - double r = b.imag / b.real; - double s = (double)(1.0) / (b.real + b.imag * r); - return __pyx_t_double_complex_from_parts( - (a.real + a.imag * r) * s, (a.imag - a.real * r) * s); - } - } else { - double r = b.real / b.imag; - double s = (double)(1.0) / (b.imag + b.real * r); - return __pyx_t_double_complex_from_parts( - (a.real * r + a.imag) * s, (a.imag * r - a.real) * s); - } - } - #else - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - if (b.imag == 0) { - return __pyx_t_double_complex_from_parts(a.real / b.real, a.imag / b.real); - } else { - double denom = b.real * b.real + b.imag * b.imag; - return __pyx_t_double_complex_from_parts( - (a.real * b.real + a.imag * b.imag) / denom, - (a.imag * b.real - a.real * b.imag) / denom); - } - } - #endif - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex a) { - __pyx_t_double_complex 
z; - z.real = -a.real; - z.imag = -a.imag; - return z; - } - static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex a) { - return (a.real == 0) && (a.imag == 0); - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex a) { - __pyx_t_double_complex z; - z.real = a.real; - z.imag = -a.imag; - return z; - } - #if 1 - static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex z) { - #if !defined(HAVE_HYPOT) || defined(_MSC_VER) - return sqrt(z.real*z.real + z.imag*z.imag); - #else - return hypot(z.real, z.imag); - #endif - } - static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex a, __pyx_t_double_complex b) { - __pyx_t_double_complex z; - double r, lnr, theta, z_r, z_theta; - if (b.imag == 0 && b.real == (int)b.real) { - if (b.real < 0) { - double denom = a.real * a.real + a.imag * a.imag; - a.real = a.real / denom; - a.imag = -a.imag / denom; - b.real = -b.real; - } - switch ((int)b.real) { - case 0: - z.real = 1; - z.imag = 0; - return z; - case 1: - return a; - case 2: - return __Pyx_c_prod_double(a, a); - case 3: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, a); - case 4: - z = __Pyx_c_prod_double(a, a); - return __Pyx_c_prod_double(z, z); - } - } - if (a.imag == 0) { - if (a.real == 0) { - return a; - } else if ((b.imag == 0) && (a.real >= 0)) { - z.real = pow(a.real, b.real); - z.imag = 0; - return z; - } else if (a.real > 0) { - r = a.real; - theta = 0; - } else { - r = -a.real; - theta = atan2(0.0, -1.0); - } - } else { - r = __Pyx_c_abs_double(a); - theta = atan2(a.imag, a.real); - } - lnr = log(r); - z_r = exp(lnr * b.real - theta * b.imag); - z_theta = theta * b.real + lnr * b.imag; - z.real = z_r * cos(z_theta); - z.imag = z_r * sin(z_theta); - return z; - } - #endif -#endif - -/* FromPy */ -static __pyx_t_double_complex __Pyx_PyComplex_As___pyx_t_double_complex(PyObject* o) { -#if CYTHON_COMPILING_IN_LIMITED_API - double real=-1.0, imag=-1.0; 
- real = PyComplex_RealAsDouble(o); - if (unlikely(real == -1.0 && PyErr_Occurred())) goto end; - imag = PyComplex_ImagAsDouble(o); - end: - return __pyx_t_double_complex_from_parts( - (double)real, (double)imag - ); -#else - Py_complex cval; -#if !CYTHON_COMPILING_IN_PYPY && !CYTHON_COMPILING_IN_GRAAL - if (PyComplex_CheckExact(o)) - cval = ((PyComplexObject *)o)->cval; - else -#endif - cval = PyComplex_AsCComplex(o); - return __pyx_t_double_complex_from_parts( - (double)cval.real, - (double)cval.imag); -#endif -} - -/* CIntFromPyVerify */ -#define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* CIntFromPy */ -static CYTHON_INLINE int __Pyx_PyLong_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (unlikely(!PyLong_Check(x))) { - int val; - PyObject *tmp = __Pyx_PyNumber_Long(x); - if (!tmp) return (int) -1; - val = __Pyx_PyLong_As_int(tmp); - Py_DECREF(tmp); - return val; - } - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - 
} else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 2 * PyLong_SHIFT)) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 3 * PyLong_SHIFT)) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) >= 4 * PyLong_SHIFT)) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if 
((sizeof(int) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(int, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(int) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(int) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(int) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(int) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - 
__PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(int) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(int) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(int) - 1 > 4 * PyLong_SHIFT)) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(int) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(int) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - int val; - int ret = -1; -#if PY_VERSION_HEX >= 0x030d00A6 && !CYTHON_COMPILING_IN_LIMITED_API - Py_ssize_t bytes_copied = PyLong_AsNativeBytes( - x, &val, sizeof(val), Py_ASNATIVEBYTES_NATIVE_ENDIAN | (is_unsigned ? 
Py_ASNATIVEBYTES_UNSIGNED_BUFFER | Py_ASNATIVEBYTES_REJECT_NEGATIVE : 0)); - if (unlikely(bytes_copied == -1)) { - } else if (unlikely(bytes_copied > (Py_ssize_t) sizeof(val))) { - goto raise_overflow; - } else { - ret = 0; - } -#elif PY_VERSION_HEX < 0x030d0000 && !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)x, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *v; - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - int chunk_size = (sizeof(long) < 8) ? 30 : 62; - if (likely(PyLong_CheckExact(x))) { - v = __Pyx_NewRef(x); - } else { - v = PyNumber_Long(x); - if (unlikely(!v)) return (int) -1; - assert(PyLong_CheckExact(v)); - } - { - int result = PyObject_RichCompareBool(v, Py_False, Py_LT); - if (unlikely(result < 0)) { - Py_DECREF(v); - return (int) -1; - } - is_negative = result == 1; - } - if (is_unsigned && unlikely(is_negative)) { - Py_DECREF(v); - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - Py_DECREF(v); - if (unlikely(!stepval)) - return (int) -1; - } else { - stepval = v; - } - v = NULL; - val = (int) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(int) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - long idigit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - val |= ((int) idigit) << bits; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - } - Py_DECREF(shift); shift = NULL; - Py_DECREF(mask); mask = NULL; 
- { - long idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(int) * 8) - bits - (is_unsigned ? 0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((int) idigit) << bits; - } - if (!is_unsigned) { - if (unlikely(val & (((int) 1) << (sizeof(int) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - if (unlikely(ret)) - return (int) -1; - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* PyObjectVectorCallKwBuilder */ -#if CYTHON_VECTORCALL -static int __Pyx_VectorcallBuilder_AddArg(PyObject *key, PyObject *value, PyObject *builder, PyObject **args, int n) { - (void)__Pyx_PyObject_FastCallDict; - if (__Pyx_PyTuple_SET_ITEM(builder, n, key) != (0)) return -1; - Py_INCREF(key); - args[n] = value; - return 0; -} -CYTHON_UNUSED static int __Pyx_VectorcallBuilder_AddArg_Check(PyObject *key, PyObject *value, PyObject *builder, PyObject **args, int n) { - (void)__Pyx_VectorcallBuilder_AddArgStr; - if (unlikely(!PyUnicode_Check(key))) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - return -1; - } - return __Pyx_VectorcallBuilder_AddArg(key, value, builder, args, n); -} -static int __Pyx_VectorcallBuilder_AddArgStr(const char *key, PyObject *value, PyObject *builder, PyObject **args, int n) { - PyObject *pyKey = PyUnicode_FromString(key); - if (!pyKey) return -1; - return __Pyx_VectorcallBuilder_AddArg(pyKey, value, builder, args, n); -} -#else // CYTHON_VECTORCALL -CYTHON_UNUSED static int __Pyx_VectorcallBuilder_AddArg_Check(PyObject *key, PyObject *value, PyObject *builder, CYTHON_UNUSED PyObject **args, CYTHON_UNUSED int n) { - if 
(unlikely(!PyUnicode_Check(key))) { - PyErr_SetString(PyExc_TypeError, "keywords must be strings"); - return -1; - } - return PyDict_SetItem(builder, key, value); -} -#endif - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyLong_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyLong_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#if defined(HAVE_LONG_LONG) && !CYTHON_COMPILING_IN_PYPY - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyLong_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - unsigned char *bytes = (unsigned char *)&value; -#if !CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x030d00A4 - if (is_unsigned) { - return PyLong_FromUnsignedNativeBytes(bytes, sizeof(value), -1); - } else { - return PyLong_FromNativeBytes(bytes, sizeof(value), -1); - } -#elif !CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030d0000 - int one = 1; int little = (int)*(unsigned char *)&one; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); -#else - int one = 1; int little = (int)*(unsigned char *)&one; - PyObject *from_bytes, *result = NULL, *kwds = NULL; - PyObject *py_bytes = NULL, *order_str = NULL; - from_bytes = PyObject_GetAttrString((PyObject*)&PyLong_Type, "from_bytes"); - if (!from_bytes) return NULL; - py_bytes = 
PyBytes_FromStringAndSize((char*)bytes, sizeof(long)); - if (!py_bytes) goto limited_bad; - order_str = PyUnicode_FromString(little ? "little" : "big"); - if (!order_str) goto limited_bad; - { - PyObject *args[3+(CYTHON_VECTORCALL ? 1 : 0)] = { NULL, py_bytes, order_str }; - if (!is_unsigned) { - kwds = __Pyx_MakeVectorcallBuilderKwds(1); - if (!kwds) goto limited_bad; - if (__Pyx_VectorcallBuilder_AddArgStr("signed", __Pyx_NewRef(Py_True), kwds, args+3, 0) < 0) goto limited_bad; - } - result = __Pyx_Object_Vectorcall_CallFromBuilder(from_bytes, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, kwds); - } - limited_bad: - Py_XDECREF(kwds); - Py_XDECREF(order_str); - Py_XDECREF(py_bytes); - Py_XDECREF(from_bytes); - return result; -#endif - } -} - -/* CIntToPy */ -static CYTHON_INLINE PyObject* __Pyx_PyLong_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyLong_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#if defined(HAVE_LONG_LONG) && !CYTHON_COMPILING_IN_PYPY - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyLong_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - unsigned char *bytes = (unsigned char *)&value; -#if !CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x030d00A4 - if (is_unsigned) { - return PyLong_FromUnsignedNativeBytes(bytes, sizeof(value), -1); - } else { - return 
PyLong_FromNativeBytes(bytes, sizeof(value), -1); - } -#elif !CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX < 0x030d0000 - int one = 1; int little = (int)*(unsigned char *)&one; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); -#else - int one = 1; int little = (int)*(unsigned char *)&one; - PyObject *from_bytes, *result = NULL, *kwds = NULL; - PyObject *py_bytes = NULL, *order_str = NULL; - from_bytes = PyObject_GetAttrString((PyObject*)&PyLong_Type, "from_bytes"); - if (!from_bytes) return NULL; - py_bytes = PyBytes_FromStringAndSize((char*)bytes, sizeof(int)); - if (!py_bytes) goto limited_bad; - order_str = PyUnicode_FromString(little ? "little" : "big"); - if (!order_str) goto limited_bad; - { - PyObject *args[3+(CYTHON_VECTORCALL ? 1 : 0)] = { NULL, py_bytes, order_str }; - if (!is_unsigned) { - kwds = __Pyx_MakeVectorcallBuilderKwds(1); - if (!kwds) goto limited_bad; - if (__Pyx_VectorcallBuilder_AddArgStr("signed", __Pyx_NewRef(Py_True), kwds, args+3, 0) < 0) goto limited_bad; - } - result = __Pyx_Object_Vectorcall_CallFromBuilder(from_bytes, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET, kwds); - } - limited_bad: - Py_XDECREF(kwds); - Py_XDECREF(order_str); - Py_XDECREF(py_bytes); - Py_XDECREF(from_bytes); - return result; -#endif - } -} - -/* FormatTypeName */ -#if CYTHON_COMPILING_IN_LIMITED_API && __PYX_LIMITED_VERSION_HEX < 0x030d0000 -static __Pyx_TypeName -__Pyx_PyType_GetFullyQualifiedName(PyTypeObject* tp) -{ - PyObject *module = NULL, *name = NULL, *result = NULL; - #if __PYX_LIMITED_VERSION_HEX < 0x030b0000 - name = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_mstate_global->__pyx_n_u_qualname); - #else - name = PyType_GetQualName(tp); - #endif - if (unlikely(name == NULL) || unlikely(!PyUnicode_Check(name))) goto bad; - module = __Pyx_PyObject_GetAttrStr((PyObject *)tp, - __pyx_mstate_global->__pyx_n_u_module); - if (unlikely(module == NULL) || unlikely(!PyUnicode_Check(module))) goto bad; - if 
(PyUnicode_CompareWithASCIIString(module, "builtins") == 0) { - result = name; - name = NULL; - goto done; - } - result = PyUnicode_FromFormat("%U.%U", module, name); - if (unlikely(result == NULL)) goto bad; - done: - Py_XDECREF(name); - Py_XDECREF(module); - return result; - bad: - PyErr_Clear(); - if (name) { - result = name; - name = NULL; - } else { - result = __Pyx_NewRef(__pyx_mstate_global->__pyx_kp_u__2); - } - goto done; -} -#endif - -/* CIntFromPy */ -static CYTHON_INLINE long __Pyx_PyLong_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (unlikely(!PyLong_Check(x))) { - long val; - PyObject *tmp = __Pyx_PyNumber_Long(x); - if (!tmp) return (long) -1; - val = __Pyx_PyLong_As_long(tmp); - Py_DECREF(tmp); - return val; - } - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - if (unlikely(__Pyx_PyLong_IsNeg(x))) { - goto raise_neg_overflow; - } else if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_upylong, __Pyx_PyLong_CompactValueUnsigned(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_DigitCount(x)) { - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 2 * PyLong_SHIFT)) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << 
PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 3 * PyLong_SHIFT)) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) >= 4 * PyLong_SHIFT)) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030C00A7 - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if ((sizeof(long) <= sizeof(unsigned long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(unsigned PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - if (__Pyx_PyLong_IsCompact(x)) { - __PYX_VERIFY_RETURN_INT(long, __Pyx_compact_pylong, __Pyx_PyLong_CompactValue(x)) - } else { - const digit* digits = __Pyx_PyLong_Digits(x); - assert(__Pyx_PyLong_DigitCount(x) > 1); - switch (__Pyx_PyLong_SignedDigitCount(x)) { - case -2: - if ((8 * sizeof(long) - 1 > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | 
(unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if ((8 * sizeof(long) > 1 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 2 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if ((8 * sizeof(long) - 1 > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if ((8 * sizeof(long) > 2 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 3 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if ((8 * sizeof(long) - 1 > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) (((long)-1)*(((((((((long)digits[3]) << 
PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if ((8 * sizeof(long) > 3 * PyLong_SHIFT)) { - if ((8 * sizeof(unsigned long) > 4 * PyLong_SHIFT)) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if ((8 * sizeof(long) - 1 > 4 * PyLong_SHIFT)) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } - } -#endif - if ((sizeof(long) <= sizeof(long))) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if ((sizeof(long) <= sizeof(PY_LONG_LONG))) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { - long val; - int ret = -1; -#if PY_VERSION_HEX >= 0x030d00A6 && !CYTHON_COMPILING_IN_LIMITED_API - Py_ssize_t bytes_copied = PyLong_AsNativeBytes( - x, &val, sizeof(val), Py_ASNATIVEBYTES_NATIVE_ENDIAN | (is_unsigned ? Py_ASNATIVEBYTES_UNSIGNED_BUFFER | Py_ASNATIVEBYTES_REJECT_NEGATIVE : 0)); - if (unlikely(bytes_copied == -1)) { - } else if (unlikely(bytes_copied > (Py_ssize_t) sizeof(val))) { - goto raise_overflow; - } else { - ret = 0; - } -#elif PY_VERSION_HEX < 0x030d0000 && !(CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API) || defined(_PyLong_AsByteArray) - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - ret = _PyLong_AsByteArray((PyLongObject *)x, - bytes, sizeof(val), - is_little, !is_unsigned); -#else - PyObject *v; - PyObject *stepval = NULL, *mask = NULL, *shift = NULL; - int bits, remaining_bits, is_negative = 0; - int chunk_size = (sizeof(long) < 8) ? 
30 : 62; - if (likely(PyLong_CheckExact(x))) { - v = __Pyx_NewRef(x); - } else { - v = PyNumber_Long(x); - if (unlikely(!v)) return (long) -1; - assert(PyLong_CheckExact(v)); - } - { - int result = PyObject_RichCompareBool(v, Py_False, Py_LT); - if (unlikely(result < 0)) { - Py_DECREF(v); - return (long) -1; - } - is_negative = result == 1; - } - if (is_unsigned && unlikely(is_negative)) { - Py_DECREF(v); - goto raise_neg_overflow; - } else if (is_negative) { - stepval = PyNumber_Invert(v); - Py_DECREF(v); - if (unlikely(!stepval)) - return (long) -1; - } else { - stepval = v; - } - v = NULL; - val = (long) 0; - mask = PyLong_FromLong((1L << chunk_size) - 1); if (unlikely(!mask)) goto done; - shift = PyLong_FromLong(chunk_size); if (unlikely(!shift)) goto done; - for (bits = 0; bits < (int) sizeof(long) * 8 - chunk_size; bits += chunk_size) { - PyObject *tmp, *digit; - long idigit; - digit = PyNumber_And(stepval, mask); - if (unlikely(!digit)) goto done; - idigit = PyLong_AsLong(digit); - Py_DECREF(digit); - if (unlikely(idigit < 0)) goto done; - val |= ((long) idigit) << bits; - tmp = PyNumber_Rshift(stepval, shift); - if (unlikely(!tmp)) goto done; - Py_DECREF(stepval); stepval = tmp; - } - Py_DECREF(shift); shift = NULL; - Py_DECREF(mask); mask = NULL; - { - long idigit = PyLong_AsLong(stepval); - if (unlikely(idigit < 0)) goto done; - remaining_bits = ((int) sizeof(long) * 8) - bits - (is_unsigned ? 
0 : 1); - if (unlikely(idigit >= (1L << remaining_bits))) - goto raise_overflow; - val |= ((long) idigit) << bits; - } - if (!is_unsigned) { - if (unlikely(val & (((long) 1) << (sizeof(long) * 8 - 1)))) - goto raise_overflow; - if (is_negative) - val = ~val; - } - ret = 0; - done: - Py_XDECREF(shift); - Py_XDECREF(mask); - Py_XDECREF(stepval); -#endif - if (unlikely(ret)) - return (long) -1; - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = __Pyx_PyType_GetSlot(a, tp_base, PyTypeObject*); - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -static CYTHON_INLINE int __Pyx_IsAnySubtype2(PyTypeObject *cls, PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (cls == a || cls == b) return 1; - mro = cls->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - PyObject *base = PyTuple_GET_ITEM(mro, i); - if (base == (PyObject *)a || base == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(cls, a) || __Pyx_InBases(cls, b); -} -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - if (exc_type1) { - return __Pyx_IsAnySubtype2((PyTypeObject*)err, (PyTypeObject*)exc_type1, (PyTypeObject*)exc_type2); - } else { - return 
__Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } -} -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); - for (i=0; i<n; i++) { - if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1; - } - for (i=0; i<n; i++) { - if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, PyTuple_GET_ITEM(tuple, i))) return 1; - } - return 0; -} -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *exc_type) { - if (likely(PyExceptionClass_Check(exc_type))) { - return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type); - } else if (likely(PyTuple_Check(exc_type))) { - return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type); - } - return PyErr_GivenExceptionMatches(err, exc_type); -} -#endif - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if PY_VERSION_HEX >= 0x030B00a4 - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_value = exc_info->exc_value; - exc_info->exc_value = *value; - if (tmp_value == NULL || tmp_value == Py_None) { - Py_XDECREF(tmp_value); - tmp_value = NULL; - tmp_type = NULL; - tmp_tb = NULL; - } else { - tmp_type = (PyObject*) Py_TYPE(tmp_value); - Py_INCREF(tmp_type); - #if CYTHON_COMPILING_IN_CPYTHON - tmp_tb = ((PyBaseExceptionObject*) tmp_value)->traceback; - Py_XINCREF(tmp_tb); - #else - tmp_tb = PyException_GetTraceback(tmp_value); - #endif - } - #elif CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* PyObjectCall2Args */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args[3] = {NULL, arg1, arg2}; - return __Pyx_PyObject_FastCall(function, args+1, 2 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); -} - -/* PyObjectCallMethod1 */ -#if !(CYTHON_VECTORCALL && 
(__PYX_LIMITED_VERSION_HEX >= 0x030C0000 || (!CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x03090000))) -static PyObject* __Pyx__PyObject_CallMethod1(PyObject* method, PyObject* arg) { - PyObject *result = __Pyx_PyObject_CallOneArg(method, arg); - Py_DECREF(method); - return result; -} -#endif -static PyObject* __Pyx_PyObject_CallMethod1(PyObject* obj, PyObject* method_name, PyObject* arg) { -#if CYTHON_VECTORCALL && (__PYX_LIMITED_VERSION_HEX >= 0x030C0000 || (!CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x03090000)) - PyObject *args[2] = {obj, arg}; - (void) __Pyx_PyObject_GetMethod; - (void) __Pyx_PyObject_CallOneArg; - (void) __Pyx_PyObject_Call2Args; - return PyObject_VectorcallMethod(method_name, args, 2 | PY_VECTORCALL_ARGUMENTS_OFFSET, NULL); -#else - PyObject *method = NULL, *result; - int is_method = __Pyx_PyObject_GetMethod(obj, method_name, &method); - if (likely(is_method)) { - result = __Pyx_PyObject_Call2Args(method, obj, arg); - Py_DECREF(method); - return result; - } - if (unlikely(!method)) return NULL; - return __Pyx__PyObject_CallMethod1(method, arg); -#endif -} - -/* ReturnWithStopIteration */ -static void __Pyx__ReturnWithStopIteration(PyObject* value, int async); -static CYTHON_INLINE void __Pyx_ReturnWithStopIteration(PyObject* value, int async, int iternext) { - if (value == Py_None) { - if (async || !iternext) - PyErr_SetNone(async ? PyExc_StopAsyncIteration : PyExc_StopIteration); - return; - } - __Pyx__ReturnWithStopIteration(value, async); -} -static void __Pyx__ReturnWithStopIteration(PyObject* value, int async) { -#if CYTHON_COMPILING_IN_CPYTHON - __Pyx_PyThreadState_declare -#endif - PyObject *exc; - PyObject *exc_type = async ? 
PyExc_StopAsyncIteration : PyExc_StopIteration; -#if CYTHON_COMPILING_IN_CPYTHON - if ((PY_VERSION_HEX >= (0x030C00A6)) || unlikely(PyTuple_Check(value) || PyExceptionInstance_Check(value))) { - if (PY_VERSION_HEX >= (0x030e00A1)) { - exc = __Pyx_PyObject_CallOneArg(exc_type, value); - } else { - PyObject *args_tuple = PyTuple_New(1); - if (unlikely(!args_tuple)) return; - Py_INCREF(value); - PyTuple_SET_ITEM(args_tuple, 0, value); - exc = PyObject_Call(exc_type, args_tuple, NULL); - Py_DECREF(args_tuple); - } - if (unlikely(!exc)) return; - } else { - Py_INCREF(value); - exc = value; - } - #if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - #if CYTHON_USE_EXC_INFO_STACK - if (!__pyx_tstate->exc_info->exc_value) - #else - if (!__pyx_tstate->exc_type) - #endif - { - Py_INCREF(exc_type); - __Pyx_ErrRestore(exc_type, exc, NULL); - return; - } - #endif -#else - exc = __Pyx_PyObject_CallOneArg(exc_type, value); - if (unlikely(!exc)) return; -#endif - PyErr_SetObject(exc_type, exc); - Py_DECREF(exc); -} - -/* CoroutineBase */ -#if !CYTHON_COMPILING_IN_LIMITED_API -#include <frameobject.h> -#if PY_VERSION_HEX >= 0x030b00a6 && !defined(PYPY_VERSION) - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -#endif // CYTHON_COMPILING_IN_LIMITED_API -static CYTHON_INLINE void -__Pyx_Coroutine_Undelegate(__pyx_CoroutineObject *gen) { -#if CYTHON_USE_AM_SEND - gen->yieldfrom_am_send = NULL; -#endif - Py_CLEAR(gen->yieldfrom); -} -static int __Pyx_PyGen__FetchStopIterationValue(PyThreadState *__pyx_tstate, PyObject **pvalue) { - PyObject *et, *ev, *tb; - PyObject *value = NULL; - CYTHON_UNUSED_VAR(__pyx_tstate); - __Pyx_ErrFetch(&et, &ev, &tb); - if (!et) { - Py_XDECREF(tb); - Py_XDECREF(ev); - Py_INCREF(Py_None); - *pvalue = Py_None; - return 0; - } - if (likely(et == PyExc_StopIteration)) { - if (!ev) { - Py_INCREF(Py_None); - value = Py_None; - } - else if (likely(__Pyx_IS_TYPE(ev, (PyTypeObject*)PyExc_StopIteration))) { - #if
CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_GRAAL - value = PyObject_GetAttr(ev, __pyx_mstate_global->__pyx_n_u_value); - if (unlikely(!value)) goto limited_api_failure; - #else - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); - #endif - Py_DECREF(ev); - } - else if (unlikely(PyTuple_Check(ev))) { - Py_ssize_t tuple_size = __Pyx_PyTuple_GET_SIZE(ev); - #if !CYTHON_ASSUME_SAFE_SIZE - if (unlikely(tuple_size < 0)) { - Py_XDECREF(tb); - Py_DECREF(ev); - Py_DECREF(et); - return -1; - } - #endif - if (tuple_size >= 1) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - value = PyTuple_GET_ITEM(ev, 0); - Py_INCREF(value); -#elif CYTHON_ASSUME_SAFE_MACROS - value = PySequence_ITEM(ev, 0); -#else - value = PySequence_GetItem(ev, 0); - if (!value) goto limited_api_failure; -#endif - } else { - Py_INCREF(Py_None); - value = Py_None; - } - Py_DECREF(ev); - } - else if (!__Pyx_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration)) { - value = ev; - } - if (likely(value)) { - Py_XDECREF(tb); - Py_DECREF(et); - *pvalue = value; - return 0; - } - } else if (!__Pyx_PyErr_GivenExceptionMatches(et, PyExc_StopIteration)) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - PyErr_NormalizeException(&et, &ev, &tb); - if (unlikely(!PyObject_TypeCheck(ev, (PyTypeObject*)PyExc_StopIteration))) { - __Pyx_ErrRestore(et, ev, tb); - return -1; - } - Py_XDECREF(tb); - Py_DECREF(et); -#if CYTHON_COMPILING_IN_LIMITED_API - value = PyObject_GetAttr(ev, __pyx_mstate_global->__pyx_n_u_value); -#else - value = ((PyStopIterationObject *)ev)->value; - Py_INCREF(value); -#endif - Py_DECREF(ev); -#if CYTHON_COMPILING_IN_LIMITED_API - if (unlikely(!value)) return -1; -#endif - *pvalue = value; - return 0; -#if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_GRAAL || !CYTHON_ASSUME_SAFE_MACROS - limited_api_failure: - Py_XDECREF(et); - Py_XDECREF(tb); - Py_XDECREF(ev); - return -1; -#endif -} -static CYTHON_INLINE -__Pyx_PySendResult 
__Pyx_Coroutine_status_from_result(PyObject **retval) { - if (*retval) { - return PYGEN_NEXT; - } else if (likely(__Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, retval) == 0)) { - return PYGEN_RETURN; - } else { - return PYGEN_ERROR; - } -} -static CYTHON_INLINE -void __Pyx_Coroutine_ExceptionClear(__Pyx_ExcInfoStruct *exc_state) { -#if PY_VERSION_HEX >= 0x030B00a4 - Py_CLEAR(exc_state->exc_value); -#else - PyObject *t, *v, *tb; - t = exc_state->exc_type; - v = exc_state->exc_value; - tb = exc_state->exc_traceback; - exc_state->exc_type = NULL; - exc_state->exc_value = NULL; - exc_state->exc_traceback = NULL; - Py_XDECREF(t); - Py_XDECREF(v); - Py_XDECREF(tb); -#endif -} -#define __Pyx_Coroutine_AlreadyRunningError(gen) (__Pyx__Coroutine_AlreadyRunningError(gen), (PyObject*)NULL) -static void __Pyx__Coroutine_AlreadyRunningError(__pyx_CoroutineObject *gen) { - const char *msg; - CYTHON_MAYBE_UNUSED_VAR(gen); - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check((PyObject*)gen)) { - msg = "coroutine already executing"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact((PyObject*)gen)) { - msg = "async generator already executing"; - #endif - } else { - msg = "generator already executing"; - } - PyErr_SetString(PyExc_ValueError, msg); -} -static void __Pyx_Coroutine_AlreadyTerminatedError(PyObject *gen, PyObject *value, int closing) { - CYTHON_MAYBE_UNUSED_VAR(gen); - CYTHON_MAYBE_UNUSED_VAR(closing); - #ifdef __Pyx_Coroutine_USED - if (!closing && __Pyx_Coroutine_Check(gen)) { - PyErr_SetString(PyExc_RuntimeError, "cannot reuse already awaited coroutine"); - } else - #endif - if (value) { - #ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(gen)) - PyErr_SetNone(PyExc_StopAsyncIteration); - else - #endif - PyErr_SetNone(PyExc_StopIteration); - } -} -static -__Pyx_PySendResult __Pyx_Coroutine_SendEx(__pyx_CoroutineObject *self, PyObject *value, PyObject **result, int closing) { - 
__Pyx_PyThreadState_declare - PyThreadState *tstate; - __Pyx_ExcInfoStruct *exc_state; - PyObject *retval; - assert(__Pyx_Coroutine_get_is_running(self)); // Callers should ensure is_running - if (unlikely(self->resume_label == -1)) { - __Pyx_Coroutine_AlreadyTerminatedError((PyObject*)self, value, closing); - return PYGEN_ERROR; - } -#if CYTHON_FAST_THREAD_STATE - __Pyx_PyThreadState_assign - tstate = __pyx_tstate; -#else - tstate = __Pyx_PyThreadState_Current; -#endif - exc_state = &self->gi_exc_state; - if (exc_state->exc_value) { - #if CYTHON_COMPILING_IN_LIMITED_API || CYTHON_COMPILING_IN_PYPY - #else - PyObject *exc_tb; - #if PY_VERSION_HEX >= 0x030B00a4 && !CYTHON_COMPILING_IN_CPYTHON - exc_tb = PyException_GetTraceback(exc_state->exc_value); - #elif PY_VERSION_HEX >= 0x030B00a4 - exc_tb = ((PyBaseExceptionObject*) exc_state->exc_value)->traceback; - #else - exc_tb = exc_state->exc_traceback; - #endif - if (exc_tb) { - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - assert(f->f_back == NULL); - #if PY_VERSION_HEX >= 0x030B00A1 - f->f_back = PyThreadState_GetFrame(tstate); - #else - Py_XINCREF(tstate->frame); - f->f_back = tstate->frame; - #endif - #if PY_VERSION_HEX >= 0x030B00a4 && !CYTHON_COMPILING_IN_CPYTHON - Py_DECREF(exc_tb); - #endif - } - #endif - } -#if CYTHON_USE_EXC_INFO_STACK - exc_state->previous_item = tstate->exc_info; - tstate->exc_info = exc_state; -#else - if (exc_state->exc_type) { - __Pyx_ExceptionSwap(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } else { - __Pyx_Coroutine_ExceptionClear(exc_state); - __Pyx_ExceptionSave(&exc_state->exc_type, &exc_state->exc_value, &exc_state->exc_traceback); - } -#endif - retval = self->body(self, tstate, value); -#if CYTHON_USE_EXC_INFO_STACK - exc_state = &self->gi_exc_state; - tstate->exc_info = exc_state->previous_item; - exc_state->previous_item = NULL; - __Pyx_Coroutine_ResetFrameBackpointer(exc_state); -#endif - *result = 
retval; - if (self->resume_label == -1) { - return likely(retval) ? PYGEN_RETURN : PYGEN_ERROR; - } - return PYGEN_NEXT; -} -static CYTHON_INLINE void __Pyx_Coroutine_ResetFrameBackpointer(__Pyx_ExcInfoStruct *exc_state) { -#if CYTHON_COMPILING_IN_PYPY || CYTHON_COMPILING_IN_LIMITED_API - CYTHON_UNUSED_VAR(exc_state); -#else - PyObject *exc_tb; - #if PY_VERSION_HEX >= 0x030B00a4 - if (!exc_state->exc_value) return; - exc_tb = PyException_GetTraceback(exc_state->exc_value); - #else - exc_tb = exc_state->exc_traceback; - #endif - if (likely(exc_tb)) { - PyTracebackObject *tb = (PyTracebackObject *) exc_tb; - PyFrameObject *f = tb->tb_frame; - Py_CLEAR(f->f_back); - #if PY_VERSION_HEX >= 0x030B00a4 - Py_DECREF(exc_tb); - #endif - } -#endif -} -#define __Pyx_Coroutine_MethodReturnFromResult(gen, result, retval, iternext)\ - ((result) == PYGEN_NEXT ? (retval) : __Pyx__Coroutine_MethodReturnFromResult(gen, result, retval, iternext)) -static PyObject * -__Pyx__Coroutine_MethodReturnFromResult(PyObject* gen, __Pyx_PySendResult result, PyObject *retval, int iternext) { - CYTHON_MAYBE_UNUSED_VAR(gen); - if (likely(result == PYGEN_RETURN)) { - int is_async = 0; - #ifdef __Pyx_AsyncGen_USED - is_async = __Pyx_AsyncGen_CheckExact(gen); - #endif - __Pyx_ReturnWithStopIteration(retval, is_async, iternext); - Py_XDECREF(retval); - } - return NULL; -} -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE -PyObject *__Pyx_PyGen_Send(PyGenObject *gen, PyObject *arg) { -#if PY_VERSION_HEX <= 0x030A00A1 - return _PyGen_Send(gen, arg); -#else - PyObject *result; - if (PyIter_Send((PyObject*)gen, arg ? 
arg : Py_None, &result) == PYGEN_RETURN) { - if (PyAsyncGen_CheckExact(gen)) { - assert(result == Py_None); - PyErr_SetNone(PyExc_StopAsyncIteration); - } - else if (result == Py_None) { - PyErr_SetNone(PyExc_StopIteration); - } - else { -#if PY_VERSION_HEX < 0x030d00A1 - _PyGen_SetStopIterationValue(result); -#else - if (!PyTuple_Check(result) && !PyExceptionInstance_Check(result)) { - PyErr_SetObject(PyExc_StopIteration, result); - } else { - PyObject *exc = __Pyx_PyObject_CallOneArg(PyExc_StopIteration, result); - if (likely(exc != NULL)) { - PyErr_SetObject(PyExc_StopIteration, exc); - Py_DECREF(exc); - } - } -#endif - } - Py_DECREF(result); - result = NULL; - } - return result; -#endif -} -#endif -static CYTHON_INLINE __Pyx_PySendResult -__Pyx_Coroutine_FinishDelegation(__pyx_CoroutineObject *gen, PyObject** retval) { - __Pyx_PySendResult result; - PyObject *val = NULL; - assert(__Pyx_Coroutine_get_is_running(gen)); - __Pyx_Coroutine_Undelegate(gen); - __Pyx_PyGen__FetchStopIterationValue(__Pyx_PyThreadState_Current, &val); - result = __Pyx_Coroutine_SendEx(gen, val, retval, 0); - Py_XDECREF(val); - return result; -} -#if CYTHON_USE_AM_SEND -static __Pyx_PySendResult -__Pyx_Coroutine_SendToDelegate(__pyx_CoroutineObject *gen, __Pyx_pyiter_sendfunc gen_am_send, PyObject *value, PyObject **retval) { - PyObject *ret = NULL; - __Pyx_PySendResult delegate_result, result; - assert(__Pyx_Coroutine_get_is_running(gen)); - delegate_result = gen_am_send(gen->yieldfrom, value, &ret); - if (delegate_result == PYGEN_NEXT) { - assert (ret != NULL); - *retval = ret; - return PYGEN_NEXT; - } - assert (delegate_result != PYGEN_ERROR || ret == NULL); - __Pyx_Coroutine_Undelegate(gen); - result = __Pyx_Coroutine_SendEx(gen, ret, retval, 0); - Py_XDECREF(ret); - return result; -} -#endif -static PyObject *__Pyx_Coroutine_Send(PyObject *self, PyObject *value) { - PyObject *retval = NULL; - __Pyx_PySendResult result = __Pyx_Coroutine_AmSend(self, value, &retval); - return 
__Pyx_Coroutine_MethodReturnFromResult(self, result, retval, 0); -} -static __Pyx_PySendResult -__Pyx_Coroutine_AmSend(PyObject *self, PyObject *value, PyObject **retval) { - __Pyx_PySendResult result; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - if (unlikely(__Pyx_Coroutine_test_and_set_is_running(gen))) { - *retval = __Pyx_Coroutine_AlreadyRunningError(gen); - return PYGEN_ERROR; - } - #if CYTHON_USE_AM_SEND - if (gen->yieldfrom_am_send) { - result = __Pyx_Coroutine_SendToDelegate(gen, gen->yieldfrom_am_send, value, retval); - } else - #endif - if (gen->yieldfrom) { - PyObject *yf = gen->yieldfrom; - PyObject *ret; - #if !CYTHON_USE_AM_SEND - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - ret = __Pyx_Coroutine_Send(yf, value); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - ret = __Pyx_async_gen_asend_send(yf, value); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? NULL : value); - } else - if (PyCoro_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, value == Py_None ? 
NULL : value); - } else - #endif - #endif - { - #if !CYTHON_COMPILING_IN_LIMITED_API || __PYX_LIMITED_VERSION_HEX >= 0x03080000 - if (value == Py_None && PyIter_Check(yf)) - ret = __Pyx_PyIter_Next_Plain(yf); - else - #endif - ret = __Pyx_PyObject_CallMethod1(yf, __pyx_mstate_global->__pyx_n_u_send, value); - } - if (likely(ret)) { - __Pyx_Coroutine_unset_is_running(gen); - *retval = ret; - return PYGEN_NEXT; - } - result = __Pyx_Coroutine_FinishDelegation(gen, retval); - } else { - result = __Pyx_Coroutine_SendEx(gen, value, retval, 0); - } - __Pyx_Coroutine_unset_is_running(gen); - return result; -} -static int __Pyx_Coroutine_CloseIter(__pyx_CoroutineObject *gen, PyObject *yf) { - __Pyx_PySendResult result; - PyObject *retval = NULL; - CYTHON_UNUSED_VAR(gen); - assert(__Pyx_Coroutine_get_is_running(gen)); - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - result = __Pyx_Coroutine_Close(yf, &retval); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_Check(yf)) { - result = __Pyx_Coroutine_Close(yf, &retval); - } else - if (__Pyx_CoroutineAwait_CheckExact(yf)) { - result = __Pyx_CoroutineAwait_Close((__pyx_CoroutineAwaitObject*)yf); - } else - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_PyAsyncGenASend_CheckExact(yf)) { - retval = __Pyx_async_gen_asend_close(yf, NULL); - result = PYGEN_RETURN; - } else - if (__pyx_PyAsyncGenAThrow_CheckExact(yf)) { - retval = __Pyx_async_gen_athrow_close(yf, NULL); - result = PYGEN_RETURN; - } else - #endif - { - PyObject *meth; - result = PYGEN_RETURN; - meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_mstate_global->__pyx_n_u_close); - if (unlikely(!meth)) { - if (unlikely(PyErr_Occurred())) { - PyErr_WriteUnraisable(yf); - } - } else { - retval = __Pyx_PyObject_CallNoArg(meth); - Py_DECREF(meth); - if (unlikely(!retval)) { - result = PYGEN_ERROR; - } - } - } - Py_XDECREF(retval); - return result == PYGEN_ERROR ? 
-1 : 0; -} -static PyObject *__Pyx_Generator_Next(PyObject *self) { - __Pyx_PySendResult result; - PyObject *retval = NULL; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - if (unlikely(__Pyx_Coroutine_test_and_set_is_running(gen))) { - return __Pyx_Coroutine_AlreadyRunningError(gen); - } - #if CYTHON_USE_AM_SEND - if (gen->yieldfrom_am_send) { - result = __Pyx_Coroutine_SendToDelegate(gen, gen->yieldfrom_am_send, Py_None, &retval); - } else - #endif - if (gen->yieldfrom) { - PyObject *yf = gen->yieldfrom; - PyObject *ret; - #ifdef __Pyx_Generator_USED - if (__Pyx_Generator_CheckExact(yf)) { - ret = __Pyx_Generator_Next(yf); - } else - #endif - #ifdef __Pyx_Coroutine_USED - if (__Pyx_Coroutine_CheckExact(yf)) { - ret = __Pyx_Coroutine_Send(yf, Py_None); - } else - #endif - #if CYTHON_COMPILING_IN_CPYTHON && (PY_VERSION_HEX < 0x030A00A3 || !CYTHON_USE_AM_SEND) - if (PyGen_CheckExact(yf)) { - ret = __Pyx_PyGen_Send((PyGenObject*)yf, NULL); - } else - #endif - ret = __Pyx_PyIter_Next_Plain(yf); - if (likely(ret)) { - __Pyx_Coroutine_unset_is_running(gen); - return ret; - } - result = __Pyx_Coroutine_FinishDelegation(gen, &retval); - } else { - result = __Pyx_Coroutine_SendEx(gen, Py_None, &retval, 0); - } - __Pyx_Coroutine_unset_is_running(gen); - return __Pyx_Coroutine_MethodReturnFromResult(self, result, retval, 1); -} -static PyObject *__Pyx_Coroutine_Close_Method(PyObject *self, PyObject *arg) { - PyObject *retval = NULL; - __Pyx_PySendResult result; - CYTHON_UNUSED_VAR(arg); - result = __Pyx_Coroutine_Close(self, &retval); - if (unlikely(result == PYGEN_ERROR)) - return NULL; - Py_XDECREF(retval); - Py_RETURN_NONE; -} -static __Pyx_PySendResult -__Pyx_Coroutine_Close(PyObject *self, PyObject **retval) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PySendResult result; - PyObject *yf; - int err = 0; - if (unlikely(__Pyx_Coroutine_test_and_set_is_running(gen))) { - *retval = __Pyx_Coroutine_AlreadyRunningError(gen); - return 
PYGEN_ERROR; - } - yf = gen->yieldfrom; - if (yf) { - Py_INCREF(yf); - err = __Pyx_Coroutine_CloseIter(gen, yf); - __Pyx_Coroutine_Undelegate(gen); - Py_DECREF(yf); - } - if (err == 0) - PyErr_SetNone(PyExc_GeneratorExit); - result = __Pyx_Coroutine_SendEx(gen, NULL, retval, 1); - if (result == PYGEN_ERROR) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_Coroutine_unset_is_running(gen); - if (!__Pyx_PyErr_Occurred()) { - return PYGEN_RETURN; - } else if (likely(__Pyx_PyErr_ExceptionMatches2(PyExc_GeneratorExit, PyExc_StopIteration))) { - __Pyx_PyErr_Clear(); - return PYGEN_RETURN; - } - return PYGEN_ERROR; - } else if (likely(result == PYGEN_RETURN && *retval == Py_None)) { - __Pyx_Coroutine_unset_is_running(gen); - return PYGEN_RETURN; - } else { - const char *msg; - Py_DECREF(*retval); - *retval = NULL; - if ((0)) { - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_Coroutine_Check(self)) { - msg = "coroutine ignored GeneratorExit"; - #endif - #ifdef __Pyx_AsyncGen_USED - } else if (__Pyx_AsyncGen_CheckExact(self)) { - msg = "async generator ignored GeneratorExit"; - #endif - } else { - msg = "generator ignored GeneratorExit"; - } - PyErr_SetString(PyExc_RuntimeError, msg); - __Pyx_Coroutine_unset_is_running(gen); - return PYGEN_ERROR; - } -} -static PyObject *__Pyx__Coroutine_Throw(PyObject *self, PyObject *typ, PyObject *val, PyObject *tb, - PyObject *args, int close_on_genexit) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject *yf; - if (unlikely(__Pyx_Coroutine_test_and_set_is_running(gen))) - return __Pyx_Coroutine_AlreadyRunningError(gen); - yf = gen->yieldfrom; - if (yf) { - __Pyx_PySendResult result; - PyObject *ret; - Py_INCREF(yf); - if (__Pyx_PyErr_GivenExceptionMatches(typ, PyExc_GeneratorExit) && close_on_genexit) { - int err = __Pyx_Coroutine_CloseIter(gen, yf); - Py_DECREF(yf); - __Pyx_Coroutine_Undelegate(gen); - if (err < 0) - goto propagate_exception; - goto throw_here; - } - if (0 - #ifdef 
__Pyx_Generator_USED - || __Pyx_Generator_CheckExact(yf) - #endif - #ifdef __Pyx_Coroutine_USED - || __Pyx_Coroutine_Check(yf) - #endif - ) { - ret = __Pyx__Coroutine_Throw(yf, typ, val, tb, args, close_on_genexit); - #ifdef __Pyx_Coroutine_USED - } else if (__Pyx_CoroutineAwait_CheckExact(yf)) { - ret = __Pyx__Coroutine_Throw(((__pyx_CoroutineAwaitObject*)yf)->coroutine, typ, val, tb, args, close_on_genexit); - #endif - } else { - PyObject *meth = __Pyx_PyObject_GetAttrStrNoError(yf, __pyx_mstate_global->__pyx_n_u_throw); - if (unlikely(!meth)) { - Py_DECREF(yf); - if (unlikely(PyErr_Occurred())) { - __Pyx_Coroutine_unset_is_running(gen); - return NULL; - } - __Pyx_Coroutine_Undelegate(gen); - goto throw_here; - } - if (likely(args)) { - ret = __Pyx_PyObject_Call(meth, args, NULL); - } else { - PyObject *cargs[4] = {NULL, typ, val, tb}; - ret = __Pyx_PyObject_FastCall(meth, cargs+1, 3 | __Pyx_PY_VECTORCALL_ARGUMENTS_OFFSET); - } - Py_DECREF(meth); - } - Py_DECREF(yf); - if (ret) { - __Pyx_Coroutine_unset_is_running(gen); - return ret; - } - result = __Pyx_Coroutine_FinishDelegation(gen, &ret); - __Pyx_Coroutine_unset_is_running(gen); - return __Pyx_Coroutine_MethodReturnFromResult(self, result, ret, 0); - } -throw_here: - __Pyx_Raise(typ, val, tb, NULL); -propagate_exception: - { - PyObject *retval = NULL; - __Pyx_PySendResult result = __Pyx_Coroutine_SendEx(gen, NULL, &retval, 0); - __Pyx_Coroutine_unset_is_running(gen); - return __Pyx_Coroutine_MethodReturnFromResult(self, result, retval, 0); - } -} -static PyObject *__Pyx_Coroutine_Throw(PyObject *self, PyObject *args) { - PyObject *typ; - PyObject *val = NULL; - PyObject *tb = NULL; - if (unlikely(!PyArg_UnpackTuple(args, "throw", 1, 3, &typ, &val, &tb))) - return NULL; - return __Pyx__Coroutine_Throw(self, typ, val, tb, args, 1); -} -static CYTHON_INLINE int __Pyx_Coroutine_traverse_excstate(__Pyx_ExcInfoStruct *exc_state, visitproc visit, void *arg) { -#if PY_VERSION_HEX >= 0x030B00a4 - 
Py_VISIT(exc_state->exc_value); -#else - Py_VISIT(exc_state->exc_type); - Py_VISIT(exc_state->exc_value); - Py_VISIT(exc_state->exc_traceback); -#endif - return 0; -} -static int __Pyx_Coroutine_traverse(__pyx_CoroutineObject *gen, visitproc visit, void *arg) { - { - int e = __Pyx_call_type_traverse((PyObject*)gen, 1, visit, arg); - if (e) return e; - } - Py_VISIT(gen->closure); - Py_VISIT(gen->classobj); - Py_VISIT(gen->yieldfrom); - return __Pyx_Coroutine_traverse_excstate(&gen->gi_exc_state, visit, arg); -} -static int __Pyx_Coroutine_clear(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - Py_CLEAR(gen->closure); - Py_CLEAR(gen->classobj); - __Pyx_Coroutine_Undelegate(gen); - __Pyx_Coroutine_ExceptionClear(&gen->gi_exc_state); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - Py_CLEAR(((__pyx_PyAsyncGenObject*)gen)->ag_finalizer); - } -#endif - Py_CLEAR(gen->gi_code); - Py_CLEAR(gen->gi_frame); - Py_CLEAR(gen->gi_name); - Py_CLEAR(gen->gi_qualname); - Py_CLEAR(gen->gi_modulename); - return 0; -} -static void __Pyx_Coroutine_dealloc(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - PyObject_GC_UnTrack(gen); - if (gen->gi_weakreflist != NULL) - PyObject_ClearWeakRefs(self); - if (gen->resume_label >= 0) { - PyObject_GC_Track(self); -#if CYTHON_USE_TP_FINALIZE - if (unlikely(PyObject_CallFinalizerFromDealloc(self))) -#else - { - destructor del = __Pyx_PyObject_GetSlot(gen, tp_del, destructor); - if (del) del(self); - } - if (unlikely(Py_REFCNT(self) > 0)) -#endif - { - return; - } - PyObject_GC_UnTrack(self); - } -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - /* We have to handle this case for asynchronous generators - right here, because this code has to be between UNTRACK - and GC_Del. 
*/ - Py_CLEAR(((__pyx_PyAsyncGenObject*)self)->ag_finalizer); - } -#endif - __Pyx_Coroutine_clear(self); - __Pyx_PyHeapTypeObject_GC_Del(gen); -} -#if CYTHON_USE_TP_FINALIZE -static void __Pyx_Coroutine_del(PyObject *self) { - PyObject *error_type, *error_value, *error_traceback; - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject *) self; - __Pyx_PyThreadState_declare - if (gen->resume_label < 0) { - return; - } - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&error_type, &error_value, &error_traceback); -#ifdef __Pyx_AsyncGen_USED - if (__Pyx_AsyncGen_CheckExact(self)) { - __pyx_PyAsyncGenObject *agen = (__pyx_PyAsyncGenObject*)self; - PyObject *finalizer = agen->ag_finalizer; - if (finalizer && !agen->ag_closed) { - PyObject *res = __Pyx_PyObject_CallOneArg(finalizer, self); - if (unlikely(!res)) { - PyErr_WriteUnraisable(self); - } else { - Py_DECREF(res); - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); - return; - } - } -#endif - if (unlikely(gen->resume_label == 0 && !error_value)) { -#ifdef __Pyx_Coroutine_USED -#ifdef __Pyx_Generator_USED - if (!__Pyx_Generator_CheckExact(self)) -#endif - { - PyObject_GC_UnTrack(self); - if (unlikely(PyErr_WarnFormat(PyExc_RuntimeWarning, 1, "coroutine '%.50S' was never awaited", gen->gi_qualname) < 0)) - PyErr_WriteUnraisable(self); - PyObject_GC_Track(self); - } -#endif - } else { - PyObject *retval = NULL; - __Pyx_PySendResult result = __Pyx_Coroutine_Close(self, &retval); - if (result == PYGEN_ERROR) { - PyErr_WriteUnraisable(self); - } else { - Py_XDECREF(retval); - } - } - __Pyx_ErrRestore(error_type, error_value, error_traceback); -} -#endif -static PyObject * -__Pyx_Coroutine_get_name(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_name; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_name(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); 
- if (unlikely(value == NULL || !PyUnicode_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__name__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_name, value); - return 0; -} -static PyObject * -__Pyx_Coroutine_get_qualname(__pyx_CoroutineObject *self, void *context) -{ - PyObject *name = self->gi_qualname; - CYTHON_UNUSED_VAR(context); - if (unlikely(!name)) name = Py_None; - Py_INCREF(name); - return name; -} -static int -__Pyx_Coroutine_set_qualname(__pyx_CoroutineObject *self, PyObject *value, void *context) -{ - CYTHON_UNUSED_VAR(context); - if (unlikely(value == NULL || !PyUnicode_Check(value))) { - PyErr_SetString(PyExc_TypeError, - "__qualname__ must be set to a string object"); - return -1; - } - Py_INCREF(value); - __Pyx_Py_XDECREF_SET(self->gi_qualname, value); - return 0; -} -static PyObject * -__Pyx__Coroutine_get_frame(__pyx_CoroutineObject *self) -{ -#if !CYTHON_COMPILING_IN_LIMITED_API - PyObject *frame; - #if PY_VERSION_HEX >= 0x030d0000 - Py_BEGIN_CRITICAL_SECTION(self); - #endif - frame = self->gi_frame; - if (!frame) { - if (unlikely(!self->gi_code)) { - Py_RETURN_NONE; - } - PyObject *globals = PyDict_New(); - if (unlikely(!globals)) return NULL; - frame = (PyObject *) PyFrame_New( - PyThreadState_Get(), /*PyThreadState *tstate,*/ - (PyCodeObject*) self->gi_code, /*PyCodeObject *code,*/ - globals, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - Py_DECREF(globals); - if (unlikely(!frame)) - return NULL; - if (unlikely(self->gi_frame)) { - Py_DECREF(frame); - frame = self->gi_frame; - } else { - self->gi_frame = frame; - } - } - Py_INCREF(frame); - #if PY_VERSION_HEX >= 0x030d0000 - Py_END_CRITICAL_SECTION(); - #endif - return frame; -#else - CYTHON_UNUSED_VAR(self); - Py_RETURN_NONE; -#endif -} -static PyObject * -__Pyx_Coroutine_get_frame(__pyx_CoroutineObject *self, void *context) { - CYTHON_UNUSED_VAR(context); - PyObject *frame = self->gi_frame; - if (frame) - return 
__Pyx_NewRef(frame); - return __Pyx__Coroutine_get_frame(self); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_New( - PyTypeObject* type, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - __pyx_CoroutineObject *gen = PyObject_GC_New(__pyx_CoroutineObject, type); - if (unlikely(!gen)) - return NULL; - return __Pyx__Coroutine_NewInit(gen, body, code, closure, name, qualname, module_name); -} -static __pyx_CoroutineObject *__Pyx__Coroutine_NewInit( - __pyx_CoroutineObject *gen, __pyx_coroutine_body_t body, PyObject *code, PyObject *closure, - PyObject *name, PyObject *qualname, PyObject *module_name) { - gen->body = body; - gen->closure = closure; - Py_XINCREF(closure); - gen->is_running = 0; - gen->resume_label = 0; - gen->classobj = NULL; - gen->yieldfrom = NULL; - gen->yieldfrom_am_send = NULL; - #if PY_VERSION_HEX >= 0x030B00a4 && !CYTHON_COMPILING_IN_LIMITED_API - gen->gi_exc_state.exc_value = NULL; - #else - gen->gi_exc_state.exc_type = NULL; - gen->gi_exc_state.exc_value = NULL; - gen->gi_exc_state.exc_traceback = NULL; - #endif -#if CYTHON_USE_EXC_INFO_STACK - gen->gi_exc_state.previous_item = NULL; -#endif - gen->gi_weakreflist = NULL; - Py_XINCREF(qualname); - gen->gi_qualname = qualname; - Py_XINCREF(name); - gen->gi_name = name; - Py_XINCREF(module_name); - gen->gi_modulename = module_name; - Py_XINCREF(code); - gen->gi_code = code; - gen->gi_frame = NULL; - PyObject_GC_Track(gen); - return gen; -} -static char __Pyx_Coroutine_test_and_set_is_running(__pyx_CoroutineObject *gen) { - char result; - #if PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_BEGIN_CRITICAL_SECTION(gen); - #endif - result = gen->is_running; - gen->is_running = 1; - #if PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_END_CRITICAL_SECTION(); - #endif - return result; -} -static void __Pyx_Coroutine_unset_is_running(__pyx_CoroutineObject *gen) { - #if 
PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_BEGIN_CRITICAL_SECTION(gen); - #endif - assert(gen->is_running); - gen->is_running = 0; - #if PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_END_CRITICAL_SECTION(); - #endif -} -static char __Pyx_Coroutine_get_is_running(__pyx_CoroutineObject *gen) { - char result; - #if PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_BEGIN_CRITICAL_SECTION(gen); - #endif - result = gen->is_running; - #if PY_VERSION_HEX >= 0x030d0000 && !CYTHON_COMPILING_IN_LIMITED_API - Py_END_CRITICAL_SECTION(); - #endif - return result; -} -static PyObject *__Pyx_Coroutine_get_is_running_getter(PyObject *gen, void *closure) { - CYTHON_UNUSED_VAR(closure); - char result = __Pyx_Coroutine_get_is_running((__pyx_CoroutineObject*)gen); - if (result) Py_RETURN_TRUE; - else Py_RETURN_FALSE; -} -#if __PYX_HAS_PY_AM_SEND == 2 -static void __Pyx_SetBackportTypeAmSend(PyTypeObject *type, __Pyx_PyAsyncMethodsStruct *static_amsend_methods, __Pyx_pyiter_sendfunc am_send) { - Py_ssize_t ptr_offset = (char*)(type->tp_as_async) - (char*)type; - if (ptr_offset < 0 || ptr_offset > type->tp_basicsize) { - return; - } - memcpy((void*)static_amsend_methods, (void*)(type->tp_as_async), sizeof(*type->tp_as_async)); - static_amsend_methods->am_send = am_send; - type->tp_as_async = __Pyx_SlotTpAsAsync(static_amsend_methods); -} -#endif -static PyObject *__Pyx_Coroutine_fail_reduce_ex(PyObject *self, PyObject *arg) { - CYTHON_UNUSED_VAR(arg); - __Pyx_TypeName self_type_name = __Pyx_PyType_GetFullyQualifiedName(Py_TYPE((PyObject*)self)); - PyErr_Format(PyExc_TypeError, "cannot pickle '" __Pyx_FMT_TYPENAME "' object", - self_type_name); - __Pyx_DECREF_TypeName(self_type_name); - return NULL; -} - -/* Generator */ -static PyMethodDef __pyx_Generator_methods[] = { - {"send", (PyCFunction) __Pyx_Coroutine_Send, METH_O, - PyDoc_STR("send(arg) -> send 'arg' into generator,\nreturn next yielded value or raise 
StopIteration.")}, - {"throw", (PyCFunction) __Pyx_Coroutine_Throw, METH_VARARGS, - PyDoc_STR("throw(typ[,val[,tb]]) -> raise exception in generator,\nreturn next yielded value or raise StopIteration.")}, - {"close", (PyCFunction) __Pyx_Coroutine_Close_Method, METH_NOARGS, - PyDoc_STR("close() -> raise GeneratorExit inside generator.")}, - {"__reduce_ex__", (PyCFunction) __Pyx_Coroutine_fail_reduce_ex, METH_O, 0}, - {"__reduce__", (PyCFunction) __Pyx_Coroutine_fail_reduce_ex, METH_NOARGS, 0}, - {0, 0, 0, 0} -}; -static PyMemberDef __pyx_Generator_memberlist[] = { - {"gi_yieldfrom", T_OBJECT, offsetof(__pyx_CoroutineObject, yieldfrom), READONLY, - PyDoc_STR("object being iterated by 'yield from', or None")}, - {"gi_code", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_code), READONLY, NULL}, - {"__module__", T_OBJECT, offsetof(__pyx_CoroutineObject, gi_modulename), 0, 0}, - {"__weaklistoffset__", T_PYSSIZET, offsetof(__pyx_CoroutineObject, gi_weakreflist), READONLY, 0}, - {0, 0, 0, 0, 0} -}; -static PyGetSetDef __pyx_Generator_getsets[] = { - {"__name__", (getter)__Pyx_Coroutine_get_name, (setter)__Pyx_Coroutine_set_name, - PyDoc_STR("name of the generator"), 0}, - {"__qualname__", (getter)__Pyx_Coroutine_get_qualname, (setter)__Pyx_Coroutine_set_qualname, - PyDoc_STR("qualified name of the generator"), 0}, - {"gi_frame", (getter)__Pyx_Coroutine_get_frame, NULL, - PyDoc_STR("Frame of the generator"), 0}, - {"gi_running", __Pyx_Coroutine_get_is_running_getter, NULL, NULL, NULL}, - {0, 0, 0, 0, 0} -}; -static PyType_Slot __pyx_GeneratorType_slots[] = { - {Py_tp_dealloc, (void *)__Pyx_Coroutine_dealloc}, - {Py_tp_traverse, (void *)__Pyx_Coroutine_traverse}, - {Py_tp_iter, (void *)PyObject_SelfIter}, - {Py_tp_iternext, (void *)__Pyx_Generator_Next}, - {Py_tp_methods, (void *)__pyx_Generator_methods}, - {Py_tp_members, (void *)__pyx_Generator_memberlist}, - {Py_tp_getset, (void *)__pyx_Generator_getsets}, - {Py_tp_getattro, (void *) PyObject_GenericGetAttr}, -#if 
CYTHON_USE_TP_FINALIZE - {Py_tp_finalize, (void *)__Pyx_Coroutine_del}, -#endif -#if __PYX_HAS_PY_AM_SEND == 1 - {Py_am_send, (void *)__Pyx_Coroutine_AmSend}, -#endif - {0, 0}, -}; -static PyType_Spec __pyx_GeneratorType_spec = { - __PYX_TYPE_MODULE_PREFIX "generator", - sizeof(__pyx_CoroutineObject), - 0, -#if PY_VERSION_HEX >= 0x030A0000 - Py_TPFLAGS_IMMUTABLETYPE | -#endif - Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC | Py_TPFLAGS_HAVE_FINALIZE | __Pyx_TPFLAGS_HAVE_AM_SEND, - __pyx_GeneratorType_slots -}; -#if __PYX_HAS_PY_AM_SEND == 2 -static __Pyx_PyAsyncMethodsStruct __pyx_Generator_as_async; -#endif -static int __pyx_Generator_init(PyObject *module) { - __pyx_mstatetype *mstate = __Pyx_PyModule_GetState(module); - mstate->__pyx_GeneratorType = __Pyx_FetchCommonTypeFromSpec( - mstate->__pyx_CommonTypesMetaclassType, module, &__pyx_GeneratorType_spec, NULL); - if (unlikely(!mstate->__pyx_GeneratorType)) { - return -1; - } -#if __PYX_HAS_PY_AM_SEND == 2 - __Pyx_SetBackportTypeAmSend(mstate->__pyx_GeneratorType, &__pyx_Generator_as_async, &__Pyx_Coroutine_AmSend); -#endif - return 0; -} -static PyObject *__Pyx_Generator_GetInlinedResult(PyObject *self) { - __pyx_CoroutineObject *gen = (__pyx_CoroutineObject*) self; - PyObject *retval = NULL; - if (unlikely(__Pyx_Coroutine_test_and_set_is_running(gen))) { - return __Pyx_Coroutine_AlreadyRunningError(gen); - } - __Pyx_PySendResult result = __Pyx_Coroutine_SendEx(gen, Py_None, &retval, 0); - __Pyx_Coroutine_unset_is_running(gen); - (void) result; - assert (result == PYGEN_RETURN || result == PYGEN_ERROR); - assert ((result == PYGEN_RETURN && retval != NULL) || (result == PYGEN_ERROR && retval == NULL)); - return retval; -} - -/* GetRuntimeVersion */ -static unsigned long __Pyx_get_runtime_version(void) { -#if __PYX_LIMITED_VERSION_HEX >= 0x030b0000 - return Py_Version & ~0xFFUL; -#else - static unsigned long __Pyx_cached_runtime_version = 0; - if (__Pyx_cached_runtime_version == 0) { - const char* rt_version = 
Py_GetVersion(); - unsigned long version = 0; - unsigned long factor = 0x01000000UL; - unsigned int digit = 0; - int i = 0; - while (factor) { - while ('0' <= rt_version[i] && rt_version[i] <= '9') { - digit = digit * 10 + (unsigned int) (rt_version[i] - '0'); - ++i; - } - version += factor * digit; - if (rt_version[i] != '.') - break; - digit = 0; - factor >>= 8; - ++i; - } - __Pyx_cached_runtime_version = version; - } - return __Pyx_cached_runtime_version; -#endif -} - -/* CheckBinaryVersion */ -static int __Pyx_check_binary_version(unsigned long ct_version, unsigned long rt_version, int allow_newer) { - const unsigned long MAJOR_MINOR = 0xFFFF0000UL; - if ((rt_version & MAJOR_MINOR) == (ct_version & MAJOR_MINOR)) - return 0; - if (likely(allow_newer && (rt_version & MAJOR_MINOR) > (ct_version & MAJOR_MINOR))) - return 1; - { - char message[200]; - PyOS_snprintf(message, sizeof(message), - "compile time Python version %d.%d " - "of module '%.100s' " - "%s " - "runtime version %d.%d", - (int) (ct_version >> 24), (int) ((ct_version >> 16) & 0xFF), - __Pyx_MODULE_NAME, - (allow_newer) ? 
"was newer than" : "does not match", - (int) (rt_version >> 24), (int) ((rt_version >> 16) & 0xFF) - ); - return PyErr_WarnEx(NULL, message, 1); - } -} - -/* NewCodeObj */ -#if CYTHON_COMPILING_IN_LIMITED_API - static PyObject* __Pyx__PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *exception_table = NULL; - PyObject *types_module=NULL, *code_type=NULL, *result=NULL; - #if __PYX_LIMITED_VERSION_HEX < 0x030b0000 - PyObject *version_info; - PyObject *py_minor_version = NULL; - #endif - long minor_version = 0; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - #if __PYX_LIMITED_VERSION_HEX >= 0x030b0000 - minor_version = 11; - #else - if (!(version_info = PySys_GetObject("version_info"))) goto end; - if (!(py_minor_version = PySequence_GetItem(version_info, 1))) goto end; - minor_version = PyLong_AsLong(py_minor_version); - Py_DECREF(py_minor_version); - if (minor_version == -1 && PyErr_Occurred()) goto end; - #endif - if (!(types_module = PyImport_ImportModule("types"))) goto end; - if (!(code_type = PyObject_GetAttrString(types_module, "CodeType"))) goto end; - if (minor_version <= 7) { - (void)p; - result = PyObject_CallFunction(code_type, "iiiiiOOOOOOiOOO", a, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else if (minor_version <= 10) { - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOiOOO", a,p, k, l, s, f, code, - c, n, v, fn, name, fline, lnos, fv, cell); - } else { - if (!(exception_table = PyBytes_FromStringAndSize(NULL, 0))) goto end; - result = PyObject_CallFunction(code_type, "iiiiiiOOOOOOOiOOOO", a,p, k, l, s, f, code, - c, n, v, fn, name, name, fline, lnos, exception_table, fv, cell); - } - end: - Py_XDECREF(code_type); - Py_XDECREF(exception_table); - Py_XDECREF(types_module); - if (type) { - PyErr_Restore(type, value, traceback); 
- } - return result; - } -#elif PY_VERSION_HEX >= 0x030B0000 - static PyCodeObject* __Pyx__PyCode_New(int a, int p, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyCodeObject *result; - result = - #if PY_VERSION_HEX >= 0x030C0000 - PyUnstable_Code_NewWithPosOnlyArgs - #else - PyCode_NewWithPosOnlyArgs - #endif - (a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, name, fline, lnos, __pyx_mstate_global->__pyx_empty_bytes); - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030c00A1 - if (likely(result)) - result->_co_firsttraceable = 0; - #endif - return result; - } -#elif PY_VERSION_HEX >= 0x030800B2 && !CYTHON_COMPILING_IN_PYPY - #define __Pyx__PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_NewWithPosOnlyArgs(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#else - #define __Pyx__PyCode_New(a, p, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif -static PyObject* __Pyx_PyCode_New( - const __Pyx_PyCode_New_function_description descr, - PyObject * const *varnames, - PyObject *filename, - PyObject *funcname, - const char *line_table, - PyObject *tuple_dedup_map -) { - PyObject *code_obj = NULL, *varnames_tuple_dedup = NULL, *code_bytes = NULL, *line_table_bytes = NULL; - Py_ssize_t var_count = (Py_ssize_t) descr.nlocals; - PyObject *varnames_tuple = PyTuple_New(var_count); - if (unlikely(!varnames_tuple)) return NULL; - for (Py_ssize_t i=0; i < var_count; i++) { - Py_INCREF(varnames[i]); - if (__Pyx_PyTuple_SET_ITEM(varnames_tuple, i, varnames[i]) != (0)) goto done; - } - #if CYTHON_COMPILING_IN_LIMITED_API - varnames_tuple_dedup = PyDict_GetItem(tuple_dedup_map, varnames_tuple); - if (!varnames_tuple_dedup) { - if (unlikely(PyDict_SetItem(tuple_dedup_map, varnames_tuple, 
varnames_tuple) < 0)) goto done; - varnames_tuple_dedup = varnames_tuple; - } - #else - varnames_tuple_dedup = PyDict_SetDefault(tuple_dedup_map, varnames_tuple, varnames_tuple); - if (unlikely(!varnames_tuple_dedup)) goto done; - #endif - #if CYTHON_AVOID_BORROWED_REFS - Py_INCREF(varnames_tuple_dedup); - #endif - if (__PYX_LIMITED_VERSION_HEX >= (0x030b0000) && line_table != NULL - && !CYTHON_COMPILING_IN_GRAAL) { - line_table_bytes = PyBytes_FromStringAndSize(line_table, descr.line_table_length); - if (unlikely(!line_table_bytes)) goto done; - Py_ssize_t code_len = (descr.line_table_length * 2 + 4) & ~3; - code_bytes = PyBytes_FromStringAndSize(NULL, code_len); - if (unlikely(!code_bytes)) goto done; - char* c_code_bytes = PyBytes_AsString(code_bytes); - if (unlikely(!c_code_bytes)) goto done; - memset(c_code_bytes, 0, (size_t) code_len); - } - code_obj = (PyObject*) __Pyx__PyCode_New( - (int) descr.argcount, - (int) descr.num_posonly_args, - (int) descr.num_kwonly_args, - (int) descr.nlocals, - 0, - (int) descr.flags, - code_bytes ? code_bytes : __pyx_mstate_global->__pyx_empty_bytes, - __pyx_mstate_global->__pyx_empty_tuple, - __pyx_mstate_global->__pyx_empty_tuple, - varnames_tuple_dedup, - __pyx_mstate_global->__pyx_empty_tuple, - __pyx_mstate_global->__pyx_empty_tuple, - filename, - funcname, - (int) descr.first_line, - (__PYX_LIMITED_VERSION_HEX >= (0x030b0000) && line_table_bytes) ? 
line_table_bytes : __pyx_mstate_global->__pyx_empty_bytes - ); -done: - Py_XDECREF(code_bytes); - Py_XDECREF(line_table_bytes); - #if CYTHON_AVOID_BORROWED_REFS - Py_XDECREF(varnames_tuple_dedup); - #endif - Py_DECREF(varnames_tuple); - return code_obj; -} - -/* InitStrings */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry const *t, PyObject **target, const char* const* encoding_names) { - while (t->s) { - PyObject *str; - if (t->is_unicode) { - if (t->intern) { - str = PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - str = PyUnicode_Decode(t->s, t->n - 1, encoding_names[t->encoding], NULL); - } else { - str = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - str = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - if (!str) - return -1; - *target = str; - if (PyObject_Hash(str) == -1) - return -1; - ++t; - ++target; - } - return 0; -} - -#include <string.h> -static CYTHON_INLINE Py_ssize_t __Pyx_ssize_strlen(const char *s) { - size_t len = strlen(s); - if (unlikely(len > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, "byte string is too long"); - return -1; - } - return (Py_ssize_t) len; -} -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - Py_ssize_t len = __Pyx_ssize_strlen(c_str); - if (unlikely(len < 0)) return NULL; - return __Pyx_PyUnicode_FromStringAndSize(c_str, len); -} -static CYTHON_INLINE PyObject* __Pyx_PyByteArray_FromString(const char* c_str) { - Py_ssize_t len = __Pyx_ssize_strlen(c_str); - if (unlikely(len < 0)) return NULL; - return PyByteArray_FromStringAndSize(c_str, len); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if 
CYTHON_COMPILING_IN_LIMITED_API - { - const char* result; - Py_ssize_t unicode_length; - CYTHON_MAYBE_UNUSED_VAR(unicode_length); // only for __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - #if __PYX_LIMITED_VERSION_HEX < 0x030A0000 - if (unlikely(PyArg_Parse(o, "s#", &result, length) < 0)) return NULL; - #else - result = PyUnicode_AsUTF8AndSize(o, length); - #endif - #if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - unicode_length = PyUnicode_GetLength(o); - if (unlikely(unicode_length < 0)) return NULL; - if (unlikely(unicode_length != *length)) { - PyUnicode_AsASCIIString(o); - return NULL; - } - #endif - return result; - } -#else -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -#endif -} -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 - if (PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif - if (PyByteArray_Check(o)) { -#if (CYTHON_ASSUME_SAFE_SIZE && CYTHON_ASSUME_SAFE_MACROS) || (CYTHON_COMPILING_IN_PYPY && (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE))) - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); -#else - *length = PyByteArray_Size(o); - if (*length == -1) return NULL; - return PyByteArray_AsString(o); -#endif - } else - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int 
__Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_LongWrongResultType(PyObject* result) { - __Pyx_TypeName result_type_name = __Pyx_PyType_GetFullyQualifiedName(Py_TYPE(result)); - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME "). " - "The ability to return an instance of a strict subclass of int is deprecated, " - "and may be removed in a future version of Python.", - result_type_name)) { - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; - } - __Pyx_DECREF_TypeName(result_type_name); - return result; - } - PyErr_Format(PyExc_TypeError, - "__int__ returned non-int (type " __Pyx_FMT_TYPENAME ")", - result_type_name); - __Pyx_DECREF_TypeName(result_type_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_Long(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - PyObject *res = NULL; - if (likely(PyLong_Check(x))) - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - if (likely(m && m->nb_int)) { - res = m->nb_int(x); - } -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Long(x); - } -#endif - if (likely(res)) { - if (unlikely(!PyLong_CheckExact(res))) { - return __Pyx_PyNumber_LongWrongResultType(res); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(__Pyx_PyLong_IsCompact(b))) { - return __Pyx_PyLong_CompactValue(b); - } else { - const digit* digits = __Pyx_PyLong_Digits(b); - const Py_ssize_t size = 
__Pyx_PyLong_SignedDigitCount(b); - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyLong_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyLong_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject *__Pyx_Owned_Py_None(int b) { - CYTHON_UNUSED_VAR(b); - return __Pyx_NewRef(Py_None); -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyLong_FromSize_t(size_t ival) { - return PyLong_FromSize_t(ival); -} - - -/* MultiPhaseInitModuleState */ -#if CYTHON_PEP489_MULTI_PHASE_INIT && CYTHON_USE_MODULE_STATE -#ifndef CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE -#if (CYTHON_COMPILING_IN_LIMITED_API || PY_VERSION_HEX >= 0x030C0000) - #define CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE 1 -#else - #define CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE 0 -#endif -#endif -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE && !CYTHON_ATOMICS -#error "Module state with PEP489 requires atomics. Currently that's one of\ - C11, C++11, gcc atomic intrinsics or MSVC atomic intrinsics" -#endif -#if !CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE -#define __Pyx_ModuleStateLookup_Lock() -#define __Pyx_ModuleStateLookup_Unlock() -#elif !CYTHON_COMPILING_IN_LIMITED_API && PY_VERSION_HEX >= 0x030d0000 -static PyMutex __Pyx_ModuleStateLookup_mutex = {0}; -#define __Pyx_ModuleStateLookup_Lock() PyMutex_Lock(&__Pyx_ModuleStateLookup_mutex) -#define __Pyx_ModuleStateLookup_Unlock() PyMutex_Unlock(&__Pyx_ModuleStateLookup_mutex) -#elif defined(__cplusplus) && __cplusplus >= 201103L -#include <mutex> -static std::mutex __Pyx_ModuleStateLookup_mutex; -#define __Pyx_ModuleStateLookup_Lock() __Pyx_ModuleStateLookup_mutex.lock() -#define __Pyx_ModuleStateLookup_Unlock() __Pyx_ModuleStateLookup_mutex.unlock() -#elif defined(__STDC_VERSION__) && (__STDC_VERSION__ > 201112L) && !defined(__STDC_NO_THREADS__) -#include <threads.h> -static mtx_t __Pyx_ModuleStateLookup_mutex; -static once_flag __Pyx_ModuleStateLookup_mutex_once_flag = ONCE_FLAG_INIT; -static void __Pyx_ModuleStateLookup_initialize_mutex(void) { - mtx_init(&__Pyx_ModuleStateLookup_mutex, mtx_plain); -} -#define __Pyx_ModuleStateLookup_Lock()\ - call_once(&__Pyx_ModuleStateLookup_mutex_once_flag, __Pyx_ModuleStateLookup_initialize_mutex);\ - mtx_lock(&__Pyx_ModuleStateLookup_mutex) -#define __Pyx_ModuleStateLookup_Unlock() 
mtx_unlock(&__Pyx_ModuleStateLookup_mutex) -#elif defined(HAVE_PTHREAD_H) -#include <pthread.h> -static pthread_mutex_t __Pyx_ModuleStateLookup_mutex = PTHREAD_MUTEX_INITIALIZER; -#define __Pyx_ModuleStateLookup_Lock() pthread_mutex_lock(&__Pyx_ModuleStateLookup_mutex) -#define __Pyx_ModuleStateLookup_Unlock() pthread_mutex_unlock(&__Pyx_ModuleStateLookup_mutex) -#elif defined(_WIN32) -#include <windows.h> // synchapi.h on its own doesn't work -static SRWLOCK __Pyx_ModuleStateLookup_mutex = SRWLOCK_INIT; -#define __Pyx_ModuleStateLookup_Lock() AcquireSRWLockExclusive(&__Pyx_ModuleStateLookup_mutex) -#define __Pyx_ModuleStateLookup_Unlock() ReleaseSRWLockExclusive(&__Pyx_ModuleStateLookup_mutex) -#else -#error "No suitable lock available for CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE.\ - Requires C standard >= C11, or C++ standard >= C++11,\ - or pthreads, or the Windows 32 API, or Python >= 3.13." -#endif -typedef struct { - int64_t id; - PyObject *module; -} __Pyx_InterpreterIdAndModule; -typedef struct { - char interpreter_id_as_index; - Py_ssize_t count; - Py_ssize_t allocated; - __Pyx_InterpreterIdAndModule table[1]; -} __Pyx_ModuleStateLookupData; -#define __PYX_MODULE_STATE_LOOKUP_SMALL_SIZE 32 -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE -static __pyx_atomic_int_type __Pyx_ModuleStateLookup_read_counter = 0; -#endif -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE -static __pyx_atomic_ptr_type __Pyx_ModuleStateLookup_data = 0; -#else -static __Pyx_ModuleStateLookupData* __Pyx_ModuleStateLookup_data = NULL; -#endif -static __Pyx_InterpreterIdAndModule* __Pyx_State_FindModuleStateLookupTableLowerBound( - __Pyx_InterpreterIdAndModule* table, - Py_ssize_t count, - int64_t interpreterId) { - __Pyx_InterpreterIdAndModule* begin = table; - __Pyx_InterpreterIdAndModule* end = begin + count; - if (begin->id == interpreterId) { - return begin; - } - while ((end - begin) > __PYX_MODULE_STATE_LOOKUP_SMALL_SIZE) { - __Pyx_InterpreterIdAndModule* halfway = begin + (end - begin)/2; - if (halfway->id == 
interpreterId) { - return halfway; - } - if (halfway->id < interpreterId) { - begin = halfway; - } else { - end = halfway; - } - } - for (; begin < end; ++begin) { - if (begin->id >= interpreterId) return begin; - } - return begin; -} -static PyObject *__Pyx_State_FindModule(CYTHON_UNUSED void* dummy) { - int64_t interpreter_id = PyInterpreterState_GetID(__Pyx_PyInterpreterState_Get()); - if (interpreter_id == -1) return NULL; -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __Pyx_ModuleStateLookupData* data = (__Pyx_ModuleStateLookupData*)__pyx_atomic_pointer_load_relaxed(&__Pyx_ModuleStateLookup_data); - { - __pyx_atomic_incr_acq_rel(&__Pyx_ModuleStateLookup_read_counter); - if (likely(data)) { - __Pyx_ModuleStateLookupData* new_data = (__Pyx_ModuleStateLookupData*)__pyx_atomic_pointer_load_acquire(&__Pyx_ModuleStateLookup_data); - if (likely(data == new_data)) { - goto read_finished; - } - } - __pyx_atomic_decr_acq_rel(&__Pyx_ModuleStateLookup_read_counter); - __Pyx_ModuleStateLookup_Lock(); - __pyx_atomic_incr_relaxed(&__Pyx_ModuleStateLookup_read_counter); - data = (__Pyx_ModuleStateLookupData*)__pyx_atomic_pointer_load_relaxed(&__Pyx_ModuleStateLookup_data); - __Pyx_ModuleStateLookup_Unlock(); - } - read_finished:; -#else - __Pyx_ModuleStateLookupData* data = __Pyx_ModuleStateLookup_data; -#endif - __Pyx_InterpreterIdAndModule* found = NULL; - if (unlikely(!data)) goto end; - if (data->interpreter_id_as_index) { - if (interpreter_id < data->count) { - found = data->table+interpreter_id; - } - } else { - found = __Pyx_State_FindModuleStateLookupTableLowerBound( - data->table, data->count, interpreter_id); - } - end: - { - PyObject *result=NULL; - if (found && found->id == interpreter_id) { - result = found->module; - } -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __pyx_atomic_decr_acq_rel(&__Pyx_ModuleStateLookup_read_counter); -#endif - return result; - } -} -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE -static void 
__Pyx_ModuleStateLookup_wait_until_no_readers(void) { - while (__pyx_atomic_load(&__Pyx_ModuleStateLookup_read_counter) != 0); -} -#else -#define __Pyx_ModuleStateLookup_wait_until_no_readers() -#endif -static int __Pyx_State_AddModuleInterpIdAsIndex(__Pyx_ModuleStateLookupData **old_data, PyObject* module, int64_t interpreter_id) { - Py_ssize_t to_allocate = (*old_data)->allocated; - while (to_allocate <= interpreter_id) { - if (to_allocate == 0) to_allocate = 1; - else to_allocate *= 2; - } - __Pyx_ModuleStateLookupData *new_data = *old_data; - if (to_allocate != (*old_data)->allocated) { - new_data = (__Pyx_ModuleStateLookupData *)realloc( - *old_data, - sizeof(__Pyx_ModuleStateLookupData)+(to_allocate-1)*sizeof(__Pyx_InterpreterIdAndModule)); - if (!new_data) { - PyErr_NoMemory(); - return -1; - } - for (Py_ssize_t i = new_data->allocated; i < to_allocate; ++i) { - new_data->table[i].id = i; - new_data->table[i].module = NULL; - } - new_data->allocated = to_allocate; - } - new_data->table[interpreter_id].module = module; - if (new_data->count < interpreter_id+1) { - new_data->count = interpreter_id+1; - } - *old_data = new_data; - return 0; -} -static void __Pyx_State_ConvertFromInterpIdAsIndex(__Pyx_ModuleStateLookupData *data) { - __Pyx_InterpreterIdAndModule *read = data->table; - __Pyx_InterpreterIdAndModule *write = data->table; - __Pyx_InterpreterIdAndModule *end = read + data->count; - for (; read<end; ++read) { - if (read->module) { - write->id = read->id; - write->module = read->module; - ++write; - } - } - data->count = write - data->table; - for (; write<end; ++write) { - write->id = 0; - write->module = NULL; - } - data->interpreter_id_as_index = 0; -} -static int __Pyx_State_AddModule(PyObject* module, CYTHON_UNUSED void* dummy) { - int64_t interpreter_id = PyInterpreterState_GetID(__Pyx_PyInterpreterState_Get()); - if (interpreter_id == -1) return -1; - int result = 0; - __Pyx_ModuleStateLookup_Lock(); -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __Pyx_ModuleStateLookupData *old_data = 
(__Pyx_ModuleStateLookupData *) - __pyx_atomic_pointer_exchange(&__Pyx_ModuleStateLookup_data, 0); -#else - __Pyx_ModuleStateLookupData *old_data = __Pyx_ModuleStateLookup_data; -#endif - __Pyx_ModuleStateLookupData *new_data = old_data; - if (!new_data) { - new_data = (__Pyx_ModuleStateLookupData *)calloc(1, sizeof(__Pyx_ModuleStateLookupData)); - if (!new_data) { - result = -1; - PyErr_NoMemory(); - goto end; - } - new_data->allocated = 1; - new_data->interpreter_id_as_index = 1; - } - __Pyx_ModuleStateLookup_wait_until_no_readers(); - if (new_data->interpreter_id_as_index) { - if (interpreter_id < __PYX_MODULE_STATE_LOOKUP_SMALL_SIZE) { - result = __Pyx_State_AddModuleInterpIdAsIndex(&new_data, module, interpreter_id); - goto end; - } - __Pyx_State_ConvertFromInterpIdAsIndex(new_data); - } - { - Py_ssize_t insert_at = 0; - { - __Pyx_InterpreterIdAndModule* lower_bound = __Pyx_State_FindModuleStateLookupTableLowerBound( - new_data->table, new_data->count, interpreter_id); - assert(lower_bound); - insert_at = lower_bound - new_data->table; - if (unlikely(insert_at < new_data->count && lower_bound->id == interpreter_id)) { - lower_bound->module = module; - goto end; // already in table, nothing more to do - } - } - if (new_data->count+1 >= new_data->allocated) { - Py_ssize_t to_allocate = (new_data->count+1)*2; - new_data = - (__Pyx_ModuleStateLookupData*)realloc( - new_data, - sizeof(__Pyx_ModuleStateLookupData) + - (to_allocate-1)*sizeof(__Pyx_InterpreterIdAndModule)); - if (!new_data) { - result = -1; - new_data = old_data; - PyErr_NoMemory(); - goto end; - } - new_data->allocated = to_allocate; - } - ++new_data->count; - int64_t last_id = interpreter_id; - PyObject *last_module = module; - for (Py_ssize_t i=insert_at; i<new_data->count; ++i) { - int64_t current_id = new_data->table[i].id; - new_data->table[i].id = last_id; - last_id = current_id; - PyObject *current_module = new_data->table[i].module; - new_data->table[i].module = last_module; - last_module = 
current_module; - } - } - end: -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __pyx_atomic_pointer_exchange(&__Pyx_ModuleStateLookup_data, new_data); -#else - __Pyx_ModuleStateLookup_data = new_data; -#endif - __Pyx_ModuleStateLookup_Unlock(); - return result; -} -static int __Pyx_State_RemoveModule(CYTHON_UNUSED void* dummy) { - int64_t interpreter_id = PyInterpreterState_GetID(__Pyx_PyInterpreterState_Get()); - if (interpreter_id == -1) return -1; - __Pyx_ModuleStateLookup_Lock(); -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __Pyx_ModuleStateLookupData *data = (__Pyx_ModuleStateLookupData *) - __pyx_atomic_pointer_exchange(&__Pyx_ModuleStateLookup_data, 0); -#else - __Pyx_ModuleStateLookupData *data = __Pyx_ModuleStateLookup_data; -#endif - if (data->interpreter_id_as_index) { - if (interpreter_id < data->count) { - data->table[interpreter_id].module = NULL; - } - goto done; - } - { - __Pyx_ModuleStateLookup_wait_until_no_readers(); - __Pyx_InterpreterIdAndModule* lower_bound = __Pyx_State_FindModuleStateLookupTableLowerBound( - data->table, data->count, interpreter_id); - if (!lower_bound) goto done; - if (lower_bound->id != interpreter_id) goto done; - __Pyx_InterpreterIdAndModule *end = data->table+data->count; - for (;lower_bound<end-1; ++lower_bound) { - lower_bound->id = (lower_bound+1)->id; - lower_bound->module = (lower_bound+1)->module; - } - } - --data->count; - if (data->count == 0) { - free(data); - data = NULL; - } - done: -#if CYTHON_MODULE_STATE_LOOKUP_THREAD_SAFE - __pyx_atomic_pointer_exchange(&__Pyx_ModuleStateLookup_data, data); -#else - __Pyx_ModuleStateLookup_data = data; -#endif - __Pyx_ModuleStateLookup_Unlock(); - return 0; -} -#endif - -/* #### Code section: utility_code_pragmas_end ### */ -#ifdef _MSC_VER -#pragma warning( pop ) -#endif - - - -/* #### Code section: end ### */ -#endif /* Py_PYTHON_H */ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.cpython-312-x86_64-linux-gnu.so 
b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.cpython-312-x86_64-linux-gnu.so deleted file mode 100755 index a5b19957..00000000 Binary files a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.cpython-312-x86_64-linux-gnu.so and /dev/null differ diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.py deleted file mode 100644 index 150c03fb..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/cu2qu.py +++ /dev/null @@ -1,563 +0,0 @@ -# cython: language_level=3 -# distutils: define_macros=CYTHON_TRACE_NOGIL=1 - -# Copyright 2015 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -try: - import cython -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython -COMPILED = cython.compiled - -import math - -from .errors import Error as Cu2QuError, ApproxNotFoundError - - -__all__ = ["curve_to_quadratic", "curves_to_quadratic"] - -MAX_N = 100 - -NAN = float("NaN") - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex, result=cython.double) -def dot(v1, v2): - """Return the dot product of two vectors. - - Args: - v1 (complex): First vector. - v2 (complex): Second vector. - - Returns: - double: Dot product. 
- """ - result = (v1 * v2.conjugate()).real - # When vectors are perpendicular (i.e. dot product is 0), the above expression may - # yield slightly different results when running in pure Python vs C/Cython, - # both of which are correct within IEEE-754 floating-point precision. - # It's probably due to the different order of operations and roundings in each - # implementation. Because we are using the result in a denominator and catching - # ZeroDivisionError (see `calc_intersect`), it's best to normalize the result here. - if abs(result) < 1e-15: - result = 0.0 - return result - - -@cython.cfunc -@cython.locals(z=cython.complex, den=cython.double) -@cython.locals(zr=cython.double, zi=cython.double) -def _complex_div_by_real(z, den): - """Divide complex by real using Python's method (two separate divisions). - - This ensures bit-exact compatibility with Python's complex division, - avoiding C's multiply-by-reciprocal optimization that can cause 1 ULP differences - on some platforms/compilers (e.g. clang on macOS arm64). 
- - https://github.com/fonttools/fonttools/issues/3928 - """ - zr = z.real - zi = z.imag - return complex(zr / den, zi / den) - - -@cython.cfunc -@cython.inline -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - _1=cython.complex, _2=cython.complex, _3=cython.complex, _4=cython.complex -) -def calc_cubic_points(a, b, c, d): - _1 = d - _2 = _complex_div_by_real(c, 3.0) + d - _3 = _complex_div_by_real(b + c, 3.0) + _2 - _4 = a + d + c + b - return _1, _2, _3, _4 - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -def calc_cubic_parameters(p0, p1, p2, p3): - c = (p1 - p0) * 3.0 - b = (p2 - p1) * 3.0 - c - d = p0 - a = p3 - d - c - b - return a, b, c, d - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -def split_cubic_into_n_iter(p0, p1, p2, p3, n): - """Split a cubic Bezier into n equal parts. - - Splits the curve into `n` equal parts by curve time. - (t=0..1/n, t=1/n..2/n, ...) - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - An iterator yielding the control points (four complex values) of the - subcurves. 
- """ - # Hand-coded special-cases - if n == 2: - return iter(split_cubic_into_two(p0, p1, p2, p3)) - if n == 3: - return iter(split_cubic_into_three(p0, p1, p2, p3)) - if n == 4: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_two(a[0], a[1], a[2], a[3]) - + split_cubic_into_two(b[0], b[1], b[2], b[3]) - ) - if n == 6: - a, b = split_cubic_into_two(p0, p1, p2, p3) - return iter( - split_cubic_into_three(a[0], a[1], a[2], a[3]) - + split_cubic_into_three(b[0], b[1], b[2], b[3]) - ) - - return _split_cubic_into_n_gen(p0, p1, p2, p3, n) - - -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, - n=cython.int, -) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals( - dt=cython.double, delta_2=cython.double, delta_3=cython.double, i=cython.int -) -@cython.locals( - a1=cython.complex, b1=cython.complex, c1=cython.complex, d1=cython.complex -) -def _split_cubic_into_n_gen(p0, p1, p2, p3, n): - a, b, c, d = calc_cubic_parameters(p0, p1, p2, p3) - dt = 1 / n - delta_2 = dt * dt - delta_3 = dt * delta_2 - for i in range(n): - t1 = i * dt - t1_2 = t1 * t1 - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * dt - d1 = a * t1 * t1_2 + b * t1_2 + c * t1 + d - yield calc_cubic_points(a1, b1, c1, d1) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, p1=cython.complex, p2=cython.complex, p3=cython.complex -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def split_cubic_into_two(p0, p1, p2, p3): - """Split a cubic Bezier into two equal parts. - - Splits the curve into two equal parts at t = 0.5 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Two cubic Beziers (each expressed as a tuple of four complex - values). 
- """ - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals( - mid1=cython.complex, - deriv1=cython.complex, - mid2=cython.complex, - deriv2=cython.complex, -) -def split_cubic_into_three(p0, p1, p2, p3): - """Split a cubic Bezier into three equal parts. - - Splits the curve into three equal parts at t = 1/3 and t = 2/3 - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - tuple: Three cubic Beziers (each expressed as a tuple of four complex - values). - """ - mid1 = (8 * p0 + 12 * p1 + 6 * p2 + p3) * (1 / 27) - deriv1 = (p3 + 3 * p2 - 4 * p0) * (1 / 27) - mid2 = (p0 + 6 * p1 + 12 * p2 + 8 * p3) * (1 / 27) - deriv2 = (4 * p3 - 3 * p1 - p0) * (1 / 27) - return ( - (p0, (2 * p0 + p1) / 3.0, mid1 - deriv1, mid1), - (mid1, mid1 + deriv1, mid2 - deriv2, mid2), - (mid2, mid2 + deriv2, (p2 + 2 * p3) / 3.0, p3), - ) - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(_p1=cython.complex, _p2=cython.complex) -def cubic_approx_control(t, p0, p1, p2, p3): - """Approximate a cubic Bezier using a quadratic one. - - Args: - t (double): Position of control point. - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - - Returns: - complex: Location of candidate control point on quadratic curve. 
- """ - _p1 = p0 + (p1 - p0) * 1.5 - _p2 = p3 + (p2 - p3) * 1.5 - return _p1 + (_p2 - _p1) * t - - -@cython.cfunc -@cython.inline -@cython.returns(cython.complex) -@cython.locals(a=cython.complex, b=cython.complex, c=cython.complex, d=cython.complex) -@cython.locals(ab=cython.complex, cd=cython.complex, p=cython.complex, h=cython.double) -def calc_intersect(a, b, c, d): - """Calculate the intersection of two lines. - - Args: - a (complex): Start point of first line. - b (complex): End point of first line. - c (complex): Start point of second line. - d (complex): End point of second line. - - Returns: - complex: Location of intersection if one present, ``complex(NaN,NaN)`` - if no intersection was found. - """ - ab = b - a - cd = d - c - p = ab * 1j - try: - h = dot(p, a - c) / dot(p, cd) - except ZeroDivisionError: - # if 3 or 4 points are equal, we do have an intersection despite the zero-div: - # return one of the off-curves so that the algorithm can attempt a one-curve - # solution if it's within tolerance: - # https://github.com/linebender/kurbo/pull/484 - if b == c and (a == b or c == d): - return b - return complex(NAN, NAN) - return c + cd * h - - -@cython.cfunc -@cython.returns(cython.int) -@cython.locals( - tolerance=cython.double, - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mid=cython.complex, deriv3=cython.complex) -def cubic_farthest_fit_inside(p0, p1, p2, p3, tolerance): - """Check if a cubic Bezier lies within a given distance of the origin. - - "Origin" means *the* origin (0,0), not the start of the curve. Note that no - checks are made on the start and end positions of the curve; this function - only checks the inside of the curve. - - Args: - p0 (complex): Start point of curve. - p1 (complex): First handle of curve. - p2 (complex): Second handle of curve. - p3 (complex): End point of curve. - tolerance (double): Distance from origin. 
- - Returns: - bool: True if the cubic Bezier ``p`` entirely lies within a distance - ``tolerance`` of the origin, False otherwise. - """ - # First check p2 then p1, as p2 has higher error early on. - if abs(p2) <= tolerance and abs(p1) <= tolerance: - return True - - # Split. - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - if abs(mid) > tolerance: - return False - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return cubic_farthest_fit_inside( - p0, (p0 + p1) * 0.5, mid - deriv3, mid, tolerance - ) and cubic_farthest_fit_inside(mid, mid + deriv3, (p2 + p3) * 0.5, p3, tolerance) - - -@cython.cfunc -@cython.inline -@cython.locals(tolerance=cython.double) -@cython.locals( - q1=cython.complex, - c0=cython.complex, - c1=cython.complex, - c2=cython.complex, - c3=cython.complex, -) -def cubic_approx_quadratic(cubic, tolerance): - """Approximate a cubic Bezier with a single quadratic within a given tolerance. - - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - tolerance (double): Permitted deviation from the original curve. - - Returns: - Three complex numbers representing control points of the quadratic - curve if it fits within the given tolerance, or ``None`` if no suitable - curve could be calculated. 
- """ - - q1 = calc_intersect(cubic[0], cubic[1], cubic[2], cubic[3]) - if math.isnan(q1.imag): - return None - c0 = cubic[0] - c3 = cubic[3] - c1 = c0 + (q1 - c0) * (2 / 3) - c2 = c3 + (q1 - c3) * (2 / 3) - if not cubic_farthest_fit_inside(0, c1 - cubic[1], c2 - cubic[2], 0, tolerance): - return None - return c0, q1, c3 - - -@cython.cfunc -@cython.locals(n=cython.int, tolerance=cython.double) -@cython.locals(i=cython.int) -@cython.locals(all_quadratic=cython.int) -@cython.locals( - c0=cython.complex, c1=cython.complex, c2=cython.complex, c3=cython.complex -) -@cython.locals( - q0=cython.complex, - q1=cython.complex, - next_q1=cython.complex, - q2=cython.complex, - d1=cython.complex, -) -def cubic_approx_spline(cubic, n, tolerance, all_quadratic): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four complex numbers representing control points of - the cubic Bezier curve. - n (int): Number of quadratic Bezier curves in the spline. - tolerance (double): Permitted deviation from the original curve. - - Returns: - A list of ``n+2`` complex numbers, representing control points of the - quadratic spline if it fits within the given tolerance, or ``None`` if - no suitable spline could be calculated. - """ - - if n == 1: - return cubic_approx_quadratic(cubic, tolerance) - if n == 2 and all_quadratic == False: - return cubic - - cubics = split_cubic_into_n_iter(cubic[0], cubic[1], cubic[2], cubic[3], n) - - # calculate the spline of quadratics and check errors at the same time. 
- next_cubic = next(cubics) - next_q1 = cubic_approx_control( - 0, next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - q2 = cubic[0] - d1 = 0j - spline = [cubic[0], next_q1] - for i in range(1, n + 1): - # Current cubic to convert - c0, c1, c2, c3 = next_cubic - - # Current quadratic approximation of current cubic - q0 = q2 - q1 = next_q1 - if i < n: - next_cubic = next(cubics) - next_q1 = cubic_approx_control( - i / (n - 1), next_cubic[0], next_cubic[1], next_cubic[2], next_cubic[3] - ) - spline.append(next_q1) - q2 = (q1 + next_q1) * 0.5 - else: - q2 = c3 - - # End-point deltas - d0 = d1 - d1 = q2 - c3 - - if abs(d1) > tolerance or not cubic_farthest_fit_inside( - d0, - q0 + (q1 - q0) * (2 / 3) - c1, - q2 + (q1 - q2) * (2 / 3) - c2, - d1, - tolerance, - ): - return None - spline.append(cubic[3]) - - return spline - - -@cython.locals(max_err=cython.double) -@cython.locals(n=cython.int) -@cython.locals(all_quadratic=cython.int) -def curve_to_quadratic(curve, max_err, all_quadratic=True): - """Approximate a cubic Bezier curve with a spline of n quadratics. - - Args: - cubic (sequence): Four 2D tuples representing control points of - the cubic Bezier curve. - max_err (double): Permitted deviation from the original curve. - all_quadratic (bool): If True (default) returned value is a - quadratic spline. If False, it's either a single quadratic - curve or a single cubic curve. - - Returns: - If all_quadratic is True: A list of 2D tuples, representing - control points of the quadratic spline if it fits within the - given tolerance, or ``None`` if no suitable spline could be - calculated. - - If all_quadratic is False: Either a quadratic curve (if length - of output is 3), or a cubic curve (if length of output is 4). - """ - - curve = [complex(*p) for p in curve] - - for n in range(1, MAX_N + 1): - spline = cubic_approx_spline(curve, n, max_err, all_quadratic) - if spline is not None: - # done. 
go home - return [(s.real, s.imag) for s in spline] - - raise ApproxNotFoundError(curve) - - -@cython.locals(l=cython.int, last_i=cython.int, i=cython.int) -@cython.locals(all_quadratic=cython.int) -def curves_to_quadratic(curves, max_errors, all_quadratic=True): - """Return quadratic Bezier splines approximating the input cubic Beziers. - - Args: - curves: A sequence of *n* curves, each curve being a sequence of four - 2D tuples. - max_errors: A sequence of *n* floats representing the maximum permissible - deviation from each of the cubic Bezier curves. - all_quadratic (bool): If True (default) returned values are a - quadratic spline. If False, they are either a single quadratic - curve or a single cubic curve. - - Example:: - - >>> curves_to_quadratic( [ - ... [ (50,50), (100,100), (150,100), (200,50) ], - ... [ (75,50), (120,100), (150,75), (200,60) ] - ... ], [1,1] ) - [[(50.0, 50.0), (75.0, 75.0), (125.0, 91.66666666666666), (175.0, 75.0), (200.0, 50.0)], [(75.0, 50.0), (97.5, 75.0), (135.41666666666666, 82.08333333333333), (175.0, 67.5), (200.0, 60.0)]] - - The returned splines have "implied oncurve points" suitable for use in - TrueType ``glif`` outlines - i.e. in the first spline returned above, - the first quadratic segment runs from (50,50) to - ( (75 + 125)/2 , (120 + 91.666..)/2 ) = (100, 83.333...). - - Returns: - If all_quadratic is True, a list of splines, each spline being a list - of 2D tuples. - - If all_quadratic is False, a list of curves, each curve being a quadratic - (length 3), or cubic (length 4). - - Raises: - fontTools.cu2qu.Errors.ApproxNotFoundError: if no suitable approximation - can be found for all curves with the given parameters. 
- """ - - curves = [[complex(*p) for p in curve] for curve in curves] - assert len(max_errors) == len(curves) - - l = len(curves) - splines = [None] * l - last_i = i = 0 - n = 1 - while True: - spline = cubic_approx_spline(curves[i], n, max_errors[i], all_quadratic) - if spline is None: - if n == MAX_N: - break - n += 1 - last_i = i - continue - splines[i] = spline - i = (i + 1) % l - if i == last_i: - # done. go home - return [[(s.real, s.imag) for s in spline] for spline in splines] - - raise ApproxNotFoundError(curves) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/errors.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/errors.py deleted file mode 100644 index fa3dc429..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/errors.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -class Error(Exception): - """Base Cu2Qu exception class for all other errors.""" - - -class ApproxNotFoundError(Error): - def __init__(self, curve): - message = "no approximation found: %s" % curve - super().__init__(message) - self.curve = curve - - -class UnequalZipLengthsError(Error): - pass - - -class IncompatibleGlyphsError(Error): - def __init__(self, glyphs): - assert len(glyphs) > 1 - self.glyphs = glyphs - names = set(repr(g.name) for g in glyphs) - if len(names) > 1: - self.combined_name = "{%s}" % ", ".join(sorted(names)) - else: - self.combined_name = names.pop() - - def __repr__(self): - return "<%s %s>" % (type(self).__name__, self.combined_name) - - -class IncompatibleSegmentNumberError(IncompatibleGlyphsError): - def __str__(self): - return "Glyphs named %s have different number of segments" % ( - self.combined_name - ) - - -class IncompatibleSegmentTypesError(IncompatibleGlyphsError): - def __init__(self, glyphs, segments): - IncompatibleGlyphsError.__init__(self, glyphs) - self.segments = segments - - def __str__(self): - lines = [] - ndigits = len(str(max(self.segments))) - for i, tags in sorted(self.segments.items()): - lines.append( - "%s: (%s)" % (str(i).rjust(ndigits), ", ".join(repr(t) for t in tags)) - ) - return "Glyphs named %s have incompatible segment types:\n %s" % ( - self.combined_name, - "\n ".join(lines), - ) - - -class IncompatibleFontsError(Error): - def __init__(self, glyph_errors): - self.glyph_errors = glyph_errors - - def __str__(self): - return "fonts contains incompatible glyphs: %s" % ( - ", ".join(repr(g) for g in sorted(self.glyph_errors.keys())) - ) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/ufo.py b/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/ufo.py deleted file mode 100644 index 7a6dbc67..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/cu2qu/ufo.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright 2015 Google Inc. All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -"""Converts cubic bezier curves to quadratic splines. - -Conversion is performed such that the quadratic splines keep the same end-curve -tangents as the original cubics. The approach is iterative, increasing the -number of segments for a spline until the error gets below a bound. - -Respective curves from multiple fonts will be converted at once to ensure that -the resulting splines are interpolation-compatible. -""" - -import logging -from fontTools.pens.basePen import AbstractPen -from fontTools.pens.pointPen import PointToSegmentPen -from fontTools.pens.reverseContourPen import ReverseContourPen - -from . import curves_to_quadratic -from .errors import ( - UnequalZipLengthsError, - IncompatibleSegmentNumberError, - IncompatibleSegmentTypesError, - IncompatibleGlyphsError, - IncompatibleFontsError, -) - - -__all__ = ["fonts_to_quadratic", "font_to_quadratic"] - -# The default approximation error below is a relative value (1/1000 of the EM square). -# Later on, we convert it to absolute font units by multiplying it by a font's UPEM -# (see fonts_to_quadratic). -DEFAULT_MAX_ERR = 0.001 -CURVE_TYPE_LIB_KEY = "com.github.googlei18n.cu2qu.curve_type" - -logger = logging.getLogger(__name__) - - -_zip = zip - - -def zip(*args): - """Ensure each argument to zip has the same length. Also make sure a list is - returned for python 2/3 compatibility. 
- """ - - if len(set(len(a) for a in args)) != 1: - raise UnequalZipLengthsError(*args) - return list(_zip(*args)) - - -class GetSegmentsPen(AbstractPen): - """Pen to collect segments into lists of points for conversion. - - Curves always include their initial on-curve point, so some points are - duplicated between segments. - """ - - def __init__(self): - self._last_pt = None - self.segments = [] - - def _add_segment(self, tag, *args): - if tag in ["move", "line", "qcurve", "curve"]: - self._last_pt = args[-1] - self.segments.append((tag, args)) - - def moveTo(self, pt): - self._add_segment("move", pt) - - def lineTo(self, pt): - self._add_segment("line", pt) - - def qCurveTo(self, *points): - self._add_segment("qcurve", self._last_pt, *points) - - def curveTo(self, *points): - self._add_segment("curve", self._last_pt, *points) - - def closePath(self): - self._add_segment("close") - - def endPath(self): - self._add_segment("end") - - def addComponent(self, glyphName, transformation): - pass - - -def _get_segments(glyph): - """Get a glyph's segments as extracted by GetSegmentsPen.""" - - pen = GetSegmentsPen() - # glyph.draw(pen) - # We can't simply draw the glyph with the pen, but we must initialize the - # PointToSegmentPen explicitly with outputImpliedClosingLine=True. - # By default PointToSegmentPen does not outputImpliedClosingLine -- unless - # last and first point on closed contour are duplicated. Because we are - # converting multiple glyphs at the same time, we want to make sure - # this function returns the same number of segments, whether or not - # the last and first point overlap. 
- # https://github.com/googlefonts/fontmake/issues/572 - # https://github.com/fonttools/fonttools/pull/1720 - pointPen = PointToSegmentPen(pen, outputImpliedClosingLine=True) - glyph.drawPoints(pointPen) - return pen.segments - - -def _set_segments(glyph, segments, reverse_direction): - """Draw segments as extracted by GetSegmentsPen back to a glyph.""" - - glyph.clearContours() - pen = glyph.getPen() - if reverse_direction: - pen = ReverseContourPen(pen) - for tag, args in segments: - if tag == "move": - pen.moveTo(*args) - elif tag == "line": - pen.lineTo(*args) - elif tag == "curve": - pen.curveTo(*args[1:]) - elif tag == "qcurve": - pen.qCurveTo(*args[1:]) - elif tag == "close": - pen.closePath() - elif tag == "end": - pen.endPath() - else: - raise AssertionError('Unhandled segment type "%s"' % tag) - - -def _segments_to_quadratic(segments, max_err, stats, all_quadratic=True): - """Return quadratic approximations of cubic segments.""" - - assert all(s[0] == "curve" for s in segments), "Non-cubic given to convert" - - new_points = curves_to_quadratic([s[1] for s in segments], max_err, all_quadratic) - n = len(new_points[0]) - assert all(len(s) == n for s in new_points[1:]), "Converted incompatibly" - - spline_length = str(n - 2) - stats[spline_length] = stats.get(spline_length, 0) + 1 - - if all_quadratic or n == 3: - return [("qcurve", p) for p in new_points] - else: - return [("curve", p) for p in new_points] - - -def _glyphs_to_quadratic(glyphs, max_err, reverse_direction, stats, all_quadratic=True): - """Do the actual conversion of a set of compatible glyphs, after arguments - have been set up. - - Return True if the glyphs were modified, else return False. 
- """ - - try: - segments_by_location = zip(*[_get_segments(g) for g in glyphs]) - except UnequalZipLengthsError: - raise IncompatibleSegmentNumberError(glyphs) - if not any(segments_by_location): - return False - - # always modify input glyphs if reverse_direction is True - glyphs_modified = reverse_direction - - new_segments_by_location = [] - incompatible = {} - for i, segments in enumerate(segments_by_location): - tag = segments[0][0] - if not all(s[0] == tag for s in segments[1:]): - incompatible[i] = [s[0] for s in segments] - elif tag == "curve": - new_segments = _segments_to_quadratic( - segments, max_err, stats, all_quadratic - ) - if all_quadratic or new_segments != segments: - glyphs_modified = True - segments = new_segments - new_segments_by_location.append(segments) - - if glyphs_modified: - new_segments_by_glyph = zip(*new_segments_by_location) - for glyph, new_segments in zip(glyphs, new_segments_by_glyph): - _set_segments(glyph, new_segments, reverse_direction) - - if incompatible: - raise IncompatibleSegmentTypesError(glyphs, segments=incompatible) - return glyphs_modified - - -def glyphs_to_quadratic( - glyphs, max_err=None, reverse_direction=False, stats=None, all_quadratic=True -): - """Convert the curves of a set of compatible of glyphs to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling glyphs_to_quadratic with one - glyph at a time may yield slightly more optimized results. - - Return True if glyphs were modified, else return False. - - Raises IncompatibleGlyphsError if glyphs have non-interpolatable outlines. 
- """ - if stats is None: - stats = {} - - if not max_err: - # assume 1000 is the default UPEM - max_err = DEFAULT_MAX_ERR * 1000 - - if isinstance(max_err, (list, tuple)): - max_errors = max_err - else: - max_errors = [max_err] * len(glyphs) - assert len(max_errors) == len(glyphs) - - return _glyphs_to_quadratic( - glyphs, max_errors, reverse_direction, stats, all_quadratic - ) - - -def fonts_to_quadratic( - fonts, - max_err_em=None, - max_err=None, - reverse_direction=False, - stats=None, - dump_stats=False, - remember_curve_type=True, - all_quadratic=True, -): - """Convert the curves of a collection of fonts to quadratic. - - All curves will be converted to quadratic at once, ensuring interpolation - compatibility. If this is not required, calling fonts_to_quadratic with one - font at a time may yield slightly more optimized results. - - Return the set of modified glyph names if any, else return an empty set. - - By default, cu2qu stores the curve type in the fonts' lib, under a private - key "com.github.googlei18n.cu2qu.curve_type", and will not try to convert - them again if the curve type is already set to "quadratic". - Setting 'remember_curve_type' to False disables this optimization. - - Raises IncompatibleFontsError if same-named glyphs from different fonts - have non-interpolatable outlines. 
- """ - - if remember_curve_type: - curve_types = {f.lib.get(CURVE_TYPE_LIB_KEY, "cubic") for f in fonts} - if len(curve_types) == 1: - curve_type = next(iter(curve_types)) - if curve_type in ("quadratic", "mixed"): - logger.info("Curves already converted to quadratic") - return False - elif curve_type == "cubic": - pass # keep converting - else: - raise NotImplementedError(curve_type) - elif len(curve_types) > 1: - # going to crash later if they do differ - logger.warning("fonts may contain different curve types") - - if stats is None: - stats = {} - - if max_err_em and max_err: - raise TypeError("Only one of max_err and max_err_em can be specified.") - if not (max_err_em or max_err): - max_err_em = DEFAULT_MAX_ERR - - if isinstance(max_err, (list, tuple)): - assert len(max_err) == len(fonts) - max_errors = max_err - elif max_err: - max_errors = [max_err] * len(fonts) - - if isinstance(max_err_em, (list, tuple)): - assert len(fonts) == len(max_err_em) - max_errors = [f.info.unitsPerEm * e for f, e in zip(fonts, max_err_em)] - elif max_err_em: - max_errors = [f.info.unitsPerEm * max_err_em for f in fonts] - - modified = set() - glyph_errors = {} - for name in set().union(*(f.keys() for f in fonts)): - glyphs = [] - cur_max_errors = [] - for font, error in zip(fonts, max_errors): - if name in font: - glyphs.append(font[name]) - cur_max_errors.append(error) - try: - if _glyphs_to_quadratic( - glyphs, cur_max_errors, reverse_direction, stats, all_quadratic - ): - modified.add(name) - except IncompatibleGlyphsError as exc: - logger.error(exc) - glyph_errors[name] = exc - - if glyph_errors: - raise IncompatibleFontsError(glyph_errors) - - if modified and dump_stats: - spline_lengths = sorted(stats.keys()) - logger.info( - "New spline lengths: %s" - % (", ".join("%s: %d" % (l, stats[l]) for l in spline_lengths)) - ) - - if remember_curve_type: - for font in fonts: - curve_type = font.lib.get(CURVE_TYPE_LIB_KEY, "cubic") - new_curve_type = "quadratic" if all_quadratic 
else "mixed" - if curve_type != new_curve_type: - font.lib[CURVE_TYPE_LIB_KEY] = new_curve_type - return modified - - -def glyph_to_quadratic(glyph, **kwargs): - """Convenience wrapper around glyphs_to_quadratic, for just one glyph. - Return True if the glyph was modified, else return False. - """ - - return glyphs_to_quadratic([glyph], **kwargs) - - -def font_to_quadratic(font, **kwargs): - """Convenience wrapper around fonts_to_quadratic, for just one font. - Return the set of modified glyph names if any, else return empty set. - """ - - return fonts_to_quadratic([font], **kwargs) diff --git a/pptx-env/lib/python3.12/site-packages/fontTools/designspaceLib/__init__.py b/pptx-env/lib/python3.12/site-packages/fontTools/designspaceLib/__init__.py deleted file mode 100644 index 661f3405..00000000 --- a/pptx-env/lib/python3.12/site-packages/fontTools/designspaceLib/__init__.py +++ /dev/null @@ -1,3338 +0,0 @@ -""" - designSpaceDocument - - - Read and write designspace files -""" - -from __future__ import annotations - -import collections -import copy -import itertools -import math -import os -import posixpath -from io import BytesIO, StringIO -from textwrap import indent -from typing import Any, Dict, List, MutableMapping, Optional, Tuple, Union, cast - -from fontTools.misc import etree as ET -from fontTools.misc import plistlib -from fontTools.misc.loggingTools import LogMixin -from fontTools.misc.textTools import tobytes, tostr - - -__all__ = [ - "AxisDescriptor", - "AxisLabelDescriptor", - "AxisMappingDescriptor", - "BaseDocReader", - "BaseDocWriter", - "DesignSpaceDocument", - "DesignSpaceDocumentError", - "DiscreteAxisDescriptor", - "InstanceDescriptor", - "LocationLabelDescriptor", - "RangeAxisSubsetDescriptor", - "RuleDescriptor", - "SourceDescriptor", - "ValueAxisSubsetDescriptor", - "VariableFontDescriptor", -] - -# ElementTree allows to find namespace-prefixed elements, but not attributes -# so we have to do it ourselves for 'xml:lang' -XML_NS = 
"{http://www.w3.org/XML/1998/namespace}" -XML_LANG = XML_NS + "lang" - - -def posix(path): - """Normalize paths using forward slash to work also on Windows.""" - new_path = posixpath.join(*path.split(os.path.sep)) - if path.startswith("/"): - # The above transformation loses absolute paths - new_path = "/" + new_path - elif path.startswith(r"\\"): - # The above transformation loses leading slashes of UNC path mounts - new_path = "//" + new_path - return new_path - - -def posixpath_property(private_name): - """Generate a propery that holds a path always using forward slashes.""" - - def getter(self): - # Normal getter - return getattr(self, private_name) - - def setter(self, value): - # The setter rewrites paths using forward slashes - if value is not None: - value = posix(value) - setattr(self, private_name, value) - - return property(getter, setter) - - -class DesignSpaceDocumentError(Exception): - def __init__(self, msg, obj=None): - self.msg = msg - self.obj = obj - - def __str__(self): - return str(self.msg) + (": %r" % self.obj if self.obj is not None else "") - - -class AsDictMixin(object): - def asdict(self): - d = {} - for attr, value in self.__dict__.items(): - if attr.startswith("_"): - continue - if hasattr(value, "asdict"): - value = value.asdict() - elif isinstance(value, list): - value = [v.asdict() if hasattr(v, "asdict") else v for v in value] - d[attr] = value - return d - - -class SimpleDescriptor(AsDictMixin): - """Containers for a bunch of attributes""" - - # XXX this is ugly. 
The 'print' is inappropriate here, and instead of - # assert, it should simply return True/False - def compare(self, other): - # test if this object contains the same data as the other - for attr in self._attrs: - try: - assert getattr(self, attr) == getattr(other, attr) - except AssertionError: - print( - "failed attribute", - attr, - getattr(self, attr), - "!=", - getattr(other, attr), - ) - - def __repr__(self): - attrs = [f"{a}={repr(getattr(self, a))}," for a in self._attrs] - attrs = indent("\n".join(attrs), " ") - return f"{self.__class__.__name__}(\n{attrs}\n)" - - -class SourceDescriptor(SimpleDescriptor): - """Simple container for data related to the source - - .. code:: python - - doc = DesignSpaceDocument() - s1 = SourceDescriptor() - s1.path = masterPath1 - s1.name = "master.ufo1" - s1.font = defcon.Font("master.ufo1") - s1.location = dict(weight=0) - s1.familyName = "MasterFamilyName" - s1.styleName = "MasterStyleNameOne" - s1.localisedFamilyName = dict(fr="CaractΓ¨re") - s1.mutedGlyphNames.append("A") - s1.mutedGlyphNames.append("Z") - doc.addSource(s1) - - """ - - flavor = "source" - _attrs = [ - "filename", - "path", - "name", - "layerName", - "location", - "copyLib", - "copyGroups", - "copyFeatures", - "muteKerning", - "muteInfo", - "mutedGlyphNames", - "familyName", - "styleName", - "localisedFamilyName", - ] - - filename = posixpath_property("_filename") - path = posixpath_property("_path") - - def __init__( - self, - *, - filename=None, - path=None, - font=None, - name=None, - location=None, - designLocation=None, - layerName=None, - familyName=None, - styleName=None, - localisedFamilyName=None, - copyLib=False, - copyInfo=False, - copyGroups=False, - copyFeatures=False, - muteKerning=False, - muteInfo=False, - mutedGlyphNames=None, - ): - self.filename = filename - """string. A relative path to the source file, **as it is in the document**. - - MutatorMath + VarLib. 
- """ - self.path = path - """The absolute path, calculated from filename.""" - - self.font = font - """Any Python object. Optional. Points to a representation of this - source font that is loaded in memory, as a Python object (e.g. a - ``defcon.Font`` or a ``fontTools.ttFont.TTFont``). - - The default document reader will not fill-in this attribute, and the - default writer will not use this attribute. It is up to the user of - ``designspaceLib`` to either load the resource identified by - ``filename`` and store it in this field, or write the contents of - this field to the disk and make ```filename`` point to that. - """ - - self.name = name - """string. Optional. Unique identifier name for this source. - - MutatorMath + varLib. - """ - - self.designLocation = ( - designLocation if designLocation is not None else location or {} - ) - """dict. Axis values for this source, in design space coordinates. - - MutatorMath + varLib. - - This may be only part of the full design location. - See :meth:`getFullDesignLocation()` - - .. versionadded:: 5.0 - """ - - self.layerName = layerName - """string. The name of the layer in the source to look for - outline data. Default ``None`` which means ``foreground``. - """ - self.familyName = familyName - """string. Family name of this source. Though this data - can be extracted from the font, it can be efficient to have it right - here. - - varLib. - """ - self.styleName = styleName - """string. Style name of this source. Though this data - can be extracted from the font, it can be efficient to have it right - here. - - varLib. - """ - self.localisedFamilyName = localisedFamilyName or {} - """dict. A dictionary of localised family name strings, keyed by - language code. - - If present, will be used to build localized names for all instances. - - .. versionadded:: 5.0 - """ - - self.copyLib = copyLib - """bool. Indicates if the contents of the font.lib need to - be copied to the instances. - - MutatorMath. - - .. 
deprecated:: 5.0 - """ - self.copyInfo = copyInfo - """bool. Indicates if the non-interpolating font.info needs - to be copied to the instances. - - MutatorMath. - - .. deprecated:: 5.0 - """ - self.copyGroups = copyGroups - """bool. Indicates if the groups need to be copied to the - instances. - - MutatorMath. - - .. deprecated:: 5.0 - """ - self.copyFeatures = copyFeatures - """bool. Indicates if the feature text needs to be - copied to the instances. - - MutatorMath. - - .. deprecated:: 5.0 - """ - self.muteKerning = muteKerning - """bool. Indicates if the kerning data from this source - needs to be muted (i.e. not be part of the calculations). - - MutatorMath only. - """ - self.muteInfo = muteInfo - """bool. Indicates if the interpolating font.info data for - this source needs to be muted. - - MutatorMath only. - """ - self.mutedGlyphNames = mutedGlyphNames or [] - """list. Glyphnames that need to be muted in the - instances. - - MutatorMath only. - """ - - @property - def location(self): - """dict. Axis values for this source, in design space coordinates. - - MutatorMath + varLib. - - .. deprecated:: 5.0 - Use the more explicit alias for this property :attr:`designLocation`. - """ - return self.designLocation - - @location.setter - def location(self, location: Optional[SimpleLocationDict]): - self.designLocation = location or {} - - def setFamilyName(self, familyName, languageCode="en"): - """Setter for :attr:`localisedFamilyName` - - .. versionadded:: 5.0 - """ - self.localisedFamilyName[languageCode] = tostr(familyName) - - def getFamilyName(self, languageCode="en"): - """Getter for :attr:`localisedFamilyName` - - .. versionadded:: 5.0 - """ - return self.localisedFamilyName.get(languageCode) - - def getFullDesignLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict: - """Get the complete design location of this source, from its - :attr:`designLocation` and the document's axis defaults. - - .. 
versionadded:: 5.0 - """ - result: SimpleLocationDict = {} - for axis in doc.axes: - if axis.name in self.designLocation: - result[axis.name] = self.designLocation[axis.name] - else: - result[axis.name] = axis.map_forward(axis.default) - return result - - -class RuleDescriptor(SimpleDescriptor): - """Represents the rule descriptor element: a set of glyph substitutions to - trigger conditionally in some parts of the designspace. - - .. code:: python - - r1 = RuleDescriptor() - r1.name = "unique.rule.name" - r1.conditionSets.append([dict(name="weight", minimum=-10, maximum=10), dict(...)]) - r1.conditionSets.append([dict(...), dict(...)]) - r1.subs.append(("a", "a.alt")) - - .. code:: xml - - <rules> - <rule name="unique.rule.name"> - <conditionset> - <condition name="weight" minimum="-10" maximum="10"/> - </conditionset> - <sub name="a" with="a.alt"/> - </rule> - </rules> - - """ - - _attrs = ["name", "conditionSets", "subs"] # what do we need here - - def __init__(self, *, name=None, conditionSets=None, subs=None): - self.name = name - """string. Unique name for this rule. Can be used to reference this rule data.""" - # list of lists of dict(name='aaaa', minimum=0, maximum=1000) - self.conditionSets = conditionSets or [] - """a list of conditionsets. - - - Each conditionset is a list of conditions. - - Each condition is a dict with ``name``, ``minimum`` and ``maximum`` keys. - """ - # list of substitutions stored as tuples of glyphnames ("a", "a.alt") - self.subs = subs or [] - """list of substitutions. - - - Each substitution is stored as a tuple of glyphnames, e.g. ("a", "a.alt"). - - Note: By default, rules are applied first, before other text - shaping/OpenType layout, as they are part of the - `Required Variation Alternates OpenType feature `_. - See :ref:`rules-element` § Attributes. - """ - - -def evaluateRule(rule, location): - """Return True if any of the rule's conditionsets matches the given location.""" - return any(evaluateConditions(c, location) for c in rule.conditionSets) - - -def evaluateConditions(conditions, location): - """Return True if all the conditions match the given location. 
- - - If a condition has no minimum, check for < maximum. - - If a condition has no maximum, check for > minimum. - """ - for cd in conditions: - value = location[cd["name"]] - if cd.get("minimum") is None: - if value > cd["maximum"]: - return False - elif cd.get("maximum") is None: - if cd["minimum"] > value: - return False - elif not cd["minimum"] <= value <= cd["maximum"]: - return False - return True - - -def processRules(rules, location, glyphNames): - """Apply these rules at this location to these glyphnames. - - Return a new list of glyphNames with substitutions applied. - - - rule order matters - """ - newNames = [] - for rule in rules: - if evaluateRule(rule, location): - for name in glyphNames: - swap = False - for a, b in rule.subs: - if name == a: - swap = True - break - if swap: - newNames.append(b) - else: - newNames.append(name) - glyphNames = newNames - newNames = [] - return glyphNames - - -AnisotropicLocationDict = Dict[str, Union[float, Tuple[float, float]]] -SimpleLocationDict = Dict[str, float] - - -class AxisMappingDescriptor(SimpleDescriptor): - """Represents the axis mapping element: mapping an input location - to an output location in the designspace. - - .. code:: python - - m1 = AxisMappingDescriptor() - m1.inputLocation = {"weight": 900, "width": 150} - m1.outputLocation = {"weight": 870} - - .. code:: xml - - <mappings> - <mapping> - <input> - <dimension name="weight" xvalue="900"/> - <dimension name="width" xvalue="150"/> - </input> - <output> - <dimension name="weight" xvalue="870"/> - </output> - </mapping> - </mappings> - - """ - - _attrs = ["inputLocation", "outputLocation"] - - def __init__( - self, - *, - inputLocation=None, - outputLocation=None, - description=None, - groupDescription=None, - ): - self.inputLocation: SimpleLocationDict = inputLocation or {} - """dict. Axis values for the input of the mapping, in design space coordinates. - - varLib. - - .. versionadded:: 5.1 - """ - self.outputLocation: SimpleLocationDict = outputLocation or {} - """dict. Axis values for the output of the mapping, in design space coordinates. - - varLib. - - .. versionadded:: 5.1 - """ - self.description = description - """string. 
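The matching and substitution logic of `evaluateConditions` and `processRules` can be sketched standalone. The helpers below (`condition_matches`, `apply_subs` are hypothetical names, not part of the designspaceLib API) mirror the semantics documented above: a missing `minimum` or `maximum` leaves the range open on that side, and unmatched glyph names pass through unchanged.

```python
def condition_matches(cond, value):
    # A condition dict has "name", "minimum", "maximum"; a missing bound
    # means the range is open on that side (mirrors evaluateConditions).
    lo, hi = cond.get("minimum"), cond.get("maximum")
    if lo is None:
        return value <= hi
    if hi is None:
        return lo <= value
    return lo <= value <= hi


def apply_subs(glyph_names, subs):
    # Apply (old, new) substitution pairs to a list of glyph names,
    # leaving names without a substitution untouched (the inner loop
    # of processRules).
    table = dict(subs)
    return [table.get(name, name) for name in glyph_names]
```

As in `processRules`, applying rules in sequence means a later rule sees the output of an earlier one, so rule order matters.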
A description of the mapping. - - varLib. - - .. versionadded:: 5.2 - """ - self.groupDescription = groupDescription - """string. A description of the group of mappings. - - varLib. - - .. versionadded:: 5.2 - """ - - -class InstanceDescriptor(SimpleDescriptor): - """Simple container for data related to the instance - - - .. code:: python - - i2 = InstanceDescriptor() - i2.path = instancePath2 - i2.familyName = "InstanceFamilyName" - i2.styleName = "InstanceStyleName" - i2.name = "instance.ufo2" - # anisotropic location - i2.designLocation = dict(weight=500, width=(400,300)) - i2.postScriptFontName = "InstancePostscriptName" - i2.styleMapFamilyName = "InstanceStyleMapFamilyName" - i2.styleMapStyleName = "InstanceStyleMapStyleName" - i2.lib['com.coolDesignspaceApp.specimenText'] = 'Hamburgerwhatever' - doc.addInstance(i2) - """ - - flavor = "instance" - _defaultLanguageCode = "en" - _attrs = [ - "filename", - "path", - "name", - "locationLabel", - "designLocation", - "userLocation", - "familyName", - "styleName", - "postScriptFontName", - "styleMapFamilyName", - "styleMapStyleName", - "localisedFamilyName", - "localisedStyleName", - "localisedStyleMapFamilyName", - "localisedStyleMapStyleName", - "glyphs", - "kerning", - "info", - "lib", - ] - - filename = posixpath_property("_filename") - path = posixpath_property("_path") - - def __init__( - self, - *, - filename=None, - path=None, - font=None, - name=None, - location=None, - locationLabel=None, - designLocation=None, - userLocation=None, - familyName=None, - styleName=None, - postScriptFontName=None, - styleMapFamilyName=None, - styleMapStyleName=None, - localisedFamilyName=None, - localisedStyleName=None, - localisedStyleMapFamilyName=None, - localisedStyleMapStyleName=None, - glyphs=None, - kerning=True, - info=True, - lib=None, - ): - self.filename = filename - """string. Relative path to the instance file, **as it is - in the document**. The file may or may not exist. - - MutatorMath + VarLib. 
- """ - self.path = path - """string. Absolute path to the instance file, calculated from - the document path and the string in the filename attr. The file may - or may not exist. - - MutatorMath. - """ - self.font = font - """Same as :attr:`SourceDescriptor.font` - - .. seealso:: :attr:`SourceDescriptor.font` - """ - self.name = name - """string. Unique identifier name of the instance, used to - identify it if it needs to be referenced from elsewhere in the - document. - """ - self.locationLabel = locationLabel - """Name of a :class:`LocationLabelDescriptor`. If - provided, the instance should have the same location as the - LocationLabel. - - .. seealso:: - :meth:`getFullDesignLocation` - :meth:`getFullUserLocation` - - .. versionadded:: 5.0 - """ - self.designLocation: AnisotropicLocationDict = ( - designLocation if designLocation is not None else (location or {}) - ) - """dict. Axis values for this instance, in design space coordinates. - - MutatorMath + varLib. - - .. seealso:: This may be only part of the full location. See: - :meth:`getFullDesignLocation` - :meth:`getFullUserLocation` - - .. versionadded:: 5.0 - """ - self.userLocation: SimpleLocationDict = userLocation or {} - """dict. Axis values for this instance, in user space coordinates. - - MutatorMath + varLib. - - .. seealso:: This may be only part of the full location. See: - :meth:`getFullDesignLocation` - :meth:`getFullUserLocation` - - .. versionadded:: 5.0 - """ - self.familyName = familyName - """string. Family name of this instance. - - MutatorMath + varLib. - """ - self.styleName = styleName - """string. Style name of this instance. - - MutatorMath + varLib. - """ - self.postScriptFontName = postScriptFontName - """string. Postscript fontname for this instance. - - MutatorMath + varLib. - """ - self.styleMapFamilyName = styleMapFamilyName - """string. StyleMap familyname for this instance. - - MutatorMath + varLib. - """ - self.styleMapStyleName = styleMapStyleName - """string. 
StyleMap stylename for this instance. - - MutatorMath + varLib. - """ - self.localisedFamilyName = localisedFamilyName or {} - """dict. A dictionary of localised family name - strings, keyed by language code. - """ - self.localisedStyleName = localisedStyleName or {} - """dict. A dictionary of localised stylename - strings, keyed by language code. - """ - self.localisedStyleMapFamilyName = localisedStyleMapFamilyName or {} - """A dictionary of localised style map - familyname strings, keyed by language code. - """ - self.localisedStyleMapStyleName = localisedStyleMapStyleName or {} - """A dictionary of localised style map - stylename strings, keyed by language code. - """ - self.glyphs = glyphs or {} - """dict of special master definitions for glyphs, in case glyphs - need special masters (to record the results of executed rules, for - example). - - MutatorMath. - - .. deprecated:: 5.0 - Use rules or sparse sources instead. - """ - self.kerning = kerning - """bool. Indicates if this instance needs its kerning - calculated. - - MutatorMath. - - .. deprecated:: 5.0 - """ - self.info = info - """bool. Indicates if this instance needs the interpolating - font.info calculated. - - .. deprecated:: 5.0 - """ - - self.lib = lib or {} - """Custom data associated with this instance.""" - - @property - def location(self): - """dict. Axis values for this instance. - - MutatorMath + varLib. - - .. deprecated:: 5.0 - Use the more explicit alias for this property :attr:`designLocation`. 
- """ - return self.designLocation - - @location.setter - def location(self, location: Optional[AnisotropicLocationDict]): - self.designLocation = location or {} - - def setStyleName(self, styleName, languageCode="en"): - """These methods give easier access to the localised names.""" - self.localisedStyleName[languageCode] = tostr(styleName) - - def getStyleName(self, languageCode="en"): - return self.localisedStyleName.get(languageCode) - - def setFamilyName(self, familyName, languageCode="en"): - self.localisedFamilyName[languageCode] = tostr(familyName) - - def getFamilyName(self, languageCode="en"): - return self.localisedFamilyName.get(languageCode) - - def setStyleMapStyleName(self, styleMapStyleName, languageCode="en"): - self.localisedStyleMapStyleName[languageCode] = tostr(styleMapStyleName) - - def getStyleMapStyleName(self, languageCode="en"): - return self.localisedStyleMapStyleName.get(languageCode) - - def setStyleMapFamilyName(self, styleMapFamilyName, languageCode="en"): - self.localisedStyleMapFamilyName[languageCode] = tostr(styleMapFamilyName) - - def getStyleMapFamilyName(self, languageCode="en"): - return self.localisedStyleMapFamilyName.get(languageCode) - - def clearLocation(self, axisName: Optional[str] = None): - """Clear all location-related fields. Ensures that - :attr:`designLocation` and :attr:`userLocation` are dictionaries - (possibly empty if clearing everything). - - In order to update the location of this instance wholesale, a user - should first clear all the fields, then change the field(s) for which - they have data. - - .. code:: python - - instance.clearLocation() - instance.designLocation = {'Weight': (34, 36.5), 'Width': 100} - instance.userLocation = {'Opsz': 16} - - In order to update a single axis location, the user should only clear - that axis, then edit the values: - - .. 
code:: python - - instance.clearLocation('Weight') - instance.designLocation['Weight'] = (34, 36.5) - - Args: - axisName: if provided, only clear the location for that axis. - - .. versionadded:: 5.0 - """ - self.locationLabel = None - if axisName is None: - self.designLocation = {} - self.userLocation = {} - else: - if self.designLocation is None: - self.designLocation = {} - if axisName in self.designLocation: - del self.designLocation[axisName] - if self.userLocation is None: - self.userLocation = {} - if axisName in self.userLocation: - del self.userLocation[axisName] - - def getLocationLabelDescriptor( - self, doc: "DesignSpaceDocument" - ) -> Optional[LocationLabelDescriptor]: - """Get the :class:`LocationLabelDescriptor` instance that matches - this instance's :attr:`locationLabel`. - - Raises if the named label can't be found. - - .. versionadded:: 5.0 - """ - if self.locationLabel is None: - return None - label = doc.getLocationLabel(self.locationLabel) - if label is None: - raise DesignSpaceDocumentError( - "InstanceDescriptor.getLocationLabelDescriptor(): " - f"unknown location label `{self.locationLabel}` in instance `{self.name}`." - ) - return label - - def getFullDesignLocation( - self, doc: "DesignSpaceDocument" - ) -> AnisotropicLocationDict: - """Get the complete design location of this instance, by combining data - from the various location fields, default axis values and mappings, and - top-level location labels. - - The source of truth for this instance's location is determined for each - axis independently by taking the first not-None field in this list: - - - ``locationLabel``: the location along this axis is the same as the - matching STAT format 4 label. No anisotropy. - - ``designLocation[axisName]``: the explicit design location along this - axis, possibly anisotropic. - - ``userLocation[axisName]``: the explicit user location along this - axis. No anisotropy. - - ``axis.default``: default axis value. No anisotropy. - - .. 
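The clearing semantics described for `clearLocation` can be sketched on plain dictionaries; `clear_axis_location` below is a hypothetical helper, not the designspaceLib API, assuming the two location dicts are passed in explicitly.

```python
def clear_axis_location(design_location, user_location, axis_name=None):
    # Mirrors InstanceDescriptor.clearLocation: clearing everything resets
    # both dicts; clearing one axis removes it from both the design and the
    # user location, leaving other axes untouched.
    if axis_name is None:
        return {}, {}
    design_location.pop(axis_name, None)
    user_location.pop(axis_name, None)
    return design_location, user_location
```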
versionadded:: 5.0 - """ - label = self.getLocationLabelDescriptor(doc) - if label is not None: - return doc.map_forward(label.userLocation) # type: ignore - result: AnisotropicLocationDict = {} - for axis in doc.axes: - if axis.name in self.designLocation: - result[axis.name] = self.designLocation[axis.name] - elif axis.name in self.userLocation: - result[axis.name] = axis.map_forward(self.userLocation[axis.name]) - else: - result[axis.name] = axis.map_forward(axis.default) - return result - - def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict: - """Get the complete user location for this instance. - - .. seealso:: :meth:`getFullDesignLocation` - - .. versionadded:: 5.0 - """ - return doc.map_backward(self.getFullDesignLocation(doc)) - - -def tagForAxisName(name): - # try to find or make a tag name for this axis name - names = { - "weight": ("wght", dict(en="Weight")), - "width": ("wdth", dict(en="Width")), - "optical": ("opsz", dict(en="Optical Size")), - "slant": ("slnt", dict(en="Slant")), - "italic": ("ital", dict(en="Italic")), - } - if name.lower() in names: - return names[name.lower()] - if len(name) < 4: - tag = name + "*" * (4 - len(name)) - else: - tag = name[:4] - return tag, dict(en=name) - - -class AbstractAxisDescriptor(SimpleDescriptor): - flavor = "axis" - - def __init__( - self, - *, - tag=None, - name=None, - labelNames=None, - hidden=False, - map=None, - axisOrdering=None, - axisLabels=None, - ): - # opentype tag for this axis - self.tag = tag - """string. Four letter tag for this axis. Some might be - registered at the `OpenType - specification `__. - Privately-defined axis tags must begin with an uppercase letter and - use only uppercase letters or digits. - """ - # name of the axis used in locations - self.name = name - """string. Name of the axis as it is used in the location dicts. - - MutatorMath + varLib. 
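The per-axis precedence that `getFullDesignLocation` implements (explicit design value, then mapped user value, then mapped axis default) can be sketched as follows; `resolve_axis_value` is a hypothetical helper, and the location-label branch is omitted for brevity.

```python
def resolve_axis_value(axis_name, design_location, user_location, default,
                       map_forward=lambda v: v):
    # First not-None source wins: explicit design-space value, then the
    # user-space value pushed through the axis map, then the mapped default.
    if axis_name in design_location:
        return design_location[axis_name]
    if axis_name in user_location:
        return map_forward(user_location[axis_name])
    return map_forward(default)
```

Note that only the design-space branch may carry an anisotropic `(x, y)` tuple; user values and defaults are scalars that go through `map_forward`.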
- """ - # names for UI purposes, if this is not a standard axis, - self.labelNames = labelNames or {} - """dict. When defining a non-registered axis, it will be - necessary to define user-facing readable names for the axis. Keyed by - xml:lang code. Values are required to be ``unicode`` strings, even if - they only contain ASCII characters. - """ - self.hidden = hidden - """bool. Whether this axis should be hidden in user interfaces. - """ - self.map = map or [] - """list of input / output values that can describe a warp of user space - to design space coordinates. If no map values are present, it is assumed - user space is the same as design space, as in [(minimum, minimum), - (maximum, maximum)]. - - varLib. - """ - self.axisOrdering = axisOrdering - """STAT table field ``axisOrdering``. - - See: `OTSpec STAT Axis Record `_ - - .. versionadded:: 5.0 - """ - self.axisLabels: List[AxisLabelDescriptor] = axisLabels or [] - """STAT table entries for Axis Value Tables format 1, 2, 3. - - See: `OTSpec STAT Axis Value Tables `_ - - .. versionadded:: 5.0 - """ - - -class AxisDescriptor(AbstractAxisDescriptor): - """Simple container for the axis data. - - Add more localisations? - - .. 
code:: python - - a1 = AxisDescriptor() - a1.minimum = 1 - a1.maximum = 1000 - a1.default = 400 - a1.name = "weight" - a1.tag = "wght" - a1.labelNames['fa-IR'] = "Ω‚Ψ·Ψ±" - a1.labelNames['en'] = "Wéíght" - a1.map = [(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)] - a1.axisOrdering = 1 - a1.axisLabels = [ - AxisLabelDescriptor(name="Regular", userValue=400, elidable=True) - ] - doc.addAxis(a1) - """ - - _attrs = [ - "tag", - "name", - "maximum", - "minimum", - "default", - "map", - "axisOrdering", - "axisLabels", - ] - - def __init__( - self, - *, - tag=None, - name=None, - labelNames=None, - minimum=None, - default=None, - maximum=None, - hidden=False, - map=None, - axisOrdering=None, - axisLabels=None, - ): - super().__init__( - tag=tag, - name=name, - labelNames=labelNames, - hidden=hidden, - map=map, - axisOrdering=axisOrdering, - axisLabels=axisLabels, - ) - self.minimum = minimum - """number. The minimum value for this axis in user space. - - MutatorMath + varLib. - """ - self.maximum = maximum - """number. The maximum value for this axis in user space. - - MutatorMath + varLib. - """ - self.default = default - """number. The default value for this axis, i.e. when a new location is - created, this is the value this axis will get in user space. - - MutatorMath + varLib. 
- """ - - def serialize(self): - # output to a dict, used in testing - return dict( - tag=self.tag, - name=self.name, - labelNames=self.labelNames, - maximum=self.maximum, - minimum=self.minimum, - default=self.default, - hidden=self.hidden, - map=self.map, - axisOrdering=self.axisOrdering, - axisLabels=self.axisLabels, - ) - - def map_forward(self, v): - """Maps value from axis mapping's input (user) to output (design).""" - from fontTools.varLib.models import piecewiseLinearMap - - if not self.map: - return v - return piecewiseLinearMap(v, {k: v for k, v in self.map}) - - def map_backward(self, v): - """Maps value from axis mapping's output (design) to input (user).""" - from fontTools.varLib.models import piecewiseLinearMap - - if isinstance(v, tuple): - v = v[0] - if not self.map: - return v - return piecewiseLinearMap(v, {v: k for k, v in self.map}) - - -class DiscreteAxisDescriptor(AbstractAxisDescriptor): - """Container for discrete axis data. - - Use this for axes that do not interpolate. The main difference from a - continuous axis is that a continuous axis has a ``minimum`` and ``maximum``, - while a discrete axis has a list of ``values``. - - Example: an Italic axis with 2 stops, Roman and Italic, that are not - compatible. The axis still makes it possible to bind the full font family - together, which is useful for the STAT table; however, it can't become a - variation axis in a VF. - - .. code:: python - - a2 = DiscreteAxisDescriptor() - a2.values = [0, 1] - a2.default = 0 - a2.name = "Italic" - a2.tag = "ITAL" - a2.labelNames['fr'] = "Italique" - a2.map = [(0, 0), (1, -11)] - a2.axisOrdering = 2 - a2.axisLabels = [ - AxisLabelDescriptor(name="Roman", userValue=0, elidable=True) - ] - doc.addAxis(a2) - - .. 
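`AxisDescriptor.map_forward` delegates to `fontTools.varLib.models.piecewiseLinearMap`. The sketch below reimplements that interpolation for illustration only (assuming the fontTools convention that out-of-range inputs are offset linearly past the nearest mapped key); the real implementation should be preferred.

```python
import bisect


def piecewise_linear_map(v, mapping):
    # mapping is {input: output}. Exact keys map directly; values between
    # two keys interpolate linearly; values outside the key range are
    # shifted by the edge key's offset (fontTools convention, assumed here).
    keys = sorted(mapping)
    if not keys:
        return v
    if v in mapping:
        return mapping[v]
    lo, hi = keys[0], keys[-1]
    if v < lo:
        return v - lo + mapping[lo]
    if v > hi:
        return v - hi + mapping[hi]
    # interpolate between the two bracketing input values
    i = bisect.bisect_right(keys, v)
    a, b = keys[i - 1], keys[i]
    return mapping[a] + (v - a) * (mapping[b] - mapping[a]) / (b - a)
```

With the `AxisDescriptor` example map `[(1.0, 10.0), (400.0, 66.0), (1000.0, 990.0)]`, a user value of 200.5 sits halfway between 1 and 400 and so maps halfway between 10 and 66.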
versionadded:: 5.0 - """ - - flavor = "axis" - _attrs = ("tag", "name", "values", "default", "map", "axisOrdering", "axisLabels") - - def __init__( - self, - *, - tag=None, - name=None, - labelNames=None, - values=None, - default=None, - hidden=False, - map=None, - axisOrdering=None, - axisLabels=None, - ): - super().__init__( - tag=tag, - name=name, - labelNames=labelNames, - hidden=hidden, - map=map, - axisOrdering=axisOrdering, - axisLabels=axisLabels, - ) - self.default: float = default - """The default value for this axis, i.e. when a new location is - created, this is the value this axis will get in user space. - - However, this default value is less important than in continuous axes: - - - it doesn't define the "neutral" version of outlines from which - deltas would apply, as this axis does not interpolate. - - it doesn't provide the reference glyph set for the designspace, as - fonts at each value can have different glyph sets. - """ - self.values: List[float] = values or [] - """List of possible values for this axis. Contrary to continuous axes, - only the values in this list can be taken by the axis, nothing in-between. - """ - - def map_forward(self, value): - """Maps value from axis mapping's input to output. - - Returns value unchanged if no mapping entry is found. - - Note: for discrete axes, each value must have its mapping entry, if - you intend that value to be mapped. - """ - return next((v for k, v in self.map if k == value), value) - - def map_backward(self, value): - """Maps value from axis mapping's output to input. - - Returns value unchanged if no mapping entry is found. - - Note: for discrete axes, each value must have its mapping entry, if - you intend that value to be mapped. - """ - if isinstance(value, tuple): - value = value[0] - return next((k for k, v in self.map if v == value), value) - - -class AxisLabelDescriptor(SimpleDescriptor): - """Container for axis label data. 
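Unlike the continuous case, `DiscreteAxisDescriptor.map_forward` is an exact-match lookup: nothing is interpolated, and values without a mapping entry pass through unchanged. A minimal standalone sketch (hypothetical `discrete_map_forward` helper):

```python
def discrete_map_forward(axis_map, value):
    # axis_map is a list of (input, output) pairs; return the output for an
    # exact input match, otherwise the value unchanged (mirrors
    # DiscreteAxisDescriptor.map_forward).
    return next((out for inp, out in axis_map if inp == value), value)
```

This is why, for discrete axes, every value you intend to map must have its own entry.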
- - Analogue of OpenType's STAT data for a single axis (formats 1, 2 and 3). - All values are user values. - See: `OTSpec STAT Axis value table, format 1, 2, 3 `_ - - The STAT format of the Axis value depends on which fields are filled in, - see :meth:`getFormat` - - .. versionadded:: 5.0 - """ - - flavor = "label" - _attrs = ( - "userMinimum", - "userValue", - "userMaximum", - "name", - "elidable", - "olderSibling", - "linkedUserValue", - "labelNames", - ) - - def __init__( - self, - *, - name, - userValue, - userMinimum=None, - userMaximum=None, - elidable=False, - olderSibling=False, - linkedUserValue=None, - labelNames=None, - ): - self.userMinimum: Optional[float] = userMinimum - """STAT field ``rangeMinValue`` (format 2).""" - self.userValue: float = userValue - """STAT field ``value`` (format 1, 3) or ``nominalValue`` (format 2).""" - self.userMaximum: Optional[float] = userMaximum - """STAT field ``rangeMaxValue`` (format 2).""" - self.name: str = name - """Label for this axis location, STAT field ``valueNameID``.""" - self.elidable: bool = elidable - """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``. - - See: `OTSpec STAT Flags `_ - """ - self.olderSibling: bool = olderSibling - """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``. - - See: `OTSpec STAT Flags `_ - """ - self.linkedUserValue: Optional[float] = linkedUserValue - """STAT field ``linkedValue`` (format 3).""" - self.labelNames: MutableMapping[str, str] = labelNames or {} - """User-facing translations of this location's label. Keyed by - ``xml:lang`` code. - """ - - def getFormat(self) -> int: - """Determine which format of STAT Axis value to use to encode this label. 
- - =========== ========= =========== =========== =============== - STAT Format userValue userMinimum userMaximum linkedUserValue - =========== ========= =========== =========== =============== - 1 βœ… ❌ ❌ ❌ - 2 βœ… βœ… βœ… ❌ - 3 βœ… ❌ ❌ βœ… - =========== ========= =========== =========== =============== - """ - if self.linkedUserValue is not None: - return 3 - if self.userMinimum is not None or self.userMaximum is not None: - return 2 - return 1 - - @property - def defaultName(self) -> str: - """Return the English name from :attr:`labelNames` or the :attr:`name`.""" - return self.labelNames.get("en") or self.name - - -class LocationLabelDescriptor(SimpleDescriptor): - """Container for location label data. - - Analogue of OpenType's STAT data for a free-floating location (format 4). - All values are user values. - - See: `OTSpec STAT Axis value table, format 4 `_ - - .. versionadded:: 5.0 - """ - - flavor = "label" - _attrs = ("name", "elidable", "olderSibling", "userLocation", "labelNames") - - def __init__( - self, - *, - name, - userLocation, - elidable=False, - olderSibling=False, - labelNames=None, - ): - self.name: str = name - """Label for this named location, STAT field ``valueNameID``.""" - self.userLocation: SimpleLocationDict = userLocation or {} - """Location in user coordinates along each axis. - - If an axis is not mentioned, it is assumed to be at its default location. - - .. seealso:: This may be only part of the full location. See: - :meth:`getFullUserLocation` - """ - self.elidable: bool = elidable - """STAT flag ``ELIDABLE_AXIS_VALUE_NAME``. - - See: `OTSpec STAT Flags `_ - """ - self.olderSibling: bool = olderSibling - """STAT flag ``OLDER_SIBLING_FONT_ATTRIBUTE``. - - See: `OTSpec STAT Flags `_ - """ - self.labelNames: Dict[str, str] = labelNames or {} - """User-facing translations of this location's label. Keyed by - xml:lang code. 
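The decision table implemented by `getFormat` reduces to two checks, in priority order. As a standalone sketch (hypothetical `stat_label_format` helper mirroring the method's logic):

```python
def stat_label_format(user_value, user_minimum=None, user_maximum=None,
                      linked_user_value=None):
    # A linked value forces format 3; a range (either bound) forces
    # format 2; otherwise the label is a plain format 1 value.
    if linked_user_value is not None:
        return 3
    if user_minimum is not None or user_maximum is not None:
        return 2
    return 1
```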
- """ - - @property - def defaultName(self) -> str: - """Return the English name from :attr:`labelNames` or the :attr:`name`.""" - return self.labelNames.get("en") or self.name - - def getFullUserLocation(self, doc: "DesignSpaceDocument") -> SimpleLocationDict: - """Get the complete user location of this label, by combining data - from the explicit user location and default axis values. - - .. versionadded:: 5.0 - """ - return { - axis.name: self.userLocation.get(axis.name, axis.default) - for axis in doc.axes - } - - -class VariableFontDescriptor(SimpleDescriptor): - """Container for variable fonts, sub-spaces of the Designspace. - - Use-cases: - - - From a single DesignSpace with discrete axes, define 1 variable font - per value on the discrete axes. Before version 5, you would have needed - 1 DesignSpace per such variable font, and a lot of data duplication. - - From a big variable font with many axes, define subsets of that variable - font that only include some axes and freeze other axes at a given location. - - .. versionadded:: 5.0 - """ - - flavor = "variable-font" - _attrs = ("filename", "axisSubsets", "lib") - - filename = posixpath_property("_filename") - - def __init__(self, *, name, filename=None, axisSubsets=None, lib=None): - self.name: str = name - """string, required. Name of this variable to identify it during the - build process and from other parts of the document, and also as a - filename in case the filename property is empty. - - VarLib. - """ - self.filename: str = filename - """string, optional. Relative path to the variable font file, **as it is - in the document**. The file may or may not exist. - - If not specified, the :attr:`name` will be used as a basename for the file. - """ - self.axisSubsets: List[ - Union[RangeAxisSubsetDescriptor, ValueAxisSubsetDescriptor] - ] = (axisSubsets or []) - """Axis subsets to include in this variable font. 
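The default-filling behavior of `LocationLabelDescriptor.getFullUserLocation` amounts to a dict comprehension over the document's axes. Sketch, with `full_user_location` as a hypothetical helper that takes the axis defaults as a plain dict instead of a `DesignSpaceDocument`:

```python
def full_user_location(user_location, axis_defaults):
    # Axes absent from the explicit (possibly partial) user location fall
    # back to their default value.
    return {
        name: user_location.get(name, default)
        for name, default in axis_defaults.items()
    }
```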
- - If an axis is not mentioned, assume that we only want the default - location of that axis (same as a :class:`ValueAxisSubsetDescriptor`). - """ - self.lib: MutableMapping[str, Any] = lib or {} - """Custom data associated with this variable font.""" - - -class RangeAxisSubsetDescriptor(SimpleDescriptor): - """Subset of a continuous axis to include in a variable font. - - .. versionadded:: 5.0 - """ - - flavor = "axis-subset" - _attrs = ("name", "userMinimum", "userDefault", "userMaximum") - - def __init__( - self, *, name, userMinimum=-math.inf, userDefault=None, userMaximum=math.inf - ): - self.name: str = name - """Name of the :class:`AxisDescriptor` to subset.""" - self.userMinimum: float = userMinimum - """New minimum value of the axis in the target variable font. - If not specified, assume the same minimum value as the full axis. - (default = ``-math.inf``) - """ - self.userDefault: Optional[float] = userDefault - """New default value of the axis in the target variable font. - If not specified, assume the same default value as the full axis. - (default = ``None``) - """ - self.userMaximum: float = userMaximum - """New maximum value of the axis in the target variable font. - If not specified, assume the same maximum value as the full axis. - (default = ``math.inf``) - """ - - -class ValueAxisSubsetDescriptor(SimpleDescriptor): - """Single value of a discrete or continuous axis to use in a variable font. - - .. versionadded:: 5.0 - """ - - flavor = "axis-subset" - _attrs = ("name", "userValue") - - def __init__(self, *, name, userValue): - self.name: str = name - """Name of the :class:`AxisDescriptor` or :class:`DiscreteAxisDescriptor` - to "snapshot" or "freeze". 
- """ - self.userValue: float = userValue - """Value in user coordinates at which to freeze the given axis.""" - - -class BaseDocWriter(object): - _whiteSpace = " " - axisDescriptorClass = AxisDescriptor - discreteAxisDescriptorClass = DiscreteAxisDescriptor - axisLabelDescriptorClass = AxisLabelDescriptor - axisMappingDescriptorClass = AxisMappingDescriptor - locationLabelDescriptorClass = LocationLabelDescriptor - ruleDescriptorClass = RuleDescriptor - sourceDescriptorClass = SourceDescriptor - variableFontDescriptorClass = VariableFontDescriptor - valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor - rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor - instanceDescriptorClass = InstanceDescriptor - - @classmethod - def getAxisDecriptor(cls): - return cls.axisDescriptorClass() - - @classmethod - def getAxisMappingDescriptor(cls): - return cls.axisMappingDescriptorClass() - - @classmethod - def getSourceDescriptor(cls): - return cls.sourceDescriptorClass() - - @classmethod - def getInstanceDescriptor(cls): - return cls.instanceDescriptorClass() - - @classmethod - def getRuleDescriptor(cls): - return cls.ruleDescriptorClass() - - def __init__(self, documentPath, documentObject: DesignSpaceDocument): - self.path = documentPath - self.documentObject = documentObject - self.effectiveFormatTuple = self._getEffectiveFormatTuple() - self.root = ET.Element("designspace") - - def write(self, pretty=True, encoding="UTF-8", xml_declaration=True): - self.root.attrib["format"] = ".".join(str(i) for i in self.effectiveFormatTuple) - - if ( - self.documentObject.axes - or self.documentObject.axisMappings - or self.documentObject.elidedFallbackName is not None - ): - axesElement = ET.Element("axes") - if self.documentObject.elidedFallbackName is not None: - axesElement.attrib["elidedfallbackname"] = ( - self.documentObject.elidedFallbackName - ) - self.root.append(axesElement) - for axisObject in self.documentObject.axes: - self._addAxis(axisObject) - - if 
self.documentObject.axisMappings: - mappingsElement = None - lastGroup = object() - for mappingObject in self.documentObject.axisMappings: - if getattr(mappingObject, "groupDescription", None) != lastGroup: - if mappingsElement is not None: - self.root.findall(".axes")[0].append(mappingsElement) - lastGroup = getattr(mappingObject, "groupDescription", None) - mappingsElement = ET.Element("mappings") - if lastGroup is not None: - mappingsElement.attrib["description"] = lastGroup - self._addAxisMapping(mappingsElement, mappingObject) - if mappingsElement is not None: - self.root.findall(".axes")[0].append(mappingsElement) - - if self.documentObject.locationLabels: - labelsElement = ET.Element("labels") - for labelObject in self.documentObject.locationLabels: - self._addLocationLabel(labelsElement, labelObject) - self.root.append(labelsElement) - - if self.documentObject.rules: - if getattr(self.documentObject, "rulesProcessingLast", False): - attributes = {"processing": "last"} - else: - attributes = {} - self.root.append(ET.Element("rules", attributes)) - for ruleObject in self.documentObject.rules: - self._addRule(ruleObject) - - if self.documentObject.sources: - self.root.append(ET.Element("sources")) - for sourceObject in self.documentObject.sources: - self._addSource(sourceObject) - - if self.documentObject.variableFonts: - variableFontsElement = ET.Element("variable-fonts") - for variableFont in self.documentObject.variableFonts: - self._addVariableFont(variableFontsElement, variableFont) - self.root.append(variableFontsElement) - - if self.documentObject.instances: - self.root.append(ET.Element("instances")) - for instanceObject in self.documentObject.instances: - self._addInstance(instanceObject) - - if self.documentObject.lib: - self._addLib(self.root, self.documentObject.lib, 2) - - tree = ET.ElementTree(self.root) - tree.write( - self.path, - encoding=encoding, - method="xml", - xml_declaration=xml_declaration, - pretty_print=pretty, - ) - - def 
_getEffectiveFormatTuple(self): - """Try to use the version specified in the document, or a sufficiently - recent version to be able to encode what the document contains. - """ - minVersion = self.documentObject.formatTuple - if ( - any( - hasattr(axis, "values") - or axis.axisOrdering is not None - or axis.axisLabels - for axis in self.documentObject.axes - ) - or self.documentObject.locationLabels - or any(source.localisedFamilyName for source in self.documentObject.sources) - or self.documentObject.variableFonts - or any( - instance.locationLabel or instance.userLocation - for instance in self.documentObject.instances - ) - ): - if minVersion < (5, 0): - minVersion = (5, 0) - if self.documentObject.axisMappings: - if minVersion < (5, 1): - minVersion = (5, 1) - return minVersion - - def _makeLocationElement(self, locationObject, name=None): - """Convert Location dict to a locationElement.""" - locElement = ET.Element("location") - if name is not None: - locElement.attrib["name"] = name - validatedLocation = self.documentObject.newDefaultLocation() - for axisName, axisValue in locationObject.items(): - if axisName in validatedLocation: - # only accept values we know - validatedLocation[axisName] = axisValue - for dimensionName, dimensionValue in validatedLocation.items(): - dimElement = ET.Element("dimension") - dimElement.attrib["name"] = dimensionName - if type(dimensionValue) == tuple: - dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue[0]) - dimElement.attrib["yvalue"] = self.intOrFloat(dimensionValue[1]) - else: - dimElement.attrib["xvalue"] = self.intOrFloat(dimensionValue) - locElement.append(dimElement) - return locElement, validatedLocation - - def intOrFloat(self, num): - if int(num) == num: - return "%d" % num - return ("%f" % num).rstrip("0").rstrip(".") - - def _addRule(self, ruleObject): - # if none of the conditions have minimum or maximum values, do not add the rule. 
- ruleElement = ET.Element("rule") - if ruleObject.name is not None: - ruleElement.attrib["name"] = ruleObject.name - for conditions in ruleObject.conditionSets: - conditionsetElement = ET.Element("conditionset") - for cond in conditions: - if cond.get("minimum") is None and cond.get("maximum") is None: - # neither is defined, don't add this condition - continue - conditionElement = ET.Element("condition") - conditionElement.attrib["name"] = cond.get("name") - if cond.get("minimum") is not None: - conditionElement.attrib["minimum"] = self.intOrFloat( - cond.get("minimum") - ) - if cond.get("maximum") is not None: - conditionElement.attrib["maximum"] = self.intOrFloat( - cond.get("maximum") - ) - conditionsetElement.append(conditionElement) - if len(conditionsetElement): - ruleElement.append(conditionsetElement) - for sub in ruleObject.subs: - subElement = ET.Element("sub") - subElement.attrib["name"] = sub[0] - subElement.attrib["with"] = sub[1] - ruleElement.append(subElement) - if len(ruleElement): - self.root.findall(".rules")[0].append(ruleElement) - - def _addAxis(self, axisObject): - axisElement = ET.Element("axis") - axisElement.attrib["tag"] = axisObject.tag - axisElement.attrib["name"] = axisObject.name - self._addLabelNames(axisElement, axisObject.labelNames) - if axisObject.map: - for inputValue, outputValue in axisObject.map: - mapElement = ET.Element("map") - mapElement.attrib["input"] = self.intOrFloat(inputValue) - mapElement.attrib["output"] = self.intOrFloat(outputValue) - axisElement.append(mapElement) - if axisObject.axisOrdering is not None or axisObject.axisLabels: - labelsElement = ET.Element("labels") - if axisObject.axisOrdering is not None: - labelsElement.attrib["ordering"] = str(axisObject.axisOrdering) - for label in axisObject.axisLabels: - self._addAxisLabel(labelsElement, label) - axisElement.append(labelsElement) - if hasattr(axisObject, "minimum"): - axisElement.attrib["minimum"] = self.intOrFloat(axisObject.minimum) - 
axisElement.attrib["maximum"] = self.intOrFloat(axisObject.maximum) - elif hasattr(axisObject, "values"): - axisElement.attrib["values"] = " ".join( - self.intOrFloat(v) for v in axisObject.values - ) - axisElement.attrib["default"] = self.intOrFloat(axisObject.default) - if axisObject.hidden: - axisElement.attrib["hidden"] = "1" - self.root.findall(".axes")[0].append(axisElement) - - def _addAxisMapping(self, mappingsElement, mappingObject): - mappingElement = ET.Element("mapping") - if getattr(mappingObject, "description", None) is not None: - mappingElement.attrib["description"] = mappingObject.description - for what in ("inputLocation", "outputLocation"): - whatObject = getattr(mappingObject, what, None) - if whatObject is None: - continue - whatElement = ET.Element(what[:-8]) - mappingElement.append(whatElement) - - for name, value in whatObject.items(): - dimensionElement = ET.Element("dimension") - dimensionElement.attrib["name"] = name - dimensionElement.attrib["xvalue"] = self.intOrFloat(value) - whatElement.append(dimensionElement) - - mappingsElement.append(mappingElement) - - def _addAxisLabel( - self, axisElement: ET.Element, label: AxisLabelDescriptor - ) -> None: - labelElement = ET.Element("label") - labelElement.attrib["uservalue"] = self.intOrFloat(label.userValue) - if label.userMinimum is not None: - labelElement.attrib["userminimum"] = self.intOrFloat(label.userMinimum) - if label.userMaximum is not None: - labelElement.attrib["usermaximum"] = self.intOrFloat(label.userMaximum) - labelElement.attrib["name"] = label.name - if label.elidable: - labelElement.attrib["elidable"] = "true" - if label.olderSibling: - labelElement.attrib["oldersibling"] = "true" - if label.linkedUserValue is not None: - labelElement.attrib["linkeduservalue"] = self.intOrFloat( - label.linkedUserValue - ) - self._addLabelNames(labelElement, label.labelNames) - axisElement.append(labelElement) - - def _addLabelNames(self, parentElement, labelNames): - for languageCode, 
labelName in sorted(labelNames.items()): - languageElement = ET.Element("labelname") - languageElement.attrib[XML_LANG] = languageCode - languageElement.text = labelName - parentElement.append(languageElement) - - def _addLocationLabel( - self, parentElement: ET.Element, label: LocationLabelDescriptor - ) -> None: - labelElement = ET.Element("label") - labelElement.attrib["name"] = label.name - if label.elidable: - labelElement.attrib["elidable"] = "true" - if label.olderSibling: - labelElement.attrib["oldersibling"] = "true" - self._addLabelNames(labelElement, label.labelNames) - self._addLocationElement(labelElement, userLocation=label.userLocation) - parentElement.append(labelElement) - - def _addLocationElement( - self, - parentElement, - *, - designLocation: AnisotropicLocationDict = None, - userLocation: SimpleLocationDict = None, - ): - locElement = ET.Element("location") - for axis in self.documentObject.axes: - if designLocation is not None and axis.name in designLocation: - dimElement = ET.Element("dimension") - dimElement.attrib["name"] = axis.name - value = designLocation[axis.name] - if isinstance(value, tuple): - dimElement.attrib["xvalue"] = self.intOrFloat(value[0]) - dimElement.attrib["yvalue"] = self.intOrFloat(value[1]) - else: - dimElement.attrib["xvalue"] = self.intOrFloat(value) - locElement.append(dimElement) - elif userLocation is not None and axis.name in userLocation: - dimElement = ET.Element("dimension") - dimElement.attrib["name"] = axis.name - value = userLocation[axis.name] - dimElement.attrib["uservalue"] = self.intOrFloat(value) - locElement.append(dimElement) - if len(locElement) > 0: - parentElement.append(locElement) - - def _addInstance(self, instanceObject): - instanceElement = ET.Element("instance") - if instanceObject.name is not None: - instanceElement.attrib["name"] = instanceObject.name - if instanceObject.locationLabel is not None: - instanceElement.attrib["location"] = instanceObject.locationLabel - if 
instanceObject.familyName is not None: - instanceElement.attrib["familyname"] = instanceObject.familyName - if instanceObject.styleName is not None: - instanceElement.attrib["stylename"] = instanceObject.styleName - # add localisations - if instanceObject.localisedStyleName: - languageCodes = list(instanceObject.localisedStyleName.keys()) - languageCodes.sort() - for code in languageCodes: - if code == "en": - continue # already stored in the element attribute - localisedStyleNameElement = ET.Element("stylename") - localisedStyleNameElement.attrib[XML_LANG] = code - localisedStyleNameElement.text = instanceObject.getStyleName(code) - instanceElement.append(localisedStyleNameElement) - if instanceObject.localisedFamilyName: - languageCodes = list(instanceObject.localisedFamilyName.keys()) - languageCodes.sort() - for code in languageCodes: - if code == "en": - continue # already stored in the element attribute - localisedFamilyNameElement = ET.Element("familyname") - localisedFamilyNameElement.attrib[XML_LANG] = code - localisedFamilyNameElement.text = instanceObject.getFamilyName(code) - instanceElement.append(localisedFamilyNameElement) - if instanceObject.localisedStyleMapStyleName: - languageCodes = list(instanceObject.localisedStyleMapStyleName.keys()) - languageCodes.sort() - for code in languageCodes: - if code == "en": - continue - localisedStyleMapStyleNameElement = ET.Element("stylemapstylename") - localisedStyleMapStyleNameElement.attrib[XML_LANG] = code - localisedStyleMapStyleNameElement.text = ( - instanceObject.getStyleMapStyleName(code) - ) - instanceElement.append(localisedStyleMapStyleNameElement) - if instanceObject.localisedStyleMapFamilyName: - languageCodes = list(instanceObject.localisedStyleMapFamilyName.keys()) - languageCodes.sort() - for code in languageCodes: - if code == "en": - continue - localisedStyleMapFamilyNameElement = ET.Element("stylemapfamilyname") - localisedStyleMapFamilyNameElement.attrib[XML_LANG] = code - 
localisedStyleMapFamilyNameElement.text = ( - instanceObject.getStyleMapFamilyName(code) - ) - instanceElement.append(localisedStyleMapFamilyNameElement) - - if self.effectiveFormatTuple >= (5, 0): - if instanceObject.locationLabel is None: - self._addLocationElement( - instanceElement, - designLocation=instanceObject.designLocation, - userLocation=instanceObject.userLocation, - ) - else: - # Pre-version 5.0 code was validating and filling in the location - # dict while writing it out, as preserved below. - if instanceObject.location is not None: - locationElement, instanceObject.location = self._makeLocationElement( - instanceObject.location - ) - instanceElement.append(locationElement) - if instanceObject.filename is not None: - instanceElement.attrib["filename"] = instanceObject.filename - if instanceObject.postScriptFontName is not None: - instanceElement.attrib["postscriptfontname"] = ( - instanceObject.postScriptFontName - ) - if instanceObject.styleMapFamilyName is not None: - instanceElement.attrib["stylemapfamilyname"] = ( - instanceObject.styleMapFamilyName - ) - if instanceObject.styleMapStyleName is not None: - instanceElement.attrib["stylemapstylename"] = ( - instanceObject.styleMapStyleName - ) - if self.effectiveFormatTuple < (5, 0): - # Deprecated members as of version 5.0 - if instanceObject.glyphs: - if instanceElement.findall(".glyphs") == []: - glyphsElement = ET.Element("glyphs") - instanceElement.append(glyphsElement) - glyphsElement = instanceElement.findall(".glyphs")[0] - for glyphName, data in sorted(instanceObject.glyphs.items()): - glyphElement = self._writeGlyphElement( - instanceElement, instanceObject, glyphName, data - ) - glyphsElement.append(glyphElement) - if instanceObject.kerning: - kerningElement = ET.Element("kerning") - instanceElement.append(kerningElement) - if instanceObject.info: - infoElement = ET.Element("info") - instanceElement.append(infoElement) - self._addLib(instanceElement, instanceObject.lib, 4) - 
self.root.findall(".instances")[0].append(instanceElement) - - def _addSource(self, sourceObject): - sourceElement = ET.Element("source") - if sourceObject.filename is not None: - sourceElement.attrib["filename"] = sourceObject.filename - if sourceObject.name is not None: - if sourceObject.name.find("temp_master") != 0: - # do not save temporary source names - sourceElement.attrib["name"] = sourceObject.name - if sourceObject.familyName is not None: - sourceElement.attrib["familyname"] = sourceObject.familyName - if sourceObject.styleName is not None: - sourceElement.attrib["stylename"] = sourceObject.styleName - if sourceObject.layerName is not None: - sourceElement.attrib["layer"] = sourceObject.layerName - if sourceObject.localisedFamilyName: - languageCodes = list(sourceObject.localisedFamilyName.keys()) - languageCodes.sort() - for code in languageCodes: - if code == "en": - continue # already stored in the element attribute - localisedFamilyNameElement = ET.Element("familyname") - localisedFamilyNameElement.attrib[XML_LANG] = code - localisedFamilyNameElement.text = sourceObject.getFamilyName(code) - sourceElement.append(localisedFamilyNameElement) - if sourceObject.copyLib: - libElement = ET.Element("lib") - libElement.attrib["copy"] = "1" - sourceElement.append(libElement) - if sourceObject.copyGroups: - groupsElement = ET.Element("groups") - groupsElement.attrib["copy"] = "1" - sourceElement.append(groupsElement) - if sourceObject.copyFeatures: - featuresElement = ET.Element("features") - featuresElement.attrib["copy"] = "1" - sourceElement.append(featuresElement) - if sourceObject.copyInfo or sourceObject.muteInfo: - infoElement = ET.Element("info") - if sourceObject.copyInfo: - infoElement.attrib["copy"] = "1" - if sourceObject.muteInfo: - infoElement.attrib["mute"] = "1" - sourceElement.append(infoElement) - if sourceObject.muteKerning: - kerningElement = ET.Element("kerning") - kerningElement.attrib["mute"] = "1" - sourceElement.append(kerningElement) 
- if sourceObject.mutedGlyphNames: - for name in sourceObject.mutedGlyphNames: - glyphElement = ET.Element("glyph") - glyphElement.attrib["name"] = name - glyphElement.attrib["mute"] = "1" - sourceElement.append(glyphElement) - if self.effectiveFormatTuple >= (5, 0): - self._addLocationElement( - sourceElement, designLocation=sourceObject.location - ) - else: - # Pre-version 5.0 code was validating and filling in the location - # dict while writing it out, as preserved below. - locationElement, sourceObject.location = self._makeLocationElement( - sourceObject.location - ) - sourceElement.append(locationElement) - self.root.findall(".sources")[0].append(sourceElement) - - def _addVariableFont( - self, parentElement: ET.Element, vf: VariableFontDescriptor - ) -> None: - vfElement = ET.Element("variable-font") - vfElement.attrib["name"] = vf.name - if vf.filename is not None: - vfElement.attrib["filename"] = vf.filename - if vf.axisSubsets: - subsetsElement = ET.Element("axis-subsets") - for subset in vf.axisSubsets: - subsetElement = ET.Element("axis-subset") - subsetElement.attrib["name"] = subset.name - # Mypy doesn't support narrowing union types via hasattr() - # https://mypy.readthedocs.io/en/stable/type_narrowing.html - # TODO(Python 3.10): use TypeGuard - if hasattr(subset, "userMinimum"): - subset = cast(RangeAxisSubsetDescriptor, subset) - if subset.userMinimum != -math.inf: - subsetElement.attrib["userminimum"] = self.intOrFloat( - subset.userMinimum - ) - if subset.userMaximum != math.inf: - subsetElement.attrib["usermaximum"] = self.intOrFloat( - subset.userMaximum - ) - if subset.userDefault is not None: - subsetElement.attrib["userdefault"] = self.intOrFloat( - subset.userDefault - ) - elif hasattr(subset, "userValue"): - subset = cast(ValueAxisSubsetDescriptor, subset) - subsetElement.attrib["uservalue"] = self.intOrFloat( - subset.userValue - ) - subsetsElement.append(subsetElement) - vfElement.append(subsetsElement) - self._addLib(vfElement, vf.lib, 
4) - parentElement.append(vfElement) - - def _addLib(self, parentElement: ET.Element, data: Any, indent_level: int) -> None: - if not data: - return - libElement = ET.Element("lib") - libElement.append(plistlib.totree(data, indent_level=indent_level)) - parentElement.append(libElement) - - def _writeGlyphElement(self, instanceElement, instanceObject, glyphName, data): - glyphElement = ET.Element("glyph") - if data.get("mute"): - glyphElement.attrib["mute"] = "1" - if data.get("unicodes") is not None: - glyphElement.attrib["unicode"] = " ".join( - [hex(u) for u in data.get("unicodes")] - ) - if data.get("instanceLocation") is not None: - locationElement, data["instanceLocation"] = self._makeLocationElement( - data.get("instanceLocation") - ) - glyphElement.append(locationElement) - if glyphName is not None: - glyphElement.attrib["name"] = glyphName - if data.get("note") is not None: - noteElement = ET.Element("note") - noteElement.text = data.get("note") - glyphElement.append(noteElement) - if data.get("masters") is not None: - mastersElement = ET.Element("masters") - for m in data.get("masters"): - masterElement = ET.Element("master") - if m.get("glyphName") is not None: - masterElement.attrib["glyphname"] = m.get("glyphName") - if m.get("font") is not None: - masterElement.attrib["source"] = m.get("font") - if m.get("location") is not None: - locationElement, m["location"] = self._makeLocationElement( - m.get("location") - ) - masterElement.append(locationElement) - mastersElement.append(masterElement) - glyphElement.append(mastersElement) - return glyphElement - - -class BaseDocReader(LogMixin): - axisDescriptorClass = AxisDescriptor - discreteAxisDescriptorClass = DiscreteAxisDescriptor - axisLabelDescriptorClass = AxisLabelDescriptor - axisMappingDescriptorClass = AxisMappingDescriptor - locationLabelDescriptorClass = LocationLabelDescriptor - ruleDescriptorClass = RuleDescriptor - sourceDescriptorClass = SourceDescriptor - variableFontsDescriptorClass = 
VariableFontDescriptor - valueAxisSubsetDescriptorClass = ValueAxisSubsetDescriptor - rangeAxisSubsetDescriptorClass = RangeAxisSubsetDescriptor - instanceDescriptorClass = InstanceDescriptor - - def __init__(self, documentPath, documentObject): - self.path = documentPath - self.documentObject = documentObject - tree = ET.parse(self.path) - self.root = tree.getroot() - self.documentObject.formatVersion = self.root.attrib.get("format", "3.0") - self._axes = [] - self.rules = [] - self.sources = [] - self.instances = [] - self.axisDefaults = {} - self._strictAxisNames = True - - @classmethod - def fromstring(cls, string, documentObject): - f = BytesIO(tobytes(string, encoding="utf-8")) - self = cls(f, documentObject) - self.path = None - return self - - def read(self): - self.readAxes() - self.readLabels() - self.readRules() - self.readVariableFonts() - self.readSources() - self.readInstances() - self.readLib() - - def readRules(self): - # we also need to read any conditions that are outside of a condition set. - rules = [] - rulesElement = self.root.find(".rules") - if rulesElement is not None: - processingValue = rulesElement.attrib.get("processing", "first") - if processingValue not in {"first", "last"}: - raise DesignSpaceDocumentError( - " processing attribute value is not valid: %r, " - "expected 'first' or 'last'" % processingValue - ) - self.documentObject.rulesProcessingLast = processingValue == "last" - for ruleElement in self.root.findall(".rules/rule"): - ruleObject = self.ruleDescriptorClass() - ruleName = ruleObject.name = ruleElement.attrib.get("name") - # read any stray conditions outside a condition set - externalConditions = self._readConditionElements( - ruleElement, - ruleName, - ) - if externalConditions: - ruleObject.conditionSets.append(externalConditions) - self.log.info( - "Found stray rule conditions outside a conditionset. " - "Wrapped them in a new conditionset." 
- ) - # read the conditionsets - for conditionSetElement in ruleElement.findall(".conditionset"): - conditionSet = self._readConditionElements( - conditionSetElement, - ruleName, - ) - if conditionSet is not None: - ruleObject.conditionSets.append(conditionSet) - for subElement in ruleElement.findall(".sub"): - a = subElement.attrib["name"] - b = subElement.attrib["with"] - ruleObject.subs.append((a, b)) - rules.append(ruleObject) - self.documentObject.rules = rules - - def _readConditionElements(self, parentElement, ruleName=None): - cds = [] - for conditionElement in parentElement.findall(".condition"): - cd = {} - cdMin = conditionElement.attrib.get("minimum") - if cdMin is not None: - cd["minimum"] = float(cdMin) - else: - # will allow these to be None, assume axis.minimum - cd["minimum"] = None - cdMax = conditionElement.attrib.get("maximum") - if cdMax is not None: - cd["maximum"] = float(cdMax) - else: - # will allow these to be None, assume axis.maximum - cd["maximum"] = None - cd["name"] = conditionElement.attrib.get("name") - # # test for things - if cd.get("minimum") is None and cd.get("maximum") is None: - raise DesignSpaceDocumentError( - "condition missing required minimum or maximum in rule" - + (" '%s'" % ruleName if ruleName is not None else "") - ) - cds.append(cd) - return cds - - def readAxes(self): - # read the axes elements, including the warp map. 
- axesElement = self.root.find(".axes") - if axesElement is not None and "elidedfallbackname" in axesElement.attrib: - self.documentObject.elidedFallbackName = axesElement.attrib[ - "elidedfallbackname" - ] - axisElements = self.root.findall(".axes/axis") - if not axisElements: - return - for axisElement in axisElements: - if ( - self.documentObject.formatTuple >= (5, 0) - and "values" in axisElement.attrib - ): - axisObject = self.discreteAxisDescriptorClass() - axisObject.values = [ - float(s) for s in axisElement.attrib["values"].split(" ") - ] - else: - axisObject = self.axisDescriptorClass() - axisObject.minimum = float(axisElement.attrib.get("minimum")) - axisObject.maximum = float(axisElement.attrib.get("maximum")) - axisObject.default = float(axisElement.attrib.get("default")) - axisObject.name = axisElement.attrib.get("name") - if axisElement.attrib.get("hidden", False): - axisObject.hidden = True - axisObject.tag = axisElement.attrib.get("tag") - for mapElement in axisElement.findall("map"): - a = float(mapElement.attrib["input"]) - b = float(mapElement.attrib["output"]) - axisObject.map.append((a, b)) - for labelNameElement in axisElement.findall("labelname"): - # Note: elementtree reads the "xml:lang" attribute name as - # '{http://www.w3.org/XML/1998/namespace}lang' - for key, lang in labelNameElement.items(): - if key == XML_LANG: - axisObject.labelNames[lang] = tostr(labelNameElement.text) - labelElement = axisElement.find(".labels") - if labelElement is not None: - if "ordering" in labelElement.attrib: - axisObject.axisOrdering = int(labelElement.attrib["ordering"]) - for label in labelElement.findall(".label"): - axisObject.axisLabels.append(self.readAxisLabel(label)) - self.documentObject.axes.append(axisObject) - self.axisDefaults[axisObject.name] = axisObject.default - - self.documentObject.axisMappings = [] - for mappingsElement in self.root.findall(".axes/mappings"): - groupDescription = mappingsElement.attrib.get("description") - for 
mappingElement in mappingsElement.findall("mapping"): - description = mappingElement.attrib.get("description") - inputElement = mappingElement.find("input") - outputElement = mappingElement.find("output") - inputLoc = {} - outputLoc = {} - for dimElement in inputElement.findall(".dimension"): - name = dimElement.attrib["name"] - value = float(dimElement.attrib["xvalue"]) - inputLoc[name] = value - for dimElement in outputElement.findall(".dimension"): - name = dimElement.attrib["name"] - value = float(dimElement.attrib["xvalue"]) - outputLoc[name] = value - axisMappingObject = self.axisMappingDescriptorClass( - inputLocation=inputLoc, - outputLocation=outputLoc, - description=description, - groupDescription=groupDescription, - ) - self.documentObject.axisMappings.append(axisMappingObject) - - def readAxisLabel(self, element: ET.Element): - xml_attrs = { - "userminimum", - "uservalue", - "usermaximum", - "name", - "elidable", - "oldersibling", - "linkeduservalue", - } - unknown_attrs = set(element.attrib) - xml_attrs - if unknown_attrs: - raise DesignSpaceDocumentError( - f"label element contains unknown attributes: {', '.join(unknown_attrs)}" - ) - - name = element.get("name") - if name is None: - raise DesignSpaceDocumentError("label element must have a name attribute.") - valueStr = element.get("uservalue") - if valueStr is None: - raise DesignSpaceDocumentError( - "label element must have a uservalue attribute." 
- ) - value = float(valueStr) - minimumStr = element.get("userminimum") - minimum = float(minimumStr) if minimumStr is not None else None - maximumStr = element.get("usermaximum") - maximum = float(maximumStr) if maximumStr is not None else None - linkedValueStr = element.get("linkeduservalue") - linkedValue = float(linkedValueStr) if linkedValueStr is not None else None - elidable = True if element.get("elidable") == "true" else False - olderSibling = True if element.get("oldersibling") == "true" else False - labelNames = { - lang: label_name.text or "" - for label_name in element.findall("labelname") - for attr, lang in label_name.items() - if attr == XML_LANG - # Note: elementtree reads the "xml:lang" attribute name as - # '{http://www.w3.org/XML/1998/namespace}lang' - } - return self.axisLabelDescriptorClass( - name=name, - userValue=value, - userMinimum=minimum, - userMaximum=maximum, - elidable=elidable, - olderSibling=olderSibling, - linkedUserValue=linkedValue, - labelNames=labelNames, - ) - - def readLabels(self): - if self.documentObject.formatTuple < (5, 0): - return - - xml_attrs = {"name", "elidable", "oldersibling"} - for labelElement in self.root.findall(".labels/label"): - unknown_attrs = set(labelElement.attrib) - xml_attrs - if unknown_attrs: - raise DesignSpaceDocumentError( - f"Label element contains unknown attributes: {', '.join(unknown_attrs)}" - ) - - name = labelElement.get("name") - if name is None: - raise DesignSpaceDocumentError( - "label element must have a name attribute." - ) - designLocation, userLocation = self.locationFromElement(labelElement) - if designLocation: - raise DesignSpaceDocumentError( - f'