Artificial Intelligence Policy
The journal Applied Aspects of Information Technology is committed to the principles of research integrity, transparency, reproducibility, confidentiality, and responsible innovation. The journal allows the use of generative artificial intelligence (AI) tools in research and manuscript preparation only under conditions of full disclosure, human oversight, and compliance with international publication ethics standards.
This policy is aligned with the recommendations of the Committee on Publication Ethics (COPE) and best practices adopted by leading scholarly publishers.
1. Authorship and Responsibility
- Generative AI tools cannot be listed as authors or co-authors and must not be credited with authorship responsibility.
- Full responsibility for the content, accuracy, originality, and ethical integrity of a manuscript rests solely with the human authors.
- Any text, data, figures, code, or visual materials generated with the assistance of AI must be carefully reviewed, verified, and validated by the authors, including checks for:
  - factual accuracy,
  - completeness,
  - plagiarism,
  - fabricated or false references,
  - bias or discriminatory content.
2. Mandatory Disclosure of AI Use (GAIDeT Declaration)
The journal requires full transparency regarding the use of generative AI.
All authors must disclose any use of generative AI tools during:
- the research process, and/or
- manuscript writing, editing, analysis, or preparation.
The disclosure must appear in a dedicated “Use of Artificial Intelligence” section of the paper.
The declaration must include:
- the name and version of the AI tool or service used (e.g., ChatGPT-5, Claude 3, Gemini 1.5);
- the specific tasks delegated to AI, based on the Generative AI Delegation Taxonomy (GAIDeT);
- confirmation of full human oversight and verification;
- a clear statement that AI tools are not authors and bear no responsibility for the content.
Recommended additional information (if applicable):
- period and context of use;
- key parameters or model version (if available);
- description of human review and fact-checking;
- information on data confidentiality, anonymisation, or deletion.
3. Permitted GAIDeT Task Categories
The journal follows the GAIDeT (Generative AI Delegation Taxonomy) framework. AI tools may be used, under human supervision, for the following task categories:
- Conceptualisation (idea generation, hypothesis formulation, feasibility assessment);
- Literature review (searching, systematisation, gap analysis);
- Methodology design;
- Software development and automation (code generation, optimisation);
- Data management (cleaning, validation, organisation);
- Data analysis and visualisation;
- Writing and editing (drafting, proofreading, summarising, translation);
- Ethical and social analysis (bias and risk assessment);
- Supervision and guidance (quality checks, limitation identification).
4. Data, Images, Code, and Reproducibility
- The submission of fabricated, manipulated, or hallucinated data, images, or references generated by AI is strictly prohibited.
- AI-generated images, graphics, or figures may be subject to explicit labelling requirements and confirmation of the associated usage rights.
- Manuscripts must contain sufficient methodological detail to ensure reproducibility.
- If AI is used for data analysis, coding, or visualisation, this must be clearly described in the Methods section.
- Where possible, code, workflows, and processing logs should be deposited in a trusted repository.
5. Data Confidentiality and Security
- Authors must not upload confidential, personal, commercial, unpublished, or restricted data to public or commercial AI services without legal justification and written consent.
- Authors are fully responsible for data anonymisation and compliance with data protection regulations.
6. Use of AI in Peer Review and Editorial Processes
- Editors and reviewers must not upload unpublished manuscripts or review materials to generative AI systems.
- Limited auxiliary use (e.g., language editing) must be explicitly disclosed.
- AI tools must not act as reviewers, editors, or decision-makers at any stage of the editorial process.
7. Manuscript Screening and Enforcement
- The journal may use tools to support the detection of AI-generated content, plagiarism, or misconduct.
- Automated tools are never used as the sole basis for editorial decisions.
- All assessments are conducted by human editors in accordance with COPE procedures.
In cases of policy violations, the journal may apply standard measures, including:
- requests for clarification,
- corrections,
- expressions of concern,
- retractions.
8. Prohibited Practices
The following practices are considered serious ethical violations:
- undisclosed or concealed use of generative AI;
- deliberate deception, including fabricated sources or AI “hallucinations”;
- embedding hidden prompts or instructions to manipulate AI-assisted review systems.
9. Where and How to Declare AI Use
Authors must include the disclosure in the “Use of Artificial Intelligence” section of the paper.
Examples of Acceptable Declarations
- Example 1 – AI Used
The authors acknowledge the use of generative artificial intelligence tools in the preparation of this manuscript. In accordance with the Generative AI Delegation Taxonomy, tasks related to proofreading, editing, and translation were delegated under full human supervision. ChatGPT-5 (OpenAI, June 2025 release) was used. All AI-generated outputs were reviewed, verified, and approved by the authors, who take full responsibility for the content. Generative AI tools are not listed as authors and bear no responsibility for the final results.
- Example 2 – No AI Used
The authors declare that no generative artificial intelligence tools or services were used at any stage of the research or manuscript preparation for any tasks defined in the Generative AI Delegation Taxonomy.