
Back in early 2023, a group of 43 researchers from various disciplines examined the opportunities, challenges, and ethical dilemmas posed by generative AI.
Curated by Business Science Daily — peer-reviewed sources, human-verified.
The result was an opinion paper, published in the International Journal of Information Management, that quickly became a touchstone for anyone trying to make sense of generative AI.
Fast‑forward to 2026. We now live with an entire ecosystem of large language models – Claude, Gemini, Llama, Grok, and countless specialised variants. The hype has settled into everyday use. But the core dilemmas that the paper identified – bias, transparency, academic integrity, job displacement, regulatory gaps – have persisted.
What makes this article remarkable is not that it predicted every twist and turn. Of course, it did not foresee the rapid rise of multimodal AI or the fierce competitive landscape.
What it did do was lay out a multidisciplinary research agenda that has proven remarkably durable. The paper’s structure – 43 short, punchy expert contributions followed by a synthesised roadmap – makes it a perfect time capsule of that moment of collective awe and anxiety, and an excellent foundation for thinking about where we go next.
Generative AI: a multidisciplinary view
Transformative AI tools like ChatGPT are applicable across a wide range of contexts. They offer significant opportunities to enhance productivity in banking, hospitality, IT, and business activities such as management and marketing. However, they also raise profound ethical, legal, and societal challenges – including privacy, security, bias, misinformation, and the potential for misuse. The authors conclude that while ChatGPT should not be banned, it must be governed with transparency, human oversight, and updated educational and research policies.
Synthesised common themes
- Productivity enhancer: ChatGPT can automate mundane tasks, write first drafts, debug code, and answer queries, freeing humans for higher‑value work.
- Disruption to academia: Essay‑based assessments are under threat; plagiarism detection tools struggle; but the tool can also serve as a personalised tutor.
- Job displacement concerns: White‑collar roles such as copywriters, translators, customer service agents, and even some software developers may be affected.
- Misinformation & misuse: Generative AI can produce deepfakes, fake news, and propaganda at scale, with malicious actors exploiting its capabilities.
- Black‑box limitations: Lack of explainability, outdated training data (pre‑2021), and tendency to generate plausible but incorrect answers (hallucination).
- Regulatory vacuum: No clear legal frameworks for ownership, copyright, accountability, or ethical use of AI‑generated content.
Ten research propositions from the paper
- P1: Generative AI can replace some tasks performed by knowledge workers.
- P2: It can boost productivity by augmenting human capabilities.
- P3: It may become a more effective manipulation, misinformation and disinformation tool.
- P4: Performance depends heavily on data and training models.
- P5: Lack of formal and informal rules increases misuse and abuse.
- P6: Generative AI poses greater ethical dilemmas than prior technologies.
- P7: It may possess superior, subjective, and deceptive intelligence.
- P8: With natural language capabilities, it can play significant roles in business and society.
- P9: It can be superhuman or specialised depending on data and training.
- P10: Like all tools, it promises unique capabilities but requires responsible use.
Opportunities, risks, and practical challenges
Opportunities
- Automate repetitive writing and coding tasks
- Personalised tutoring and feedback for students
- 24/7 customer service (banking, travel, hospitality)
- Accelerate literature reviews and first‑draft writing for researchers
- Help non‑native English speakers improve academic writing
- Generate marketing copy, itineraries, and legal document drafts
Risks and practical challenges
- Plagiarism and academic integrity erosion
- Biased, offensive, or fabricated outputs
- Lack of explainability (“black box”)
- Outdated training data (pre‑2021, no real‑time updates)
- Spread of misinformation and deepfakes
- High computational cost and potential subscription fees
- Data privacy and security concerns
Sector‑specific impacts (summarised)
- Education: flipped classrooms, personalised tutoring, risk of cheating, need for oral exams and process‑based assessments.
- Banking & finance: automated customer support, fraud detection, regulatory text mining, but trust remains critical.
- Tourism & hospitality: dynamic itinerary building, concierge services, multilingual content, but risk of fake reviews.
- Healthcare: support for remote primary care, training augmentation, but accuracy and liability are major concerns.
- Legal services: efficient text mining of case law and regulations, but responsibility for errors is unclear.
Practical recommendations for organisations
- Do not ban ChatGPT outright – instead, develop clear usage policies and train employees on responsible use.
- Integrate AI as a co‑pilot – use it for first drafts, summarisation, and ideation, with human verification of facts.
- Update risk management frameworks (e.g., NIST AI RMF) to include generative‑AI‑specific risks.
- Invest in reskilling – focus on creativity, critical thinking, and the ability to verify AI outputs.
- Adopt transparency practices – always disclose when AI has been used to generate content.
How ChatGPT reshapes teaching, learning, and research
Fourteen of the 43 contributions focus exclusively on higher education. The consensus: traditional essay‑based assessments are no longer reliable. ChatGPT can pass MBA exams (Wharton), law exams, and even medical licensing exams. However, banning the tool is short‑sighted; instead, educators should redesign assessments to emphasise process, oral defence, and critical evaluation of AI‑generated outputs.
New pedagogical strategies recommended
- Use ChatGPT as a co‑teacher – generate first drafts and ask students to critique and improve them.
- Incorporate oral exams – have students explain their reasoning and defend their work verbally.
- Require process portfolios – track drafts, revisions, and reflections to show original thinking.
- Design authentic, local, or personal assignments – based on the student’s own experiences or local data that ChatGPT cannot access.
- Teach AI literacy – students need to understand prompt engineering, bias detection, and ethical use.
Research and publishing guidelines
Major publishers (Springer Nature, Elsevier, Taylor & Francis, Science, ICML) have updated their policies: AI tools cannot be listed as authors because they cannot take accountability for the work. Any use of ChatGPT in research must be disclosed transparently (e.g., in the methods section). Fabricated references and hallucinations remain a serious problem – all AI‑generated content must be verified by human authors.
Key research questions for education (from the paper)
- How can ChatGPT be used to improve student engagement in online and offline learning environments?
- What are the long‑term benefits and challenges of using ChatGPT in teaching and learning?
- How can ChatGPT support students with disabilities and diverse learning needs?
- How can the academic community better respond to emerging disruptive technologies that threaten traditional assessment practices?
Roadmap for future research (three thematic areas)
The authors consolidate all contributions into a detailed research agenda covering (1) knowledge, transparency, and ethics; (2) digital transformation of organisations and societies; and (3) teaching, learning, and scholarly research. Below is a selection of the most urgent questions.
Theme 1: Knowledge, transparency & ethics
- Does ChatGPT challenge fundamental assumptions in research and lead to a paradigm shift?
- How can we develop techniques to enhance the transparency and explainability of generative AI models?
- How can we assess the accuracy and verify texts generated by ChatGPT?
- What biases are introduced by the training dataset and process, and how can they be mitigated?
- What is the impact of consolidating risk management frameworks (e.g., NIST AI RMF) with ethical perspectives on ChatGPT adoption?
Theme 2: Digital transformation of organisations & societies
- How can AI‑powered language tools facilitate digital transformation in industries such as travel, finance, and marketing?
- What new business models can be created using generative AI?
- Under what conditions can AI play a role in generating genuine innovation?
- What are the optimal ways to combine human and AI agents to maximise benefits and minimise negative impacts?
- What are the implications of worker displacement by generative AI, and who is responsible for mitigation?
- How can generative AI be used to support people with disabilities and address global grand challenges (SDGs)?
Theme 3: Teaching, learning & scholarly research
- What are the appropriate ways to introduce tools like ChatGPT in curriculum design?
- Can ChatGPT provide an enhanced student learning experience, and how should we measure that?
- What are the dark sides of using ChatGPT in education (e.g., over‑reliance, loss of critical thinking)?
- What is the long‑term impact of ChatGPT on scholarly writing and research?
- What is the role of human creativity when ChatGPT is used in scholarly writing?
43 expert perspectives – a flavour
The paper contains 43 individual contributions, grouped into five categories. Below are some representative quotes and insights.
- Venkatesh: “The skills required in the world powered by ChatGPT will be different. Research assumptions will be impacted.”
- Mariani: “There is a long way before AI platforms can lead independently to meaningful innovation. At best they augment human intelligence.”
- Wade: “ChatGPT’s biggest disruption will be knowledge work productivity – especially creating competent first drafts.”
- Richter: “ChatGPT can act as a coach, innovator, and software developer in hybrid teams.”
- Wirtz: “Generative AI brings unprecedented improvements in customer service, quality and productivity simultaneously.”
- Balakrishnan et al.: “ChatGPT can build marketing campaigns, content, keyword suggestions – but precise queries are critical.”
- Buhalis: “In tourism, ChatGPT revolutionises itinerary building, concierge services, and multilingual support.”
- Mogaji et al.: “In banking, ChatGPT can automate marketing, provision, and even customer advice – but trust and regulation remain barriers.”
- Wright & Sarker: “Use IT Mindfulness – alertness to distinction, multiple perspectives, openness to novelty – to explore ChatGPT in teaching.”
- Laumer: “The digital transformation of academia is underway. Writing text may no longer be the most essential skill.”
- Viglia: “Independent thinking is what makes us better humans. ChatGPT should not kill creativity.”
- Dubey & Dennehy: “Scholarly writing is a craft developed over time. ChatGPT cannot replace rigorous research.”
- Brooks: “You cannot please all the people all the time. Transparency is key to tackling unethical uses.”
- Stahl: “Ask ‘good bot or bad bot?’ – the answer requires systematic ethical foresight, not just reactions.”
- Edwards & Duan: “Generative AI must be human‑centred, responsible, and personalised to specific contexts.”
- Larsen: “We need new ‘characteristic validities’ to evaluate creativity, not just prediction accuracy.”
Ten high‑level recommendations from the paper
- Do not ban ChatGPT – adapt assessments and teach AI literacy.
- Develop transparent policies for disclosure of AI use in research and education.
- Invest in AI risk management frameworks that include generative‑AI‑specific risks.
- Create global regulatory coordination to address copyright, bias, and accountability.
- Promote human‑AI hybrid collaboration – use AI for first drafts, humans for verification and creativity.
- Update curricula to focus on critical thinking, problem‑solving, and ethical judgement.
- Provide reskilling programs for workers whose jobs are likely to be displaced.
- Encourage the development of open, transparent, and auditable training datasets.
- Support interdisciplinary research on the long‑term societal impacts of generative AI.
- Empower citizens with digital literacy to recognise AI‑generated misinformation.
Reference
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., … & Wright, R. (2023). Opinion Paper: “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Key citations within the paper: Bender et al. (2021) – Stochastic parrots; Terwiesch (2023) – ChatGPT at Wharton; Kung et al. (2022) – USMLE performance; Mollick (2022) – ChatGPT tipping point; many others as referenced.
This summary is based on the published open‑access article. All direct quotations are attributed to the respective contributors.