<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Journal of Industrial Management Perspective</title>
    <link>https://jimp.sbu.ac.ir/</link>
    <description>Journal of Industrial Management Perspective</description>
    <atom:link href="" rel="self" type="application/rss+xml"/>
    <language>en</language>
    <sy:updatePeriod>daily</sy:updatePeriod>
    <sy:updateFrequency>1</sy:updateFrequency>
    <pubDate>Sat, 21 Mar 2026 00:00:00 +0330</pubDate>
    <lastBuildDate>Sat, 21 Mar 2026 00:00:00 +0330</lastBuildDate>
    <item>
      <title>Development of a Comprehensive Agent-Based Simulation Model for Smart Transformation of Iran’s Hotel Industry within the Tourism 4.0 Framework</title>
      <link>https://jimp.sbu.ac.ir/article_106780.html</link>
<description>Introduction and Purpose: Luxury hotels, which rely on precise interaction between staff and guests, have actively introduced technologies to enhance guest experience and satisfaction. Advances in smart technologies have significantly changed the development environment of the hotel industry and led to the emergence of smart hotels, so making full use of their capabilities and developing them further has become increasingly important. Smart hotels allow guests to register their identities, process orders, and receive room cards online; guests can also personalize the services they receive, making reception and service simpler and more satisfying. Building a smart tourism hotel requires optimizing and innovating the hotel's core business to enhance the customer experience. This research aimed to develop a comprehensive model for the smart transformation of Iran's hotel industry within the Tourism 4.0 framework using agent-based simulation. Methodology: The research approach was hybrid and included library and field stages. First, using the systematic literature review method, the findings of recent quantitative and qualitative research (2020 onwards) were reviewed and initial codes were extracted. Then, to identify neglected factors and localize the model, thematic analysis was performed on data from 13 semi-structured interviews with hotel industry experts. Combining the results of these two stages led to the identification of 5 main factors (customer, hotel, human resources, government, housekeeping) and 19 key variables. Using interpretive structural modeling, the causal and hierarchical relationships of the determinants were identified and the final conceptual model was drawn. 
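As a deliberately tiny illustration of the agent-based core, the sketch below simulates customers deciding whether to book based on a hotel's smart-service level; all agent rules and parameters are hypothetical, not the model's:

```python
import random

# Toy agent-based sketch: customers decide whether to book based on a
# hotel's smart-service level. All rules and parameters are hypothetical.

class Hotel:
    def __init__(self, rooms, smart_level):
        self.rooms = rooms              # total rooms
        self.smart_level = smart_level  # 0..1 degree of smart services
        self.occupied = 0

    def has_vacancy(self):
        return self.rooms > self.occupied

class Customer:
    def __init__(self, tech_affinity):
        self.tech_affinity = tech_affinity  # 0..1 taste for smart services

    def books(self, hotel, rng):
        # booking probability rises with the match between the customer's
        # tech affinity and the hotel's smart-service level (hypothetical)
        p = 0.3 + 0.6 * self.tech_affinity * hotel.smart_level
        return hotel.has_vacancy() and p > rng.random()

def simulate(days, n_customers, smart_level, seed=0):
    rng = random.Random(seed)
    hotel = Hotel(rooms=100, smart_level=smart_level)
    person_nights = 0
    for _ in range(days):
        hotel.occupied = 0  # rooms free up each simulated night
        for _ in range(n_customers):
            guest = Customer(tech_affinity=rng.random())
            if guest.books(hotel, rng):
                hotel.occupied += 1
                person_nights += 1
    return person_nights

low = simulate(days=30, n_customers=150, smart_level=0.2)
high = simulate(days=30, n_customers=150, smart_level=0.9)
```

Under these toy rules, raising the smart-service level raises expected person-nights, which is the kind of scenario lever the infrastructure scenarios in such models manipulate.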
The model was implemented in an agent-based simulation environment and evaluated with internal (expert confirmation) and external (comparison with real data) validation, which showed a deviation of less than 0.2 percent. Findings: The results of this study increase the awareness of managers and practitioners in hotel management of the smart hotel industry and agent-based models. Based on the combined interpretive structural modeling and weighting results, the factors "Customer" and "Hotel (Reception/Booking/Reservation)", located in the linkage area, have high influence and high dependence: any action on these variables causes changes in the other variables. The variable "Government" lies between the autonomous and dependent areas, with low influence and medium dependence, while the variables "Room and Reception Services (Housekeeping)" and "Human Resources" fall in the neutral area. It was also found that "Room and Reception Services (Housekeeping)", "Human Resources", and "Government" are at the highest level and are most influenced by the "Customer" and "Hotel (Reception/Booking/Reservation)" factors. Conclusions: Scenario analysis based on two key variables, "technological infrastructure" and "expert human resources", indicated that improving both simultaneously has the greatest impact on achieving the goal of providing 65 million person-nights of accommodation by 2025. The main innovation of the research is the integration of the three flows of materials, information, and finance in the Tourism 4.0 framework and the provision of a native, comprehensive model for strategic decision-making in the hotel industry that can help improve service quality, operational efficiency, and competitiveness in the smart tourism market.</description>
    </item>
    <item>
      <title>Presenting a mathematical approach for tool life modeling based on the Weibull distribution and dependent on machining conditions</title>
      <link>https://jimp.sbu.ac.ir/article_106812.html</link>
<description>Introduction: Currently, machining processes constitute a vital part of global manufacturing, and their importance can be observed in the financial flows resulting from their use. One of the fundamental issues in using machining processes for product manufacturing is tool wear. To date, various studies with diverse assumptions have analyzed wear characteristics under different conditions to satisfy various objectives. Traditional models for analyzing tool life and wear, which are often based on deterministic equations, do not consider the variations that occur in cutting processes; for this reason, the actual tool life rarely matches the values these methods predict. In recent years, increased attention has been paid to the use of statistical distributions for predicting tool life, among which the Weibull distribution is of particular importance. The main challenge of these approaches is the accurate estimation of the tool life distribution function from real data. Moreover, as machining conditions change, the tool life distribution function may also change, making the estimation of its parameters more challenging. Additionally, although cutting tool life fits the Weibull distribution well, estimating the parameters of this distribution is complex because of its specific characteristics. Method: In this research, a hybrid method is presented that uses a design of experiments based on the Box-Behnken design and applies a mathematical transformation to real tool life data to determine the parameters of the tool life distribution. In this method, the relationship between the tool life distribution parameters and the machining conditions, including spindle speed, feed rate, and depth of cut, can be described by a polynomial equation. 
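As a sketch of the fitting step, the snippet below uses a one-dimensional golden section search to pick the Weibull scale parameter that minimizes the sum of squared errors against a median-rank empirical CDF; the lifetimes and the fixed shape value are illustrative, not the paper's data:

```python
import math

# Golden section search (GSS) sketch for fitting a Weibull tool-life
# distribution by least squares on the empirical CDF.
# The lifetimes below are illustrative, not the paper's measurements.

GOLDEN = (math.sqrt(5) - 1) / 2  # ~0.618

def weibull_cdf(t, shape, scale):
    return 1 - math.exp(-((t / scale) ** shape))

def sse(lifetimes, shape, scale):
    # median-rank empirical CDF: F_i = (i - 0.3) / (n + 0.4)
    n = len(lifetimes)
    total = 0.0
    for i, t in enumerate(sorted(lifetimes), start=1):
        f_emp = (i - 0.3) / (n + 0.4)
        total += (weibull_cdf(t, shape, scale) - f_emp) ** 2
    return total

def gss_scale(lifetimes, shape, lo, hi, tol=1e-4):
    # one-dimensional golden section search over the scale parameter,
    # assuming the SSE is unimodal on [lo, hi]
    a, b = lo, hi
    while (b - a) > tol:
        c = b - GOLDEN * (b - a)
        d = a + GOLDEN * (b - a)
        if sse(lifetimes, shape, c) > sse(lifetimes, shape, d):
            a = c
        else:
            b = d
    return (a + b) / 2

# toy tool-life data (minutes) under one machining condition
lifetimes = [22.1, 25.4, 27.9, 30.2, 33.5, 36.8, 40.1]
scale_hat = gss_scale(lifetimes, shape=3.0, lo=1.0, hi=100.0)
```

In the full methodology the resulting shape and scale estimates at each Box-Behnken design point would then be regressed on spindle speed, feed rate, and depth of cut.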
In this method, the golden section search technique is used to fit the obtained data to the appropriate tool life distribution. Finally, the proposed methodology is implemented on a case study, and the results are reported. Results and discussion: After obtaining the values of the shape and scale parameters of the Weibull distribution at each level of the experiments designed by the Box-Behnken methodology, the relationship of these parameters with the machining conditions can be modeled using a full quadratic function. In this paper, the shape and scale parameters of the Weibull distribution are reported at each experimental level, followed by the value of the SSE function obtained in the optimization process using the GSS algorithm. The results indicate desirable error values for the proposed methodology. Furthermore, with its implementation, the R2 value for the shape parameter is 92.52% and for the scale parameter 96.80%. The strong correlation between the full quadratic model for each Weibull distribution parameter and the data obtained from the life of cutting tools indicates the adequacy of the proposed methodology in practical applications. Conclusions: In this paper, a hybrid methodology was developed to achieve two practical objectives, using design of experiments, mathematical transformations of the tool life data, and the golden section search algorithm. The first goal is to estimate the parameters of the Weibull distribution under specific machining conditions, thereby determining the distribution of cutting tool life under those conditions. The second goal is to identify changes in the tool life distribution as machining conditions change. 
For this purpose, in the presented methodology, the relationship between the Weibull distribution parameters and machining conditions is determined as a full quadratic model. Finally, the proposed method is implemented on a milling process with specific information, and the obtained results are reported.</description>
    </item>
    <item>
      <title>An integrated machine learning and QFD method to assess risk mitigation strategies</title>
      <link>https://jimp.sbu.ac.ir/article_106779.html</link>
<description>Introduction and Purpose: Plastic waste management has become one of the most critical environmental challenges of the modern era, requiring efficient and resilient supply chains. The recycling supply chain of plastics is exposed to multiple uncertainties, such as fluctuations in the quantity and quality of recyclable inputs, operational instabilities, and process vulnerabilities that threaten its sustainability. In this context, identifying and prioritizing risk factors and formulating preventive strategies to mitigate them are essential. The main purpose of this study is to develop an integrated framework that combines machine learning (ML) algorithms with the fuzzy Quality Function Deployment (QFD) technique to identify key risk factors and prioritize preventive strategies for risk reduction in the plastic recycling supply chain. Methodology: This research adopts a data-driven approach that leverages machine learning algorithms for risk assessment. The case study was conducted at Northe Shimi Plast Company, one of the largest plastic recycling complexes. The study population consisted of seven industrial experts, each with more than three years of practical experience in plastic waste recycling operations. Through a systematic literature review and expert validation, eleven risk factors and eight preventive strategies were identified. Feature importance techniques from decision tree and random forest algorithms were employed to calculate the relative weights of the risk factors. These weights were then integrated into the fuzzy QFD framework to rank the preventive strategies. Data analysis and model implementation were carried out using MATLAB software. Findings: The machine learning analysis revealed that input material risks are the most critical threats in the plastic recycling supply chain, followed by recycling process risks and health and safety risks. 
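The weighting step can be sketched as follows: tree-based feature importances serve as relative weights over risk factors (the study itself works in MATLAB; this scikit-learn version, with synthetic data and placeholder factor names, only illustrates the idea):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of the weighting step: random-forest feature importances used as
# relative weights for risk factors. Data and names are synthetic
# placeholders, not the study's expert data.

rng = np.random.default_rng(0)
risk_factors = ["input_material", "process", "health_safety", "logistics"]

# synthetic observations: risk-factor scores and a disruption label,
# constructed so the first factor is the most predictive
X = rng.random((200, len(risk_factors)))
y = (0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.random(200) > 0.5).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
weights = model.feature_importances_ / model.feature_importances_.sum()
ranking = sorted(zip(risk_factors, weights), key=lambda p: -p[1])
```

The normalized weights can then feed a fuzzy QFD matrix in place of subjectively assigned importances, which is the bias-reduction argument the abstract makes.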
The fuzzy QFD analysis further indicated that buyer–supplier collaboration represents the most effective preventive strategy for risk mitigation, followed by supply chain transparency and supply chain agility. Buyer–supplier collaboration enhances supply chain resilience through information sharing, joint planning, and contingency strategy development. Meanwhile, digital technologies such as the Internet of Things (IoT), RFID tags, and GPS tracking contribute significantly to improving visibility and real-time risk monitoring across the supply chain. Conclusion: The results demonstrate that the proposed integrated ML–QFD approach provides a powerful, data-driven tool for risk management and decision-making in recycling supply chains. By automating weight estimation and reducing subjective bias, the model improves the precision and efficiency of the decision-making process. Moreover, the interpretability of tree-based algorithms allows managers to understand the logic behind the model's outputs and apply its insights in real-world operations. The proposed framework not only strengthens risk management capabilities in the plastic recycling industry but also offers transferability to other recycling sectors, including electronic waste, metal, and rubber recycling. Nevertheless, the model's effectiveness depends on the availability of sufficient quantitative data. Future research is encouraged to expand the proposed approach by integrating large language models (LLMs) for feature identification, applying operations research techniques such as DEMATEL or network analysis to explore interrelationships among risks, and using objective weighting methods like entropy, CRITIC, or SECA to enhance precision. Integrating this framework with intelligent, data-driven decision tools could further advance predictive risk management and support sustainable supply chain development.</description>
    </item>
    <item>
      <title>Evaluating Reliability Metrics of Artificial Intelligence Systems in Healthcare: An SEM-FCM Approach</title>
      <link>https://jimp.sbu.ac.ir/article_106801.html</link>
      <description>Introduction: As the field of computer science evolves, and with the emergence of concepts such as artificial intelligence (AI), machine learning (ML), and deep learning (DL), significant opportunities for achieving smart urban systems have been created. These transformative technologies are reshaping numerous industries, particularly healthcare, where their impact has been profound. AI-powered tools are now employed to manage patient medical histories, conduct digital consultations, and optimize drug administration. However, despite their vast potential, these tools are not without limitations. A significant challenge faced by these systems is the low accuracy of decision-making outputs, which hinders their effective implementation in critical areas. To address these issues, the present study evaluates reliability metrics specific to AI systems in healthcare. By focusing on these metrics, the research identifies key factors that improve trustworthiness, using the Fuzzy Cognitive Mapping (FCM) approach.&#13;
Methods: The study began with the extraction of reliability metrics through a detailed literature review and interviews with healthcare professionals, ensuring that the metrics were both comprehensive and grounded in real-world applications. Subsequently, using the Delphi method, the critical criteria for evaluating the reliability of artificial intelligence systems in the targeted domain were identified. In the next step, a causal model was developed based on a review of the relevant literature. This model was then validated using the Structural Equation Modeling (SEM) approach. Following that, causal relationships were derived using the validated SEM model and expert opinions, and the interactions among the identified criteria were analyzed through the application of the Fuzzy Cognitive Mapping (FCM) method. This method provided a clear understanding of which factors were most influential and which were most impacted, offering deeper insights into AI system reliability. For data collection, a range of questionnaires, including Likert-scale, AHP, and FCM-based tools, were distributed to participants. The collected data were then analyzed using SmartPLS software, a powerful tool for path analysis and structural equation modeling.&#13;
Findings: The findings reveal that "continuous monitoring of generated outcomes and system reconfiguration" is the most effective metric for evaluating AI system reliability in healthcare. This underscores the importance of ongoing oversight and adaptability to maintain system accuracy and relevance. Another crucial finding identifies the "use of non-deterministic algorithms" as the most impacted metric, highlighting the need for flexible and probabilistic methods in AI systems. In total, six primary metrics were identified and evaluated:&#13;
&#13;
Trusted and homogeneous data to ensure consistent results.&#13;
Data security and privacy to protect sensitive medical information.&#13;
Weekly updates to improve system performance.&#13;
Use of non-deterministic algorithms to enhance adaptability.&#13;
Stakeholder evaluation structures for transparency and accountability.&#13;
Continuous monitoring of results to identify and address emerging issues.&#13;
&#13;
These metrics collectively form a comprehensive framework for enhancing AI system reliability in healthcare.&#13;
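To make the FCM mechanics concrete, the sketch below iterates a six-concept map (one concept per metric above) to a steady state and ranks concepts by out-degree and in-degree; the weight matrix is purely illustrative, not the study's expert-elicited map:&#13;

```python
import math

# Minimal fuzzy cognitive map (FCM) sketch: each concept is updated through
# a sigmoid of the weighted influence of the others until the state
# stabilizes. Concepts follow the six metrics; weights are illustrative.

concepts = ["trusted_data", "security_privacy", "updates",
            "nondeterministic_algs", "stakeholder_eval", "monitoring"]

# W[i][j]: causal influence of concept i on concept j (illustrative)
W = [
    [0.0, 0.2, 0.0, 0.3, 0.0, 0.2],
    [0.2, 0.0, 0.0, 0.2, 0.1, 0.0],
    [0.3, 0.0, 0.0, 0.3, 0.0, 0.1],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.2],
    [0.0, 0.2, 0.2, 0.2, 0.0, 0.0],
    [0.3, 0.1, 0.4, 0.5, 0.3, 0.0],
]

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + math.exp(-lam * x))

def run_fcm(state, steps=100, eps=1e-5):
    for _ in range(steps):
        new = [sigmoid(state[j] + sum(state[i] * W[i][j]
                                      for i in range(6)))
               for j in range(6)]
        done = eps > max(abs(a - b) for a, b in zip(new, state))
        state = new
        if done:
            break
    return state

final = run_fcm([0.5] * 6)
out_deg = [sum(row) for row in W]                          # influence exerted
in_deg = [sum(W[i][j] for i in range(6)) for j in range(6)]  # influence received
most_influential = concepts[max(range(6), key=lambda i: out_deg[i])]
most_impacted = concepts[max(range(6), key=lambda j: in_deg[j])]
```

With these toy weights, the monitoring concept has the largest out-degree and the non-deterministic-algorithms concept the largest in-degree, mirroring the influential/impacted pattern reported above.&#13;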
Conclusion: This study provides a detailed examination of AI system reliability in healthcare, emphasizing the critical role of continuous monitoring and regular updates in improving accuracy and trustworthiness. Moreover, ensuring data security and privacy is highlighted as essential for building confidence in these systems. The findings serve as a practical guide for AI developers in healthcare, helping them design reliable and efficient tools. Additionally, the study underscores the broader benefits of these improvements, such as enhanced medical service quality and increased patient trust in AI systems. Ultimately, adopting innovative approaches and focusing on the identified key components will drive significant advancements and transformations in healthcare delivery.</description>
    </item>
    <item>
      <title>The design of a model for the application of Fourth Industrial Revolution technologies in the humanitarian supply chain</title>
      <link>https://jimp.sbu.ac.ir/article_106815.html</link>
<description>Introduction and Objectives: The rapid advancements associated with the Fourth Industrial Revolution, including the Internet of Things (IoT), artificial intelligence (AI), blockchain, big data analytics, robotics, and 3D printing, have created new opportunities to enhance efficiency, transparency, and responsiveness in humanitarian supply chains. However, the complex nature of relief operations, scarcity of resources, lack of digital infrastructure, and limited inter-organizational coordination have made the adoption of these technologies in crisis environments particularly challenging. Moreover, the literature indicates that most existing studies adopt isolated, technology-specific approaches, while comprehensive and integrated models explaining how Industry 4.0 technologies can be deployed in real crisis contexts remain limited. In this context, the present study aims to develop a conceptual model that systematically and contextually explains the influencing factors, challenges, implementation strategies, and potential outcomes of adopting Fourth Industrial Revolution technologies within humanitarian supply chains. Methods: This study is applied in purpose and qualitative-exploratory in methodology, utilizing a grounded theory approach. Participants included eighteen experts comprising humanitarian logistics specialists, technology professionals, managers of government and non-governmental relief organizations, and crisis management officials, selected using purposive and snowball sampling. Data were collected through semi-structured interviews, fully transcribed, and analyzed using the three-stage coding process (open, axial, and selective coding) supported by MAXQDA software. 
Research validity was ensured through participant checking, independent coding by multiple researchers, and the application of credibility, transferability, dependability, and confirmability criteria. Theoretical saturation was achieved at the seventeenth interview. Findings: Data analysis identified a set of causal conditions including the need for enhanced transparency, improved inter-organizational coordination, faster relief operations, and reduced human error. Contextual conditions such as weak communication infrastructure, unstable data networks, limited access to digital equipment, financial constraints, and the absence of shared standards among humanitarian organizations were also identified. Additionally, intervening factors such as cultural resistance, insufficient digital skills, cybersecurity threats, and the technical complexity of emerging technologies were found to significantly influence the implementation process. The main strategies extracted from the data include developing technical infrastructures, creating modular and cloud-based platforms, strengthening inter-organizational collaboration, specialized staff training, establishing AI-based predictive systems, and deploying IoT, edge computing, robotics, and 3D printing technologies. The findings further revealed that these technologies not only operate independently but also function as components of an integrated "data cycle": IoT generates data; cloud and edge computing process the data; AI analyzes it; and blockchain ensures its security and transparency. This cycle forms the technological backbone for operating effectively in high-uncertainty crisis environments. The positive outcomes of successful technology adoption include improved supply chain resilience, reduced response time, enhanced resource traceability, reduced administrative corruption, and increased efficiency in resource allocation. 
However, potential negative consequences, such as over-reliance on technology, exposure to cyberattacks, and increased maintenance costs, were also identified. Conclusion: The proposed conceptual model demonstrates that the effective implementation of Industry 4.0 technologies in humanitarian supply chains requires an integrated framework aligned with real-world crisis conditions. The model's distinction between the preparedness phase (emphasizing prediction, planning, and infrastructure creation) and the response phase (emphasizing real-time monitoring, operational coordination, and live data analysis) enhances its practical applicability. This model can serve as a strategic guideline for policymakers, humanitarian organizations, and technology designers seeking to advance digital transformation within humanitarian operations.</description>
    </item>
    <item>
      <title>Designing a health insurance fraud detection system using artificial intelligence algorithms</title>
      <link>https://jimp.sbu.ac.ir/article_106419.html</link>
<description>Introduction: With the rapid expansion of healthcare services, fraud in health insurance systems has become a serious challenge. This study aims to design and develop an intelligent and modular framework for fraud detection in health insurance. The framework is designed to identify abusive and fraudulent behaviors regardless of the type of service or actor involved, and to adapt effectively to dynamic and complex environments. The primary objective is to provide a flexible solution that enhances the accuracy of fraud detection while reducing human error in the decision-making process. Methods: The proposed framework consists of four key modules. First, a knowledge-based module leverages insights from insurance and medical experts to build a simulation framework for fraud detection, enabling the medical-insurance team to describe, analyze, and visualize abnormal behaviors based on the actions of different actors. Second, a two-stage data warehouse is designed to efficiently process large volumes of insurance data. In the first-stage warehouse, the ETL (extract, transform, load) process ingests claims data, cleanses data quality issues, and removes inconsistencies and errors to prepare the data for the feature extraction required for fraud detection. In the second-stage warehouse, in collaboration with insurance and medical experts, relevant features for fraud detection are extracted and selected; on this basis, a list of twenty key features was extracted and documented, covering information about actors, products/services, and related features for each type of fraud. 
Third, the fraud detection engine is based on a proposed algorithm called K-IF, which first clusters data using Isolation Forest (IF) and then identifies suspicious samples using K-Means. Fourth, visualization tools and a dynamic management dashboard are developed to support interactive analysis and real-time updates by users. Results and discussion: Experimental results on labeled datasets demonstrate that the proposed algorithm, by leveraging the discriminative power of IF and the clustering precision of K-Means, achieves better performance across multiple metrics, and better computational times, than common algorithms such as LOF, OCSVM, EE, DBSCAN, AE, and K-Means. Furthermore, results from applying the proposed algorithm to real data from a health insurance company indicate that this approach, with reduced dependence on the contamination rate and improved accuracy in detecting edge cases, demonstrates strong anomaly-detection capabilities. Ultimately, the framework has been developed as a software package for private insurance companies, offering advanced analytical tools that significantly enhance decision-making and reduce the need for human intervention. Conclusion: This study highlights that success in detecting insurance fraud is directly tied to the quality and precision of features extracted from healthcare transaction data. The synergy between demographic, financial, and service-related data plays a crucial role in increasing the sensitivity of machine learning models to anomalous behaviors. However, the lack of accurate and structured data remains a major challenge in developing effective fraud detection software. The developed framework, designed as a software package for managing health insurance claims, integrates machine learning models, a modular architecture, and a modern user interface to deliver high scalability and rapid responsiveness to organizational needs. 
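A loose sketch of such a two-stage screen, pairing Isolation Forest scores with a K-Means minority cluster on synthetic claim features (the actual K-IF algorithm, features, and thresholds are the paper's own):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

# Two-stage anomaly screen in the spirit of the K-IF engine described
# above, on synthetic data: the K-IF details are the paper's own.

rng = np.random.default_rng(1)
# synthetic claims: [claim amount, services per visit]
normal = rng.normal(loc=[100.0, 3.0], scale=[15.0, 1.0], size=(300, 2))
fraud = rng.normal(loc=[400.0, 12.0], scale=[20.0, 1.0], size=(10, 2))
claims = np.vstack([normal, fraud])

# stage 1: isolation-forest anomaly scores (lower means more anomalous);
# flag the 5 percent most anomalous claims
iso = IsolationForest(random_state=0).fit(claims)
scores = iso.score_samples(claims)
iso_flag = np.quantile(scores, 0.05) >= scores

# stage 2: K-Means clustering; treat the minority cluster as suspicious
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(claims)
minority = int(np.argmin(np.bincount(km.labels_)))
km_flag = km.labels_ == minority

# a claim is suspect only if both stages agree
suspect = np.logical_and(iso_flag, km_flag)
```

On this toy data, the ten injected outliers are the only claims flagged by both stages; requiring agreement between the two detectors is what reduces dependence on a single contamination-rate guess.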
It is recommended that insurance company managers adopt this solution as part of their digital strategy for claims management. By integrating with existing systems and utilizing secure databases and interactive dashboards, they can achieve improved efficiency, greater transparency, and reduced fraud-related costs.</description>
    </item>
    <item>
      <title>A Novel Hybrid Machine Learning Model Based on Deep Learning for Predicting Recruitment Decisions</title>
      <link>https://jimp.sbu.ac.ir/article_106819.html</link>
<description>Introduction and Objectives: In today's highly competitive environment, recruitment decisions can no longer rely on human judgment alone. The increasing volume of applicant data and the complexity of searching candidate attributes have made high precision in personnel selection a necessity, and artificial intelligence (AI) and machine learning (ML) have become a strategic necessity for organizations. Classical ML models, such as decision trees and logistic regression, give acceptable results but are greatly limited when applied to imbalanced datasets, complex data structures, and high accuracy requirements. This study aims to develop a hybrid machine learning model that draws on the strengths of both neural networks and classical algorithms, delivering a powerful, accurate, and interpretable solution for predicting recruitment outcomes.&#13;
Methods: The proposed model was developed using a multi-layer stacking architecture, in which a Deep Neural Network (DNN) is combined with four high-performing base learners: Random Forest, Gradient Boosting, LightGBM, and CatBoost. XGBoost was used as the meta-learner to produce the final prediction from the outputs of these base models. To handle the class imbalance problem, the NearMiss undersampling technique was applied, and the Tree-structured Parzen Estimator (TPE) algorithm provided as part of the Optuna framework was used for hyperparameter optimization. Additionally, Recursive Feature Elimination with Cross-Validation (RFECV) was used for feature selection to find the variables most important to hiring decisions.&#13;
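The stacking architecture can be sketched with scikit-learn stand-ins (an MLP plays the role of the DNN, and logistic regression stands in for the XGBoost meta-learner; LightGBM, CatBoost, NearMiss undersampling, TPE tuning, and RFECV are omitted), on synthetic data rather than the recruitment dataset:&#13;

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stacking sketch with stand-ins: an MLP for the DNN and logistic
# regression for the XGBoost meta-learner. Synthetic, mildly imbalanced
# data, not the 1,500-sample recruitment dataset.

X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

base_learners = [
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("dnn", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=0)),
]
# cv=5: base-learner predictions for the meta-learner come from
# out-of-fold cross-validation, the core idea of stacking
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_tr, y_tr)
acc = accuracy_score(y_te, stack.predict(X_te))
```

The out-of-fold construction is what lets the meta-learner correct for base-learner overconfidence rather than memorize their training-set fit.&#13;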
Findings: The proposed hybrid model was evaluated on a dataset of 1,500 samples against 16 well-known machine learning models. Results indicated that the proposed model led on all key performance metrics, including accuracy, precision, recall, and F1 score, achieving 92.47% accuracy and an F1 score of 92.12%. Although some other models, such as CatBoost and LightGBM, also scored well, none exceeded the metrics reported for the proposed model. Likewise, the feature importance assessment of the same dataset using XGBoost showed that recruitment strategy, education level, and interview score were the major predictors of final hiring decisions. These findings not only improved model performance but are also valuable for HR decision-makers examining the policies and criteria used in recruitment.&#13;
Conclusion: This research develops a hybrid machine learning model that seamlessly combines classical algorithms and deep learning in a stacked architecture, providing an advanced and highly effective structure for accurately predicting hiring outcomes. The model achieved both statistical superiority in benchmark comparisons and practical benefits. These findings imply that such hybrid models can reshape intelligent HR systems by making candidate evaluation faster, fairer, and more data-driven. In addition, feature analysis gives HR managers focused, evidence-based feedback on their recruitment criteria. Future work involving larger datasets and unstructured data, such as resumes and interview videos, coupled with explainability tools such as SHAP or LIME, is encouraged to add transparency and build organizational trust in AI-based decision-making systems.</description>
    </item>
    <item>
      <title>Designing and compiling a model of key success factors in an industrial estates company (Case study: Kermanshah Province)</title>
      <link>https://jimp.sbu.ac.ir/article_106848.html</link>
<description>Introduction and Objectives: The success of industrial estates is influenced by a variety of factors. These factors play a vital role in improving performance and ensuring the long-term sustainability of industrial estates, and they are essential components of sustainable economic growth and the preservation of natural resources and the environment. Industrial estates are therefore known not only as centers of production and employment but also as key and effective drivers of regional and national economic development, contributing significantly to the realization of the country's economic development goals. This research aims to identify the key success factors of industrial estates in Kermanshah Province and to analyze the role of these factors in economic growth, increasing employment, and improving the country's competitiveness.&#13;
Methods: This applied research was conducted with a descriptive-analytical, qualitative approach. Data collection was carried out through semi-structured interviews with academic professors with relevant scientific expertise and experience, experts from the Kermanshah Province Industrial Estates Company, and industry owners and business activists based in Kermanshah Province's industrial estates, as well as through documents and texts related to the topic. Sampling continued until theoretical saturation was reached, with 17 experts participating. Data analysis was performed using the content analysis method based on Braun and Clarke's six-step model, which included coding, generating themes, and reviewing themes. Reliability was confirmed with a kappa coefficient of 0.75, and the validity of the study was ensured through a researcher-sensitivity strategy. The findings were also confirmed by academic experts and specialists in the field.&#13;
Findings: Analysis of the qualitative data from the 17 interviews yielded 118 initial codes; after merging and integrating similar themes, 42 basic themes were identified. These basic themes were carefully coded, and related and distinct categories were classified into organizing themes. The findings showed that the 42 basic themes fell into 13 organizing themes related to key success factors in the Industrial Estates Company. In the final stage, to identify the main components of the key success factors, the organizing themes were grouped into 5 general and comprehensive categories: macro-management and strategic leadership of the organization based on knowledge and specialized competencies; strategic innovation and sustainable industrial development; environmental adaptation and active participation of stakeholders; optimal allocation of resources and development of infrastructure; and organizational culture and intra-organizational communication. Together, these findings comprehensively identify the key success factors of the Kermanshah Province Industrial Estates Company.&#13;
Conclusion: Accurate identification of these key factors allows practical and effective solutions to be proposed for improving the business environment in the company. Improving the business environment through these solutions can pave the way for attracting more investors, increasing production capacity, and consequently raising the level of employment in the province. The results of this part of the research therefore play an important role in improving the performance and sustainable development of the Kermanshah Province Industrial Estates Company; they can serve as a basis for management decisions and development policies, helping the company's managers improve performance, achieve success, and implement management strategies.</description>
    </item>
    <item>
      <title>Evaluation of the Sustainability of Iranian Ports Based on the Parsimonious Best-Worst Method (PBWM): Challenges and Opportunities for Sustainable Economic Development</title>
      <link>https://jimp.sbu.ac.ir/article_106900.html</link>
      <description>Introduction: Ports, as key nodes in global logistics networks, are constantly evolving and adapting to developments in global trade and changes in related sectors such as shipping. In recent years, challenges such as growing environmental awareness, pressure for social responsibility, and the necessity of sustainable economic practices have placed additional demands on ports. Iranian ports, given their strategic geopolitical position and vital role in international transportation, are of particular significance and play a key role in the country’s economic growth and development. The development of port infrastructure, the application of modern technologies in operations management, improvements in energy efficiency, and the enhancement of trade interactions with neighboring countries create significant opportunities for sustainable economic development. Furthermore, the development of port-related industries such as logistics, transportation, and free trade zones can increase employment, foreign investment, and economic growth. This study examines the sustainability of 14 major Iranian ports across all three dimensions of sustainability (economic, environmental, and social) and proposes strategies for the optimal use of available capacities to achieve sustainable economic development.
Methods: In this study, the Parsimonious Best-Worst Method (PBWM) is used to assess the sustainability of the ports. Like the classical BWM, this approach mitigates equalization bias and anchoring bias by identifying the best and worst attributes through pairwise comparisons. Its advantage over the classical version is that, whereas the traditional BWM can be applied to at most nine criteria or alternatives, PBWM accommodates cases involving more than nine. In such situations, reference alternatives can be selected, and pairwise comparisons are then conducted based on these references.
Results and discussion: The results indicate that, among the 14 ports examined, four Iranian ports performed favorably across all three dimensions of sustainability. This demonstrates the capacity of these ports to manage resources effectively while striking a balance between economic development and environmental preservation. Positive engagement with local communities and attention to social responsibilities have also played a significant role in their performance. In contrast, three Iranian ports were ranked at the lower end as a result of shortcomings such as insufficient investment, ineffective management, or inadequate attention to environmental and social issues. These ports consequently require a reassessment of their policies and the adoption of innovative strategies to improve their current situation and enhance their competitiveness.
Conclusions: This study, by evaluating the sustainability of 14 major ports in Iran, emphasizes the importance of revisiting infrastructure, enhancing management processes, and adopting innovative strategies to improve sustainable competitiveness. These efforts could strengthen the role of Iranian ports in the global trade network and contribute to national and regional economic development, leading to greater productivity. Considering the environmental and social dimensions in port development can help strike a balance between economic growth and environmental preservation, thereby contributing to the long-term sustainability of ports. Therefore, it is recommended that policymakers and port managers adopt comprehensive and sustainable strategies to enhance port performance. By doing so, they can secure a competitive position in the global arena and contribute to the sustainable development of the country.</description>
    </item>
  </channel>
</rss>
