DETERMINANTS OF MONITORING AND EVALUATION ON PERFORMANCE OF HUMANITARIAN AND DEVELOPMENT AID ORGANIZATIONS: A CASE OF FINN CHURCH AID
Abstract
This study assessed the role of Monitoring and Evaluation (M&E) in the performance of humanitarian and development aid organizations, taking Finn Church Aid as its case. Insufficient M&E capacity continues to produce unsustainable outcomes for many projects. The general objective of the study was to establish how Monitoring and Evaluation determines the performance of humanitarian and development aid organizations, with Finn Church Aid as the case; the specific objectives were to determine how Staff Capacity, Survey and Surveillance, Feedback Mechanism and Donor Policy influence organizational performance. In pursuit of effective M&E, the study sought to give insights into how M&E influences the performance of humanitarian and development aid organizations. A descriptive survey research design was used, because some of the characteristics under study, such as perceptions, beliefs, opinions and knowledge, could only be captured through respondents' self-reports. The target population was 180 employees of Finn Church Aid's Eastern Africa Region, from which the researcher derived a sample of 90 respondents using Slovin's formula. The study collected both primary and secondary data: primary data were gathered from the sample through questionnaires, while secondary data were gathered through reviews of theoretical and empirical literature. Pilot testing used 10 questionnaires to assess the validity and reliability of the questions against Cronbach's Alpha. For inferential statistics, Pearson's product-moment correlation coefficient and regression analysis were used to link the independent variables to the dependent variable.
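The sampling and reliability steps described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's own computation: the abstract does not report the margin of error used in Slovin's formula (an error of roughly 0.075 would yield approximately the stated sample of 90 from a population of 180), and the data passed to `cronbach_alpha` would in practice be the scores from the 10 pilot questionnaires.

```python
from statistics import variance

def slovin_sample_size(population: int, margin_of_error: float) -> int:
    # Slovin's formula: n = N / (1 + N * e**2)
    return round(population / (1 + population * margin_of_error ** 2))

def cronbach_alpha(rows):
    # Cronbach's alpha for pilot data.
    # rows: one list of item scores per respondent.
    k = len(rows[0])                                   # number of items
    item_vars = sum(variance(col) for col in zip(*rows))
    total_var = variance([sum(r) for r in rows])       # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

# N = 180 matches the study's target population; the margins of error
# below are illustrative, since the abstract does not state one.
for e in (0.05, 0.075, 0.10):
    print(f"e = {e}: n = {slovin_sample_size(180, e)}")
```

With e = 0.075 the formula gives 89, close to the study's reported sample of 90; smaller margins of error require larger samples.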
Key Words: Monitoring and Evaluation, Staff Capacity, Survey and Surveillance, Feedback Mechanism and Donor Policy
DOI: http://dx.doi.org/10.61426/sjbcm.v5i4.956
This work is licensed under a Creative Commons Attribution 3.0 License.