In modern collaborative multi-Automated Guided Vehicle (AGV) systems, vehicles execute both mission-critical, process-related operations and purely computational tasks such as collision avoidance. This work investigates the problem of joint inter-AGV task placement and intra-AGV computational resource allocation in MEC-enabled multi-AGV environments. To address this challenge, a two-step strategy is proposed that maximizes the number of scheduled and completed tasks across multiple AGVs while ensuring fair and efficient resource use within each AGV. First, the inter-AGV task placement problem is solved by dynamically selecting from a catalog of deep reinforcement learning (DRL) models trained for varying numbers of AGVs; training time for these models is reduced threefold by using datasets generated by existing optimization solvers, and transfer learning reduces it by up to a further 51%. Second, a multi-agent deep reinforcement learning (MADRL)-based collaborative protocol for dynamic intra-AGV resource allocation (MACP-DRA) is proposed, allowing AGVs to adjust computational resources dynamically. It incorporates a minimum guaranteed share strategy to ensure fair resource distribution while optimizing performance under dynamic workloads. Compared to existing MADRL approaches, MACP-DRA improves conflict resolution efficiency while maintaining low computational cost. Evaluation results demonstrate that the proposed inter-AGV scheduling strategy approaches optimal performance while achieving a superior trade-off between decision time and task completion rate. Compared to a multi-agent DRL baseline, MACP-DRA reduces resource conflicts by 54.9%, task processing delays by 35.7%, and resource underutilization by 9.93%, while incurring minimal computational and energy overhead.
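
To make the catalog-based placement step concrete, the sketch below illustrates one plausible realization: a set of pretrained placement policies keyed by fleet size, with the matching policy selected at decision time. All names, dimensions, and network shapes here are assumptions for illustration only; the paper's actual state encoding, architectures, and solver-warm-started training pipeline are not reproduced.

```python
# Hypothetical sketch of catalog-based DRL model selection for
# inter-AGV task placement. STATE_DIM_PER_AGV, NUM_ACTIONS, and the
# MLP shape are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn

STATE_DIM_PER_AGV = 8   # assumed per-AGV feature size (load, queue length, ...)
NUM_ACTIONS = 16        # assumed number of candidate placements per decision

def make_policy(num_agvs: int) -> nn.Module:
    """One policy network per supported fleet size. In practice the
    weights would be loaded from the pretrained catalog."""
    return nn.Sequential(
        nn.Linear(num_agvs * STATE_DIM_PER_AGV, 64),
        nn.ReLU(),
        nn.Linear(64, NUM_ACTIONS),
    )

# Catalog of models for varying numbers of AGVs (freshly initialized
# stand-ins here; the paper trains these offline).
catalog = {n: make_policy(n) for n in (2, 4, 8)}

def place_task(fleet_state: torch.Tensor, num_agvs: int) -> int:
    """Select the policy matching the current fleet size and return a
    greedy placement action for the incoming task."""
    policy = catalog[num_agvs]
    with torch.no_grad():
        logits = policy(fleet_state.flatten())
    return int(torch.argmax(logits))

# Example: a 4-AGV fleet with random state features.
action = place_task(torch.randn(4, STATE_DIM_PER_AGV), num_agvs=4)
print("chosen placement action:", action)
```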
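
Similarly, the minimum guaranteed share idea behind MACP-DRA's fairness mechanism can be sketched as follows. In MACP-DRA the allocation decisions are learned by MADRL agents; this minimal, assumed example only shows the fairness constraint itself: every co-located task first receives a guaranteed floor of the AGV's compute capacity, and the remainder is divided in proportion to unmet demand. The function name and parameters are hypothetical.

```python
# Hypothetical sketch of a minimum-guaranteed-share allocation rule.
# In MACP-DRA the actual shares are negotiated by learned MADRL agents;
# this only illustrates the guaranteed-floor fairness constraint.

def allocate(capacity: float, demands: list[float], min_share: float) -> list[float]:
    """Give every task at least min_share * capacity (capped at its
    demand), then split leftover capacity proportionally to unmet demand."""
    floor = min_share * capacity
    alloc = [min(floor, d) for d in demands]            # guaranteed floor
    leftover = capacity - sum(alloc)
    unmet = [max(d - a, 0.0) for d, a in zip(demands, alloc)]
    total_unmet = sum(unmet)
    if total_unmet > 0 and leftover > 0:
        alloc = [a + leftover * u / total_unmet for a, u in zip(alloc, unmet)]
    return alloc

# Example: 100 compute units, three tasks, each guaranteed at least 20%.
print(allocate(100.0, demands=[70.0, 30.0, 10.0], min_share=0.2))
# -> approximately [61.67, 28.33, 10.0]; no task is starved below its floor.
```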