The advancement towards B5G/6G relies on the synthesis of connect-compute platforms and their use in highly heterogeneous clusters featuring hardware accelerators. While these accelerators offer improved computational efficiency, they still complicate the development, deployment, and orchestration of services, limit flexibility, and demand domain-specific expertise. In AI@EDGE, we target the seamless integration of such diverse platforms for executing AI-related tasks. This paper focuses on the acceleration aspects and presents a MEC system that facilitates AI serving over a cluster of FPGA, GPU, and CPU nodes. To this end, we develop custom tools for generating multi-variant AI models, informative function descriptors, flexible MEC orchestrators, and runtime resource managers. The results show successful interoperability, with generic Python models deployed and migrated across distinct platforms for performance gains on the order of 10x.
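To make the descriptor-driven orchestration idea concrete, the following is a minimal Python sketch of how a multi-variant function descriptor and a platform-aware variant selection could look. All names and fields here (FunctionDescriptor, ModelVariant, select_variant, the latency field) are illustrative assumptions, not the actual AI@EDGE descriptor schema or orchestrator API.

```python
# Illustrative sketch only -- these classes and fields are assumptions,
# not the actual AI@EDGE descriptor schema or orchestrator interface.
from dataclasses import dataclass, field

@dataclass
class ModelVariant:
    """One compiled variant of a generic Python model."""
    platform: str            # e.g. "cpu", "gpu", "fpga"
    artifact_uri: str        # where the compiled model/bitstream lives
    expected_latency_ms: float

@dataclass
class FunctionDescriptor:
    """Informative descriptor advertising all available variants."""
    name: str
    variants: list[ModelVariant] = field(default_factory=list)

def select_variant(desc: FunctionDescriptor, available: set[str]) -> ModelVariant:
    """Pick the lowest-latency variant whose platform exists in the cluster."""
    candidates = [v for v in desc.variants if v.platform in available]
    if not candidates:
        raise RuntimeError(f"No deployable variant for {desc.name}")
    return min(candidates, key=lambda v: v.expected_latency_ms)

# Usage: an orchestrator holding a descriptor with CPU/GPU/FPGA variants
desc = FunctionDescriptor(
    name="object-detection",
    variants=[
        ModelVariant("cpu", "s3://models/det-cpu.onnx", 90.0),
        ModelVariant("gpu", "s3://models/det-gpu.trt", 12.0),
        ModelVariant("fpga", "s3://models/det-fpga.xclbin", 9.0),
    ],
)
print(select_variant(desc, available={"cpu", "gpu"}).platform)  # -> "gpu"
```

In this sketch, migrating a function to a different node type reduces to re-running the selection against a new set of available platforms, which is one plausible way such a descriptor could enable the cross-platform deployment and migration summarized above.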