tilelang.jit.adapter.ctypes.adapter module#
Profiler and torch-conversion utilities.
- class tilelang.jit.adapter.ctypes.adapter.CtypesKernelAdapter(params: List[TensorType], result_idx: List[int], target: str, func_or_mod: Union[PrimFunc, IRModule], host_mod: Optional[IRModule] = None, device_mod: Optional[IRModule] = None, kernel_global_source: Optional[str] = None, verbose: bool = False, pass_configs: Optional[Dict[str, Any]] = None)#
Bases: BaseKernelAdapter
Adapter class that converts TVM/TIR functions to callable CUDA kernels using ctypes.
This adapter handles:
1. Converting TIR functions to compiled CUDA libraries
2. Managing dynamic shapes in tensor operations
3. Wrapping C++ kernels for Python/PyTorch usage
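A minimal usage sketch, assuming a TIR `PrimFunc` named `my_prim_func` and its parameter descriptors (`kernel_params`) have already been built elsewhere; calling the adapter directly relies on behaviour inherited from `BaseKernelAdapter`, which is an assumption not documented on this page.

```python
# Hypothetical sketch: wrapping an existing PrimFunc as a callable CUDA kernel.
# `my_prim_func` and `kernel_params` are placeholders; invoking the adapter
# directly assumes call-through behaviour inherited from BaseKernelAdapter.
import torch
from tilelang.jit.adapter.ctypes.adapter import CtypesKernelAdapter

adapter = CtypesKernelAdapter(
    params=kernel_params,      # tensor-type descriptors of the kernel arguments (placeholder)
    result_idx=[2],            # assume the third argument is the output tensor
    target="cuda",
    func_or_mod=my_prim_func,  # a PrimFunc; an IRModule is also accepted
    verbose=False,
)

a = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
b = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)
c = adapter(a, b)              # the tensor(s) selected by result_idx are returned
```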
- dynamic_symbolic_map: Optional[Dict[Var, Tuple[int, int]]] = None#
- classmethod from_database(params: List[TensorType], result_idx: List[int], target: str, func_or_mod: Union[PrimFunc, IRModule], kernel_global_source: str, kernel_lib_path: str, verbose: bool = False, pass_configs: Optional[Dict[str, Any]] = None)#
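A hedged sketch of rebuilding an adapter from previously cached artifacts (the generated CUDA source and the compiled shared library) so that recompilation can be skipped; the file paths and the `cached_params`/`cached_prim_func` objects below are placeholders, not values from this page.

```python
# Hypothetical sketch: restoring an adapter from a kernel-cache entry instead of
# recompiling. `cached_params`, `cached_prim_func`, and the paths are placeholders.
with open("/path/to/kernel.cu") as f:
    kernel_source = f.read()

adapter = CtypesKernelAdapter.from_database(
    params=cached_params,
    result_idx=[2],
    target="cuda",
    func_or_mod=cached_prim_func,
    kernel_global_source=kernel_source,
    kernel_lib_path="/path/to/kernel.so",
    verbose=False,
)
```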
- get_kernel_source(kernel_only: bool = False)#
Returns the source code of the compiled kernel.
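For inspection, the generated source can be dumped directly; the effect of `kernel_only` shown in the comments is an assumption (device-only code versus the full wrapped source), not confirmed by this page.

```python
# Hypothetical: print the generated source for inspection.
print(adapter.get_kernel_source(kernel_only=True))  # device-side kernel code only (assumed)
print(adapter.get_kernel_source())                  # full wrapped source (assumed)
```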
- ir_module: Optional[IRModule] = None#
- property is_dynamic#
Indicates whether the kernel handles dynamic shapes.
- kernel_global_source: Optional[str] = None#
- lib: Optional[CDLL] = None#
- property lib_code#
Returns the source code of the compiled library.
- property libpath#
Returns the path to the compiled library.
- param_dtypes: Optional[List[torch.dtype]] = None#
- param_shapes: Optional[List[List]] = None#
- pass_configs: Optional[Dict[str, Any]] = None#
- property prim_func: PrimFunc#
Returns the primary TIR function from the IR module.
- property srcpath#
Returns the source path of the compiled library.
- target = 'cuda'#
- wrapped_source: Optional[str] = None#