runtime error

Exit code: 1. Reason: …unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.

config.json: 100%|██████████| 1.13k/1.13k [00:00<00:00, 3.33MB/s]
tokenizer_config.json: 100%|██████████| 1.53k/1.53k [00:00<00:00, 5.95MB/s]
vocab.json: 100%|██████████| 487k/487k [00:00<00:00, 49.4MB/s]
merges.txt: 100%|██████████| 287k/287k [00:00<00:00, 59.6MB/s]
added_tokens.json: 100%|██████████| 50.0/50.0 [00:00<00:00, 277kB/s]
special_tokens_map.json: 100%|██████████| 608/608 [00:00<00:00, 2.67MB/s]
tokenizer.json: 100%|██████████| 2.15M/2.15M [00:00<00:00, 58.5MB/s]
model.safetensors: 100%|██████████| 436M/436M [00:02<00:00, 173MB/s]
Loading weights: 100%|██████████| 148/148 [00:00<00:00, 4183.59it/s]
generation_config.json: 100%|██████████| 159/159 [00:00<00:00, 404kB/s]

Traceback (most recent call last):
  File "/app/app.py", line 25, in <module>
    demo = gr.ChatInterface(
        fn=chat_fn,
        ...<7 lines>...
    )
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'theme'
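The downloads all complete, so the crash is purely the TypeError at the end: the installed Gradio version's ChatInterface constructor does not accept a `theme` keyword (newer Gradio releases appear to, so upgrading gradio, or simply removing the kwarg, are the likely fixes). One defensive pattern is to filter kwargs against the constructor's actual signature before calling it. A minimal sketch — the `ChatInterface` class below is a hypothetical stand-in for an older Gradio, not the real API, and `chat_fn` is a placeholder since the app's code is not shown in the log:

```python
import inspect

def supported_kwargs(func, kwargs):
    """Return only the entries of kwargs that func's signature accepts."""
    params = inspect.signature(func).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)  # func takes **kwargs: pass everything through
    return {k: v for k, v in kwargs.items() if k in params}

# Hypothetical stand-in for an older gr.ChatInterface that lacks `theme`:
class ChatInterface:
    def __init__(self, fn, title=None):
        self.fn = fn
        self.title = title

def chat_fn(message, history):
    # Placeholder echo handler; the real app's chat_fn is not shown in the log.
    return message

wanted = {"title": "Demo", "theme": "soft"}  # `theme` is unsupported here
demo = ChatInterface(chat_fn, **supported_kwargs(ChatInterface.__init__, wanted))
```

With the filter, the unsupported `theme` kwarg is silently dropped instead of raising TypeError. For a Space, pinning a known-good gradio version in requirements.txt is the more robust fix than filtering at runtime.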
